@AndreaIppolitoIppo

That's not the future, that's the present already, and boy does it do wonders ❤

@Matt_Pan

Why are Virtual Threads "the solution for the future"? They've been fully released since JDK 21, and with JDK 24 the thread-pinning issue is resolved as well. I have been using Virtual Threads for a while now.
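For context, here is a minimal sketch of what "using Virtual Threads" looks like since JDK 21 (the task count and sleep duration are arbitrary illustration values):

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class VirtualThreadsDemo {
    /** Runs `count` blocking tasks, one virtual thread each, and returns their results. */
    static List<Integer> runBlockingTasks(int count) throws Exception {
        List<Future<Integer>> futures = new ArrayList<>();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < count; i++) {
                int id = i;
                futures.add(executor.submit(() -> {
                    // The blocking call parks only this virtual thread;
                    // its carrier platform thread stays free for other work.
                    Thread.sleep(Duration.ofMillis(10));
                    return id;
                }));
            }
        } // close() waits for all submitted tasks to finish
        List<Integer> results = new ArrayList<>();
        for (Future<Integer> f : futures) results.add(f.get());
        return results;
    }

    public static void main(String[] args) throws Exception {
        // A thousand concurrent blocking tasks is cheap with virtual threads.
        System.out.println(runBlockingTasks(1_000).size());
    }
}
```

The point is that plain blocking code scales here: no reactive operators needed, each blocked task just parks its virtual thread.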

@michaelschneider603

A blocking call after 250 short requests and responses in a row - Congratulations! :-)

@mytvhome7394

Which is better: reactive programming or Virtual Threads?

@HenrykZ

Response speed depends on:
1. Request Strategy.
2. Network speed: Bandwidth and latency.
3. Server performance: Processing speed and load.
4. Data format/volume: Efficient formats and smaller sizes.
5. Compression: Brotli typically compresses better than Gzip.
6. Protocols: HTTP/2/3 are faster than HTTP/1.1.
7. Caching: Reduces computation time.
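Points 5 and 6 can be combined with Java's built-in HttpClient; a minimal sketch (the URL is a placeholder, and note that java.net.http does not decompress response bodies automatically, so advertising an encoding means decoding the body yourself):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;

public class Http2ClientSketch {
    /** Builds a client that prefers HTTP/2, falling back to HTTP/1.1 if unsupported. */
    static HttpClient buildClient() {
        return HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();
    }

    public static void main(String[] args) {
        HttpClient client = buildClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/")) // placeholder URL
                // Advertise compression; the caller must decode the body itself.
                .header("Accept-Encoding", "br, gzip")
                .build();
        System.out.println(client.version() + " -> " + request.uri());
        // client.send(request, HttpResponse.BodyHandlers.ofInputStream()) would
        // perform the actual network call (omitted here).
    }
}
```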

Request Strategy with Proxy Rotator: Rotate proxies regularly to avoid IP blocking, and implement retry logic with backoff. Avoid blocking unnecessarily by using "evaluated" (dynamic) delay times between requests instead of "fixed" ones when the proxy selector rotates. Track the last usage time of each proxy: with a large proxy list this often means no wait at all, because a proxy that has already sat idle longer than the delay window can be reused immediately.
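The per-proxy delay idea above can be sketched roughly like this (the `Proxy` record, the 1-second `MIN_SPACING_MS`, and the round-robin selection are illustrative assumptions, not from the original comment):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class ProxyRotator {
    record Proxy(String host, int port) {}

    private static final long MIN_SPACING_MS = 1_000; // assumed minimum per-proxy spacing

    private final List<Proxy> proxies;
    // Per-proxy "next free" timestamp, the lastUsedTimestamp state mentioned above.
    private final Map<Proxy, AtomicLong> nextFreeAt = new ConcurrentHashMap<>();
    private final AtomicLong cursor = new AtomicLong();

    public ProxyRotator(List<Proxy> proxies) {
        this.proxies = List.copyOf(proxies);
    }

    /** Round-robin selection; sleeps only for the remaining spacing, often zero. */
    public Proxy next() throws InterruptedException {
        Proxy p = proxies.get((int) (cursor.getAndIncrement() % proxies.size()));
        AtomicLong stamp = nextFreeAt.computeIfAbsent(p, k -> new AtomicLong(0));
        // Atomically reserve the earliest slot at least MIN_SPACING_MS after the last one.
        long slot = stamp.updateAndGet(prev ->
                Math.max(prev + MIN_SPACING_MS, System.currentTimeMillis()));
        long wait = slot - System.currentTimeMillis();
        if (wait > 0) {
            Thread.sleep(wait); // "evaluated" delay: only as long as actually needed
        }
        return p;
    }

    public static void main(String[] args) throws InterruptedException {
        // Made-up proxy addresses for illustration.
        ProxyRotator rotator = new ProxyRotator(List.of(
                new Proxy("10.0.0.1", 8080), new Proxy("10.0.0.2", 8080)));
        for (int i = 0; i < 4; i++) {
            System.out.println(rotator.next()); // 3rd and 4th calls wait ~1s each
        }
    }
}
```

With a pool of, say, 50+ proxies and 1-second spacing, the computed wait is almost always zero, which is exactly the efficiency argument made above; on virtual threads the occasional `Thread.sleep()` parks only the calling task.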

Conclusion (Recommendation)
Highly suitable if:
- You use many proxies (e.g., more than 50).
- You need to bypass IP-based rate limits.
- You aim for high concurrency without introducing a global delay.
Less suitable if:
- You work with a small number of proxies.
- The target servers do not enforce IP-based rate limits.
- You prefer a simple and maintainable solution.
🔧 Recommendation: Use the per-proxy delay strategy for large-scale scraping projects with strict IP rate limits. For smaller scenarios, a global fixed delay is sufficient.


✅ Advantages (Per-Proxy Delay Strategy)
- Per-IP rate limit control: Prevents rapid reuse of the same proxy, ideal for servers with IP-based throttling.
- Maximizes proxy efficiency: Idle proxies can be reused immediately without unnecessary global delay.
- Higher throughput with large proxy pools: Avoids bottlenecks common with fixed delays across all proxies.
- Prevents proxy queue congestion: Reduces the risk of over-waiting when proxies are already eligible again.
- Fair load distribution: Proxies are used more evenly over time, reducing wear on specific endpoints.

❌ Disadvantages
- State tracking overhead: Requires maintaining and updating lastUsedTimestamp per proxy with thread safety in mind.
- Still blocking (virtual threads): Thread.sleep() blocks the virtual thread for the calculated delay, even if minimal.
- Low benefit for small proxy sets: With few proxies and moderate delay, actual gain is negligible — most delays will be zero.
- Doesn’t enforce real rate limits: Enforces only a minimum spacing per proxy, not full request rate per time window.
- Increased implementation complexity: Adds logic and state management compared to a simple fixed-delay approach.

@georgetsiklauri

What?.. "something you want to avoid"? Why?.. There are hundreds of cases where you NEED to have a blocking call. Generalising everything into "you want to avoid blocking calls" is ridiculously wrong. Why not remove sequential execution entirely, then, and only have async calls? This is so wrong.