Spring Framework 5, released in September 2017, introduced Spring WebFlux, a fully reactive stack. In December 2019 Spring Data R2DBC followed: an incubator project to integrate relational databases using a reactive driver. In this blog post I'll show that at high concurrency, WebFlux and R2DBC perform better: response times are lower and throughput is higher. As additional benefits, they use less memory and CPU per processed request, and (since R2DBC means leaving out JPA) your fat JAR becomes a lot smaller. At high concurrency, using WebFlux and R2DBC (if you do not need JPA) is a good idea!
In this blog post I've looked at four implementations:
- Spring Web MVC + JDBC database driver
- Spring Web MVC + R2DBC database driver
- Spring WebFlux + JDBC database driver
- Spring WebFlux + R2DBC database driver
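To make the core difference between the variants concrete, here is a minimal plain-Java sketch (not code from the benchmarked implementations): the blocking style ties up the calling thread for the duration of the "database" round trip, as JDBC does, while the non-blocking style registers a continuation and frees the thread immediately, as a reactive driver does.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.TimeUnit;

public class BlockingVsNonBlocking {

    // Blocking style (Web MVC + JDBC): the calling thread waits for the
    // simulated database round trip and can do nothing else meanwhile.
    static List<String> fetchBlocking() throws InterruptedException {
        Thread.sleep(50); // stands in for a JDBC round trip
        return List.of("row1", "row2");
    }

    // Non-blocking style (WebFlux + R2DBC): the caller gets a future,
    // registers a callback and is free to serve other requests while
    // the simulated I/O completes on a scheduler.
    static CompletableFuture<List<String>> fetchNonBlocking() {
        Executor delayed = CompletableFuture.delayedExecutor(50, TimeUnit.MILLISECONDS);
        return CompletableFuture.supplyAsync(() -> List.of("row1", "row2"), delayed);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchBlocking());
        fetchNonBlocking().thenAccept(System.out::println).join();
    }
}
```

With many concurrent requests, the blocking style needs one thread per in-flight request, while the non-blocking style serves them all from a small event-loop pool; that is the mechanism behind the differences measured below.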
I've varied the number of requests in progress (concurrency) from 4 to 500 in steps of 50 and assigned 4 cores to the load generator and 4 to the service (my laptop has 12 cores). I've configured all connection pools at a size of 100. Why a fixed number of cores and a fixed pool size? In a previous exploration of JDBC vs R2DBC data, changing those variables did not provide much additional insight, so I decided to keep them fixed for this test, which shortened the test run considerably.
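For reference, a pool size of 100 for both stacks can be set in Spring Boot along these lines. This is a sketch using current Spring Boot property names (the `spring.r2dbc.pool.*` keys assume Spring Boot 2.3+); the post does not show its actual configuration.

```properties
# JDBC (HikariCP) pool size for the Web MVC/WebFlux + JDBC variants
spring.datasource.hikari.maximum-pool-size=100

# R2DBC pool size for the Web MVC/WebFlux + R2DBC variants
spring.r2dbc.pool.enabled=true
spring.r2dbc.pool.max-size=100
```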
I did a GET request on the service. The service fetched 10 records from the database and returned them as JSON. First I 'primed' the service for 2 seconds by putting heavy load on it. Next I ran a 2 minute benchmark. I repeated every scenario 20 times (separated by other tests, so not 20 times in a row) and averaged the results. I only looked at runs which did not cause errors. When I increased concurrency beyond 1000, the additional concurrent requests failed for every implementation. The results appeared reproducible.
As the backend database I used Postgres (12.2). I used wrk to benchmark the implementations (it came recommended several times). I parsed the wrk output using the script found here. I measured:
- Response time, as reported by wrk
- Throughput (number of requests), as reported by wrk
- Process CPU usage: user and kernel time (based on /proc/PID/stat)
- Memory usage: private and shared process memory (based on /proc/PID/maps)
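The user and kernel times come from fields 14 and 15 (utime and stime, in clock ticks) of /proc/PID/stat, per the proc(5) man page. The actual test scripts are linked below; as a self-contained sketch of that parsing (the sample line is made up):

```java
public class ProcStat {

    // Returns {utimeTicks, stimeTicks} parsed from one /proc/<pid>/stat line.
    // The comm field (in parentheses) may itself contain spaces, so split
    // only after the last ')'; utime and stime are then the 12th and 13th
    // remaining fields (fields 14 and 15 overall).
    static long[] cpuTicks(String statLine) {
        String rest = statLine.substring(statLine.lastIndexOf(')') + 2);
        String[] f = rest.split(" ");
        return new long[] { Long.parseLong(f[11]), Long.parseLong(f[12]) };
    }

    public static void main(String[] args) {
        String sample = "1234 (java) S 1 1234 1234 0 -1 4194560 500 0 0 0 7000 1500 0 0 20 0 40 0 100 0 0";
        long[] t = cpuTicks(sample);
        // Ticks are converted to seconds by dividing by the clock rate
        // (getconf CLK_TCK, usually 100 on Linux).
        System.out.println("utime=" + t[0] + " stime=" + t[1]);
    }
}
```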
You can view the test script used here. You can view the implementations used here.
You can view the raw data which I used for the graphs here.
It is clear that at higher concurrency, Spring Web MVC + JDBC might not be your best choice. R2DBC clearly gives the better response times at higher concurrency, and Spring WebFlux also does better than a similar implementation using Spring Web MVC.
Similar to the response times, Spring Web MVC with JDBC starts to do worse at higher concurrency. Again R2DBC clearly does best. Moving from Spring Web MVC to Spring WebFlux is not a good idea if your backend still uses JDBC. At low concurrency, Spring Web MVC + JDBC does best.
CPU was measured as CPU time over the entire run: the sum of process user and kernel time.
Web MVC with JDBC used the most CPU at high concurrency. WebFlux with JDBC used the least, but also had the lowest throughput. Looking at the CPU used per request processed gives a measure of efficiency:
R2DBC uses less CPU per processed request than JDBC. WebFlux with JDBC appears (again) not to be a good idea. Web MVC with JDBC gets worse at high concurrency, while the other implementations, which each have at least one non-blocking component, appear more stable. At low concurrency, however, Web MVC + JDBC makes the most efficient use of the available CPU.
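The "CPU per request processed" metric is simply the total CPU time of the run divided by wrk's request count. As a small sketch with made-up numbers (not measurements from this benchmark):

```java
public class CpuPerRequest {

    // CPU time per processed request, in milliseconds.
    // totalTicks: user + kernel ticks over the whole run
    // hz: clock ticks per second (getconf CLK_TCK, usually 100 on Linux)
    // requests: total request count as reported by wrk
    static double cpuMsPerRequest(long totalTicks, long hz, long requests) {
        double cpuSeconds = (double) totalTicks / hz;
        return cpuSeconds * 1000.0 / requests;
    }

    public static void main(String[] args) {
        // Illustrative numbers: 120 CPU-seconds spent over 240,000 requests.
        System.out.println(cpuMsPerRequest(12_000, 100, 240_000)); // 0.5 ms/request
    }
}
```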
Memory was measured as process private memory at the end of the run. Memory usage depends on garbage collection behavior. G1GC was used on JDK 11.0.6. Xms was 0.5 GB (the default: 1/64 of my 32 GB) and Xmx was 8 GB (the default: 1/4 of my 32 GB).
WebFlux appears to be more stable in its memory usage than Web MVC, which uses more memory at higher concurrency. When using WebFlux, also using R2DBC gives the lowest memory usage at high concurrency. At low concurrency Web MVC + JDBC does best, but at higher concurrency WebFlux + R2DBC uses the least memory per processed request.
Fat JAR size
The graph below shows that JPA is a large dependency. Since you cannot use JPA with R2DBC and thus leave it out, your fat JAR size drops by on the order of 15 MB!
R2DBC and WebFlux, a good idea at high concurrency!
- At high concurrency, the benefits of using R2DBC instead of JDBC and WebFlux instead of Web MVC are obvious.
- Less CPU is required to process a single request.
- Less memory required to process a single request.
- Response times at high concurrency are better.
- Throughput at high concurrency is better.
- The fat JAR size is smaller (no JPA with R2DBC).
- When using only blocking components, memory and CPU usage become less efficient at high concurrency.
- WebFlux with JDBC does not appear to be a good idea. Web MVC with R2DBC works better at high concurrency than Web MVC with JDBC.
- You're not required to have a completely non-blocking stack to reap the benefits of R2DBC. In the case of Spring, however, it is best combined with WebFlux.
- At low concurrency (somewhere below 200 concurrent requests), Web MVC and JDBC might give better results. Test this to determine your own break-even point!
Some challenges when using R2DBC
- JPA cannot deal with reactive repositories such as those provided by Spring Data R2DBC. This means you will have to do more manually when using R2DBC.
- There are other reactive drivers around, such as the Quarkus Reactive Postgres client (which uses Vert.x). It does not use R2DBC and has different performance characteristics (see here).
- Limited availability
Not all relational databases have reactive drivers available. Oracle, for example, does not (yet?) have an R2DBC implementation.
- Application servers still depend on JDBC.
Do people still use those for non-legacy applications in this Kubernetes age?
- When Java fibers are introduced (Project Loom, possibly Java 15), the driver landscape might change again and R2DBC might not become JDBC's successor after all.
9 thoughts on “Spring: Blocking vs non-blocking: R2DBC vs JDBC and WebFlux vs Web MVC”
I want to know how non-blocking works internally. Is there any reference? How can I manually debug and check how it works compared to blocking?
Actually, I don't agree with these results.
I got different results: https://email@example.com/r2dbc-vs-jdbc-19ac3c99fafa
Even the R2DBC owner shared these results: https://github.com/r2dbc/r2dbc-postgresql/pull/158
I did every test 20 times and the results were reproducible (with a small standard deviation). There are however many variables involved, and it is difficult to directly compare different tests. It would be interesting to find out why there is a difference in results. For example, I assigned 4 CPUs to the load generator and 4 CPUs to the service instead of 1.
Check this https://www.techempower.com/benchmarks/
Great job! What have you used to plot that data into those charts?
Hi. Thank you! I used pyplot. It does require writing some code, but the graphs are easily reproducible and consistent. You can find the code here: https://github.com/MaartenSmeets/db_perftest/blob/r2dbc/test_scripts/analyze/graphs.ipynb
Great analysis, but 100 connections in a pool is way too much, according to the nice people who created HikariCP.
They would have suggested something like 10 max for your 4 cores, according to https://github.com/brettwooldridge/HikariCP/wiki/About-Pool-Sizing
Hi. Thank you! I also looked at the effects of different connection pool sizes in the following blog (which was more of a data exploration): https://technology.amis.nl/2020/03/27/performance-of-relational-database-drivers-r2dbc-vs-jdbc/. I noticed though that the connection pool size in these tests appeared to have little effect on performance. Probably the pool size was not the bottleneck. In actual production environments, I would also try to keep it much lower.
"When Java Fibers will be introduced (Project Loom, could be Java 15), the driver landscape might change again and R2DBC might not become JDBCs successor after all."
=> That would be awesome, but I am a bit more skeptical 🙂