Today we're going to benchmark CockroachDB, one of the databases I used this year to build an embedded application. I chose CockroachDB because I didn't want SQLite or any other embedded database that lacks tooling or cannot be accessed by multiple programs at the same time. With CockroachDB I only need to distribute my application binary and the cockroachdb binary, and that's it. Offline backup is also quite simple: just rsync the data directory, or do a manual row export as with other PostgreSQL-like databases. Scaling out is also quite simple.
Here's the result:
Disk Type | Insert Dur (s) | Update Dur (s) | Select Dur (s) | SelMany Dur (s) | Insert Q/s | Update Q/s | Select1 Q/s | SelMany Row/s | SelMany Q/s |
---|---|---|---|---|---|---|---|---|---|
TMPFS (RAM) | 1.3 | 2.1 | 4.9 | 1.5 | 31419 | 19275 | 81274 | 8194872 | 20487 |
NVME DA 1TB | 2.7 | 3.7 | 5.0 | 1.5 | 15072 | 10698 | 80558 | 8019435 | 20048 |
NVMe Team 1TB | 3.8 | 3.7 | 4.9 | 1.5 | 10569 | 10678 | 81820 | 8209889 | 20524 |
SSD GALAX 250GB | 8.0 | 7.1 | 5.0 | 1.5 | 4980 | 5655 | 79877 | 7926162 | 19815 |
HDD WD 8TB | 32.1 | 31.7 | 4.9 | 3.9 | 1244 | 1262 | 81561 | 3075780 | 7689 |
From the table we can see that TMPFS (RAM, obviously) is the fastest in all cases, especially in the insert and update benchmarks; NVMe is faster than SSD, and the standard magnetic HDD is the slowest. Disk type doesn't have much effect on the query side, though, probably because the dataset is so small that it all fits in the cache.
The test was done with 100 goroutines, with 400 records inserted/updated per goroutine; each record is only an integer and a string. Queries were done 10x for select and 300x for select-many. As shown there, sending small queries reaches a limit of around 80K rps, inserts can reach 31K rps, and multirow queries/updates can reach ~20K rps.
The repository is here if you want to run the benchmark on your own machine.