2022-12-24

CockroachDB Benchmark on Different Disk Types

Today we're going to benchmark CockroachDB, one of the databases I've been using this year to build an embedded application. I use CockroachDB because I don't want SQLite or any other embedded database that lacks tooling or can't be accessed by multiple programs at the same time. With CockroachDB I only need to distribute my application binary plus the cockroachdb binary, and that's it. Offline backup is also quite simple: just rsync the directory, or do a manual row export like with any other PostgreSQL-like database. Scaling out is also quite simple.
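
To illustrate the pattern (a minimal sketch, not the actual application code; the paths, flags, and the lib/pq driver choice here are my assumptions): the application spawns the bundled cockroach binary as an insecure local single-node cluster, then talks to it like any PostgreSQL server.

package main

import (
	"database/sql"
	"log"
	"os/exec"
	"time"

	_ "github.com/lib/pq" // any PostgreSQL-compatible driver works
)

func main() {
	// Spawn the bundled cockroach binary next to the app binary
	// (illustrative paths; single-node, insecure, local-only).
	crdb := exec.Command("./cockroach", "start-single-node",
		"--insecure", "--store=./crdb-data", "--listen-addr=localhost:26257")
	if err := crdb.Start(); err != nil {
		log.Fatal(err)
	}
	time.Sleep(3 * time.Second) // crude readiness wait; poll Ping() in real code

	// Connect like to any PostgreSQL-compatible database.
	db, err := sql.Open("postgres",
		"postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	log.Println("embedded cockroach is up")
}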

Here's the result:

Disk Type       | Ins Dur (s) | Upd Dur (s) | Sel Dur (s) | Many Dur (s) | Insert Q/s | Update Q/s | Select1 Q/s | SelMany Row/s | SelMany Q/s
TMPFS (RAM)     |         1.3 |         2.1 |         4.9 |          1.5 |      31419 |      19275 |       81274 |       8194872 |       20487
NVMe DA 1TB     |         2.7 |         3.7 |         5.0 |          1.5 |      15072 |      10698 |       80558 |       8019435 |       20048
NVMe Team 1TB   |         3.8 |         3.7 |         4.9 |          1.5 |      10569 |      10678 |       81820 |       8209889 |       20524
SSD GALAX 250GB |         8.0 |         7.1 |         5.0 |          1.5 |       4980 |       5655 |       79877 |       7926162 |       19815
HDD WD 8TB      |        32.1 |        31.7 |         4.9 |          3.9 |       1244 |       1262 |       81561 |       3075780 |        7689

From the table we can see that TMPFS (RAM, obviously) is the fastest in every case, especially the insert and update benchmarks; NVMe is faster than SSD, and the standard magnetic HDD is the slowest. The query side barely differs between disks, though, probably because the dataset is so small that it all fits in the cache.

The test was done with 100 goroutines and 400 records inserted/updated per goroutine; each record is just an integer and a string. Selects were run 10x and select-many 300x. Small single-row selects hit a limit of about 80K q/s, inserts can reach 31K q/s, and multirow queries/updates reach ~20K q/s. A rough sketch of the insert/update phase is shown below.
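
Here is that sketch (the real harness is in the repository linked below; the table name, schema, and connection string here are illustrative):

package main

import (
	"database/sql"
	"fmt"
	"log"
	"sync"
	"time"

	_ "github.com/lib/pq"
)

const (
	workers   = 100 // goroutines, as in the benchmark
	perWorker = 400 // records per goroutine
)

func main() {
	db, err := sql.Open("postgres",
		"postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	db.SetMaxOpenConns(workers)

	// One integer and one string per record, as in the benchmark.
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS bench (id INT PRIMARY KEY, val STRING)`); err != nil {
		log.Fatal(err)
	}

	start := time.Now()
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			for i := 0; i < perWorker; i++ {
				id := w*perWorker + i
				// UPSERT covers both the insert and the update run.
				if _, err := db.Exec(`UPSERT INTO bench (id, val) VALUES ($1, $2)`,
					id, fmt.Sprintf("row-%d", id)); err != nil {
					log.Println(err)
				}
			}
		}(w)
	}
	wg.Wait()

	elapsed := time.Since(start).Seconds()
	total := workers * perWorker
	fmt.Printf("%d rows in %.1fs = %.0f q/s\n", total, elapsed, float64(total)/elapsed)
}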

The repository is here if you want to run the benchmark on your own machine.

2021-05-27

Benchmarking Disks on VPS: SSD, NVMe or HDD

Are you sure you're getting an SSD when you rent a VPS? Here's how to make sure:

sudo apt install fio
# 4 GiB file, 4 KiB random I/O, 75% reads, direct I/O, queue depth 64:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
    --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 \
    --size=4G --readwrite=randrw --rwmixread=75
rm random_read_write.fio  # clean up the test file
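
If you want to compare machines from a script, fio can also emit JSON; a variant I'd suggest (assuming jq is installed) that prints just the read and write IOPS:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
    --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 \
    --size=4G --readwrite=randrw --rwmixread=75 \
    --output-format=json | jq '.jobs[0].read.iops, .jobs[0].write.iops'
rm random_read_write.fio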

The result would be something like this:

SSD:

test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.16
Starting 1 process
test: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=83.7MiB/s,w=27.3MiB/s][r=21.4k,w=6993 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=1436574: Thu May 27 19:01:27 2021
  read: IOPS=18.2k, BW=70.0MiB/s (74.4MB/s)(3070MiB/43241msec)
   bw (  KiB/s): min=  456, max=130000, per=99.85%, avg=72589.30, stdev=36572.29, samples=86
   iops        : min=  114, max=32500, avg=18147.33, stdev=9143.07, samples=86
  write: IOPS=6074, BW=23.7MiB/s (24.9MB/s)(1026MiB/43241msec); 0 zone resets
   bw (  KiB/s): min=  176, max=42936, per=99.85%, avg=24259.07, stdev=12200.49, samples=86
   iops        : min=   44, max=10734, avg=6064.77, stdev=3050.12, samples=86
  cpu          : usr=3.35%, sys=13.20%, ctx=781969, majf=0, minf=10
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=785920,262656,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=70.0MiB/s (74.4MB/s), 70.0MiB/s-70.0MiB/s (74.4MB/s-74.4MB/s), io=3070MiB (3219MB), run=43241-43241msec
  WRITE: bw=23.7MiB/s (24.9MB/s), 23.7MiB/s-23.7MiB/s (24.9MB/s-24.9MB/s), io=1026MiB (1076MB), run=43241-43241msec

Disk stats (read/write):
  sdh: ios=782456/263158, merge=1293/2586, ticks=1838928/822350, in_queue=2134502, util=99.12%

HDD:

too slow to finish (at ~1 MB/s it would take about 40 minutes)

VPS:

test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.16
Starting 1 process
test: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=11.2MiB/s,w=3924KiB/s][r=2873,w=981 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=3829977: Thu May 27 20:04:10 2021
  read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(3070MiB/259388msec)
   bw (  KiB/s): min= 7744, max=144776, per=99.95%, avg=12112.97, stdev=7631.07, samples=518
   iops        : min= 1936, max=36194, avg=3028.16, stdev=1907.77, samples=518
  write: IOPS=1012, BW=4050KiB/s (4148kB/s)(1026MiB/259388msec); 0 zone resets
   bw (  KiB/s): min= 2844, max=47936, per=99.94%, avg=4047.41, stdev=2504.77, samples=518
   iops        : min=  711, max=11984, avg=1011.83, stdev=626.19, samples=518
  cpu          : usr=2.89%, sys=10.00%, ctx=605914, majf=0, minf=8
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=785920,262656,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=11.8MiB/s (12.4MB/s), 11.8MiB/s-11.8MiB/s (12.4MB/s-12.4MB/s), io=3070MiB (3219MB), run=259388-259388msec
  WRITE: bw=4050KiB/s (4148kB/s), 4050KiB/s-4050KiB/s (4148kB/s-4148kB/s), io=1026MiB (1076MB), run=259388-259388msec

Disk stats (read/write):
  sda: ios=785732/271244, merge=0/973, ticks=795112/15862499, in_queue=15747316, util=99.75%


Conclusion

If your VPS is faster than the HDD results shown here, there's a good chance it's backed by an SSD, or at least by RAID. But there's another possibility: the provider is throttling the shared VPS so it doesn't disturb other customers' QoS.

Update

For comparison, a local NVMe would look something like this:

   read: IOPS=61.6k, BW=241MiB/s (252MB/s)(3070MiB/12752msec)
  write: IOPS=20.6k, BW=80.5MiB/s (84.4MB/s)(1026MiB/12752msec); 0 zone resets
   read: IOPS=40.2k, BW=157MiB/s (165MB/s)(3070MiB/19546msec)
  write: IOPS=13.4k, BW=52.5MiB/s (55.0MB/s)(1026MiB/19546msec); 0 zone resets

For comparison, a local HDD with bad sectors would look something like this:
 
   read: IOPS=217, BW=868KiB/s (889kB/s)(23.6MiB/27852msec)
  write: IOPS=74, BW=296KiB/s (304kB/s)(8256KiB/27852msec); 0 zone resets
 
For comparison, a VPS that claims to be SSD-backed would look something like this:
 
   read: IOPS=2908, BW=11.4MiB/s (11.9MB/s)(289MiB/25446msec)
  write: IOPS=965, BW=3861KiB/s (3954kB/s)(95.9MiB/25446msec); 0 zone resets

   read: IOPS=3182, BW=12.4MiB/s (13.0MB/s)(1728MiB/139002msec)
  write: IOPS=1066, BW=4268KiB/s (4370kB/s)(579MiB/139002msec); 0 zone resets