MemSQL 6.7.16
10000 times - Insert
raw: 1.54s 153927 ns/op 592 B/op 15 allocs/op
orm: 1.60s 160195 ns/op 1465 B/op 39 allocs/op
qbs: 1.73s 172760 ns/op 4595 B/op 107 allocs/op
modl: 2.26s 225537 ns/op 1352 B/op 31 allocs/op
gorp: 2.38s 238256 ns/op 1424 B/op 32 allocs/op
xorm: 2.44s 243955 ns/op 2594 B/op 69 allocs/op
hood: 2.75s 275120 ns/op 10812 B/op 161 allocs/op
upper.io: 2.99s 299289 ns/op 11829 B/op 644 allocs/op
gorm: 4.07s 407045 ns/op 7716 B/op 151 allocs/op
2500 times - MultiInsert 100 row
orm: 3.23s 1290337 ns/op 136250 B/op 1537 allocs/op
raw: 3.42s 1366141 ns/op 140920 B/op 817 allocs/op
xorm: 5.11s 2044104 ns/op 267877 B/op 4671 allocs/op
hood: Not support multi insert
modl: Not support multi insert
qbs: Not support multi insert
upper.io: Not support multi insert
gorp: Not support multi insert
gorm: Not support multi insert
10000 times - Update
raw: 1.43s 142733 ns/op 656 B/op 17 allocs/op
orm: 1.50s 150464 ns/op 1424 B/op 40 allocs/op
qbs: 1.71s 170803 ns/op 4594 B/op 107 allocs/op
modl: 2.11s 211200 ns/op 1528 B/op 39 allocs/op
gorp: 2.14s 213744 ns/op 1576 B/op 38 allocs/op
hood: 2.65s 265383 ns/op 10812 B/op 161 allocs/op
xorm: 3.01s 300524 ns/op 2697 B/op 103 allocs/op
gorm: 8.41s 841018 ns/op 18677 B/op 389 allocs/op
upper.io: 0.00s 0.03 ns/op 0 B/op 0 allocs/op
20000 times - Read
raw: 3.45s 172556 ns/op 1472 B/op 40 allocs/op
orm: 3.81s 190347 ns/op 2649 B/op 96 allocs/op
modl: 6.61s 330343 ns/op 1912 B/op 48 allocs/op
gorp: 6.85s 342620 ns/op 1912 B/op 55 allocs/op
hood: 7.02s 350974 ns/op 4098 B/op 54 allocs/op
qbs: 7.46s 373004 ns/op 6574 B/op 175 allocs/op
upper.io: 8.07s 403673 ns/op 10089 B/op 456 allocs/op
gorm: 8.35s 417320 ns/op 12195 B/op 242 allocs/op
xorm: 9.18s 459213 ns/op 9390 B/op 263 allocs/op
10000 times - MultiRead limit 100
raw: 6.42s 642379 ns/op 34746 B/op 1323 allocs/op
modl: 7.59s 759230 ns/op 49902 B/op 1724 allocs/op
gorp: 7.74s 773598 ns/op 63723 B/op 1912 allocs/op
orm: 9.15s 914736 ns/op 85050 B/op 4286 allocs/op
qbs: 10.16s 1016412 ns/op 165861 B/op 6429 allocs/op
upper.io: 10.69s 1068507 ns/op 83801 B/op 2055 allocs/op
hood: 12.43s 1243334 ns/op 136238 B/op 6364 allocs/op
gorm: 16.23s 1622574 ns/op 254781 B/op 6229 allocs/op
xorm: 17.39s 1738862 ns/op 180066 B/op 8093 allocs/op
upper.io/db can't generate a more suitable query than its current implementation (SET id = ? WHERE id = ?); since that statement rewrites the id column (the shard key), the update fails on MemSQL, hence the 0.00s result above.
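For illustration, here is a minimal sketch of the difference; the table and column names are hypothetical, not the benchmark's real schema, and the DSN is the same one used for the benchmark runs:

// Sketch only: why an UPDATE of the form "SET id = ? WHERE id = ?" fails on
// MemSQL while a normal column update succeeds. MemSQL rejects updates to the
// shard key, and the primary key (id) is the shard key by default.
package main

import (
    "database/sql"
    "log"

    _ "github.com/go-sql-driver/mysql"
)

func main() {
    db, err := sql.Open("mysql", "root:@tcp(127.0.0.1:3306)/orm_bench?charset=utf8")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Updating a non-key column works on both MemSQL and MySQL.
    if _, err := db.Exec(`UPDATE models SET title = ? WHERE id = ?`, "updated", 1); err != nil {
        log.Println("normal update:", err)
    }

    // The shape upper.io generates here rewrites the id itself,
    // so MemSQL rejects it and the benchmark's update never runs.
    if _, err := db.Exec(`UPDATE models SET id = ? WHERE id = ?`, 1, 1); err != nil {
        log.Println("shard-key update:", err)
    }
}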
Raw queries are still the fastest, and Beego's built-in orm is quite good except for the MultiRead part. Compared to MySQL with 5 iterations:
MySQL 5.7.28
10000 times - Insert
raw: 81.79s 8179045 ns/op 592 B/op 15 allocs/op
qbs: 86.66s 8666472 ns/op 4595 B/op 107 allocs/op
gorp: 88.69s 8868999 ns/op 1424 B/op 32 allocs/op
orm: 90.29s 9028890 ns/op 1464 B/op 39 allocs/op
hood: 91.96s 9196392 ns/op 10814 B/op 161 allocs/op
gorm: 93.31s 9331332 ns/op 7718 B/op 151 allocs/op
modl: 93.63s 9362930 ns/op 1352 B/op 31 allocs/op
upper.io: 95.56s 9556491 ns/op 11830 B/op 644 allocs/op
xorm: 96.82s 9682337 ns/op 2594 B/op 69 allocs/op
2500 times - MultiInsert 100 row
raw: 32.92s 13167271 ns/op 140922 B/op 818 allocs/op
orm: 35.29s 14117094 ns/op 136296 B/op 1537 allocs/op
xorm: 39.70s 15879522 ns/op 267943 B/op 4671 allocs/op
qbs: Not support multi insert
hood: Not support multi insert
modl: Not support multi insert
gorm: Not support multi insert
upper.io: Not support multi insert
gorp: Not support multi insert
10000 times - Update
upper.io: 3.08s 307724 ns/op 17735 B/op 951 allocs/op
qbs: 87.03s 8703447 ns/op 4594 B/op 107 allocs/op
gorp: 87.76s 8776111 ns/op 1576 B/op 38 allocs/op
hood: 90.29s 9028560 ns/op 10813 B/op 161 allocs/op
raw: 91.07s 9107205 ns/op 656 B/op 17 allocs/op
modl: 92.25s 9225025 ns/op 1528 B/op 39 allocs/op
xorm: 96.47s 9646503 ns/op 2697 B/op 103 allocs/op
gorm: 96.90s 9690444 ns/op 18676 B/op 389 allocs/op
orm: 99.90s 9989899 ns/op 1424 B/op 40 allocs/op
20000 times - Read
raw: 1.70s 84844 ns/op 1472 B/op 40 allocs/op
orm: 1.91s 95393 ns/op 2649 B/op 96 allocs/op
qbs: 1.92s 96013 ns/op 6576 B/op 175 allocs/op
hood: 2.89s 144473 ns/op 4097 B/op 54 allocs/op
gorp: 2.95s 147612 ns/op 1912 B/op 55 allocs/op
modl: 2.99s 149255 ns/op 1912 B/op 48 allocs/op
upper.io: 4.33s 216621 ns/op 10089 B/op 456 allocs/op
gorm: 4.35s 217446 ns/op 12195 B/op 242 allocs/op
xorm: 4.68s 234212 ns/op 9392 B/op 263 allocs/op
10000 times - MultiRead limit 100
raw: 3.48s 348355 ns/op 34744 B/op 1323 allocs/op
modl: 4.56s 455775 ns/op 49904 B/op 1724 allocs/op
gorp: 4.94s 494206 ns/op 63725 B/op 1912 allocs/op
orm: 5.97s 597024 ns/op 85060 B/op 4286 allocs/op
upper.io: 6.64s 664491 ns/op 83803 B/op 2055 allocs/op
qbs: 7.29s 729417 ns/op 165864 B/op 6429 allocs/op
hood: 8.32s 831645 ns/op 136237 B/op 6364 allocs/op
gorm: 11.53s 1152701 ns/op 254774 B/op 6228 allocs/op
xorm: 12.97s 1296585 ns/op 180067 B/op 8093 allocs/op
The ORM overhead on MySQL is not significant. Note that insert and update are slow here because the transaction isolation level is not set to READ COMMITTED.
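For reference, a minimal sketch (not part of the benchmark itself) of how READ COMMITTED could be requested from Go, either per session or per transaction:

// Sketch only: two ways to get READ COMMITTED via database/sql.
package main

import (
    "context"
    "database/sql"
    "log"

    _ "github.com/go-sql-driver/mysql"
)

func main() {
    db, err := sql.Open("mysql", "root:@tcp(127.0.0.1:3306)/orm_bench?charset=utf8")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Per session: database/sql pools connections, so this only affects
    // the single connection the statement happens to run on.
    if _, err := db.Exec(`SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED`); err != nil {
        log.Fatal(err)
    }

    // Per transaction: ask the driver for READ COMMITTED on one tx.
    tx, err := db.BeginTx(context.Background(), &sql.TxOptions{Isolation: sql.LevelReadCommitted})
    if err != nil {
        log.Fatal(err)
    }
    defer tx.Rollback()
    // ... run the insert/update statements on tx here ...
    if err := tx.Commit(); err != nil {
        log.Fatal(err)
    }
}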
MemSQL 7.0.9
10000 times - Insert
raw: 1.58s 158308 ns/op 592 B/op 15 allocs/op
orm: 1.67s 166718 ns/op 1464 B/op 39 allocs/op
qbs: 1.87s 186627 ns/op 4595 B/op 107 allocs/op
modl: 2.29s 228827 ns/op 1352 B/op 31 allocs/op
gorp: 2.45s 244721 ns/op 1424 B/op 32 allocs/op
xorm: 2.56s 255536 ns/op 2595 B/op 69 allocs/op
hood: 2.72s 271565 ns/op 10814 B/op 161 allocs/op
upper.io: 3.00s 300482 ns/op 11828 B/op 644 allocs/op
gorm: 4.15s 414676 ns/op 7717 B/op 151 allocs/op
2500 times - MultiInsert 100 row
orm: 3.27s 1306549 ns/op 136254 B/op 1537 allocs/op
raw: 3.31s 1324971 ns/op 140920 B/op 817 allocs/op
xorm: 5.19s 2077746 ns/op 267822 B/op 4671 allocs/op
modl: Not support multi insert
gorm: Not support multi insert
hood: Not support multi insert
upper.io: Not support multi insert
qbs: Not support multi insert
gorp: Not support multi insert
10000 times - Update
raw: 1.57s 156799 ns/op 656 B/op 17 allocs/op
orm: 1.62s 161919 ns/op 1425 B/op 40 allocs/op
qbs: 1.76s 176142 ns/op 4595 B/op 107 allocs/op
modl: 2.23s 222540 ns/op 1528 B/op 39 allocs/op
gorp: 2.29s 228606 ns/op 1576 B/op 38 allocs/op
hood: 2.67s 266824 ns/op 10813 B/op 161 allocs/op
xorm: 3.29s 329236 ns/op 2697 B/op 103 allocs/op
gorm: 8.83s 882594 ns/op 18677 B/op 389 allocs/op
upper.io: 0.00s 0.04 ns/op 0 B/op 0 allocs/op
20000 times - Read
raw: 3.74s 186956 ns/op 1472 B/op 40 allocs/op
orm: 3.88s 194016 ns/op 2649 B/op 96 allocs/op
modl: 6.51s 325522 ns/op 1912 B/op 48 allocs/op
gorp: 6.83s 341292 ns/op 1912 B/op 55 allocs/op
qbs: 7.35s 367283 ns/op 6574 B/op 175 allocs/op
hood: 7.73s 386417 ns/op 4098 B/op 54 allocs/op
upper.io: 8.76s 438185 ns/op 10089 B/op 456 allocs/op
gorm: 9.33s 466715 ns/op 12194 B/op 242 allocs/op
xorm: 9.89s 494368 ns/op 9390 B/op 263 allocs/op
10000 times - MultiRead limit 100
raw: 6.43s 642713 ns/op 34746 B/op 1323 allocs/op
modl: 7.49s 749218 ns/op 49902 B/op 1724 allocs/op
gorp: 7.63s 763255 ns/op 63728 B/op 1912 allocs/op
orm: 8.95s 895022 ns/op 85050 B/op 4286 allocs/op
qbs: 10.23s 1023162 ns/op 165861 B/op 6429 allocs/op
upper.io: 11.28s 1127575 ns/op 83801 B/op 2055 allocs/op
hood: 12.62s 1262190 ns/op 136241 B/op 6364 allocs/op
gorm: 16.65s 1665189 ns/op 254772 B/op 6228 allocs/op
xorm: 17.69s 1768666 ns/op 180053 B/op 8093 allocs/op
There seems to be no significant performance difference between MemSQL 6.7 and 7.0 in this case. But what if we put MemSQL inside Docker: how much overhead does that add?
MemSQL 7.0.9 inside docker with NAT
docker run -i --init --name memsql1 -e LICENSE_KEY=$LICENSE_KEY -p 3306:3306 -p 8082:8080 memsql/cluster-in-a-box
10000 times - Insert
raw: 2.29s 228825 ns/op 592 B/op 15 allocs/op
orm: 2.39s 238694 ns/op 1465 B/op 39 allocs/op
qbs: 2.58s 258331 ns/op 4595 B/op 107 allocs/op
modl: 3.60s 360296 ns/op 1352 B/op 31 allocs/op
xorm: 3.76s 376043 ns/op 2594 B/op 69 allocs/op
gorp: 3.77s 377271 ns/op 1424 B/op 32 allocs/op
hood: 4.21s 421357 ns/op 10813 B/op 161 allocs/op
upper.io: 4.41s 441370 ns/op 11829 B/op 644 allocs/op
gorm: 6.68s 668315 ns/op 7717 B/op 151 allocs/op
2500 times - MultiInsert 100 row
orm: 4.38s 1750560 ns/op 136321 B/op 1537 allocs/op
raw: 4.73s 1893901 ns/op 140920 B/op 817 allocs/op
xorm: 6.34s 2537707 ns/op 267921 B/op 4671 allocs/op
gorp: Not support multi insert
modl: Not support multi insert
qbs: Not support multi insert
gorm: Not support multi insert
hood: Not support multi insert
upper.io: Not support multi insert
10000 times - Update
raw: 2.28s 228252 ns/op 656 B/op 17 allocs/op
orm: 2.37s 237145 ns/op 1424 B/op 40 allocs/op
qbs: 2.45s 244695 ns/op 4594 B/op 107 allocs/op
modl: 3.52s 351694 ns/op 1528 B/op 39 allocs/op
gorp: 3.55s 354756 ns/op 1576 B/op 38 allocs/op
hood: 4.08s 407915 ns/op 10812 B/op 161 allocs/op
xorm: 4.86s 486246 ns/op 2697 B/op 103 allocs/op
gorm: 13.55s 1354767 ns/op 18679 B/op 389 allocs/op
upper.io: 0.00s 0.03 ns/op 0 B/op 0 allocs/op
20000 times - Read
raw: 5.37s 268278 ns/op 1472 B/op 40 allocs/op
orm: 5.40s 269883 ns/op 2649 B/op 96 allocs/op
qbs: 10.20s 509797 ns/op 6574 B/op 175 allocs/op
modl: 11.03s 551638 ns/op 1912 B/op 48 allocs/op
gorp: 11.49s 574716 ns/op 1912 B/op 55 allocs/op
hood: 11.76s 587919 ns/op 4097 B/op 54 allocs/op
upper.io: 13.29s 664267 ns/op 10089 B/op 456 allocs/op
gorm: 13.60s 679870 ns/op 12194 B/op 242 allocs/op
xorm: 14.83s 741376 ns/op 9390 B/op 263 allocs/op
10000 times - MultiRead limit 100
raw: 8.34s 833549 ns/op 34747 B/op 1323 allocs/op
modl: 9.73s 972505 ns/op 49902 B/op 1724 allocs/op
gorp: 9.95s 994607 ns/op 63725 B/op 1912 allocs/op
orm: 11.24s 1123517 ns/op 85058 B/op 4286 allocs/op
qbs: 12.12s 1212164 ns/op 165860 B/op 6429 allocs/op
upper.io: 13.96s 1396187 ns/op 83800 B/op 2055 allocs/op
hood: 16.05s 1604510 ns/op 136241 B/op 6364 allocs/op
gorm: 20.23s 2023026 ns/op 254764 B/op 6228 allocs/op
xorm: 20.45s 2044591 ns/op 180065 B/op 8093 allocs/op
This shows that running MemSQL inside Docker carries roughly a 44% performance penalty, which seems to be the NAT bottleneck. Let's try again using the host network:
MemSQL 7.0.9 inside docker with host network
docker run -i --init --name memsql1 -e LICENSE_KEY=$LICENSE_KEY --net=host memsql/cluster-in-a-box
10000 times - Insert
raw: 1.84s 184249 ns/op 592 B/op 15 allocs/op
orm: 1.98s 197552 ns/op 1465 B/op 39 allocs/op
qbs: 2.02s 201505 ns/op 4595 B/op 107 allocs/op
gorp: 2.62s 262421 ns/op 1424 B/op 32 allocs/op
modl: 2.63s 263243 ns/op 1352 B/op 31 allocs/op
xorm: 2.87s 287027 ns/op 2594 B/op 69 allocs/op
hood: 3.18s 317792 ns/op 10814 B/op 161 allocs/op
upper.io: 3.50s 350001 ns/op 11828 B/op 644 allocs/op
gorm: 4.69s 469475 ns/op 7716 B/op 151 allocs/op
2500 times - MultiInsert 100 row
orm: 4.02s 1606318 ns/op 136207 B/op 1537 allocs/op
raw: 4.26s 1702967 ns/op 140921 B/op 818 allocs/op
xorm: 6.16s 2463782 ns/op 267869 B/op 4671 allocs/op
gorp: Not support multi insert
gorm: Not support multi insert
qbs: Not support multi insert
modl: Not support multi insert
upper.io: Not support multi insert
hood: Not support multi insert
10000 times - Update
raw: 1.85s 184579 ns/op 656 B/op 17 allocs/op
orm: 1.97s 197037 ns/op 1425 B/op 40 allocs/op
qbs: 1.97s 197209 ns/op 4595 B/op 107 allocs/op
modl: 2.60s 259853 ns/op 1528 B/op 39 allocs/op
gorp: 2.61s 260791 ns/op 1576 B/op 38 allocs/op
hood: 3.11s 311218 ns/op 10814 B/op 161 allocs/op
xorm: 3.75s 374953 ns/op 2697 B/op 103 allocs/op
gorm: 10.18s 1017593 ns/op 18676 B/op 389 allocs/op
upper.io: 0.00s 0.04 ns/op 0 B/op 0 allocs/op
20000 times - Read
raw: 4.34s 217164 ns/op 1472 B/op 40 allocs/op
orm: 4.51s 225554 ns/op 2649 B/op 96 allocs/op
gorp: 8.39s 419645 ns/op 1912 B/op 55 allocs/op
qbs: 8.79s 439281 ns/op 6574 B/op 175 allocs/op
hood: 8.97s 448493 ns/op 4098 B/op 54 allocs/op
modl: 9.14s 456942 ns/op 1912 B/op 48 allocs/op
upper.io: 10.57s 528673 ns/op 10089 B/op 456 allocs/op
gorm: 11.07s 553741 ns/op 12194 B/op 242 allocs/op
xorm: 11.93s 596566 ns/op 9391 B/op 263 allocs/op
10000 times - MultiRead limit 100
raw: 7.92s 792363 ns/op 34747 B/op 1323 allocs/op
modl: 9.13s 912642 ns/op 49902 B/op 1724 allocs/op
gorp: 9.35s 934646 ns/op 63722 B/op 1912 allocs/op
orm: 10.35s 1035154 ns/op 85049 B/op 4286 allocs/op
qbs: 11.41s 1141194 ns/op 165860 B/op 6429 allocs/op
upper.io: 13.05s 1304687 ns/op 83800 B/op 2055 allocs/op
hood: 14.64s 1463870 ns/op 136245 B/op 6364 allocs/op
gorm: 18.76s 1876366 ns/op 254767 B/op 6228 allocs/op
xorm: 20.16s 2015870 ns/op 180055 B/op 8093 allocs/op
This version is only 14-16% slower than the bare-metal one. There's also another alternative, using iptables forwarding:
MemSQL 7.0.9 inside docker with iptables forwarding
docker run -i --init --name memsql1 -e LICENSE_KEY=$LICENSE_KEY memsql/cluster-in-a-box
sudo sysctl -w net.ipv4.conf.all.route_localnet=1
GUEST_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' memsql1)
sudo iptables -t nat -A OUTPUT -m addrtype --src-type LOCAL --dst-type LOCAL -p tcp --dport 3306 -j DNAT --to-destination $GUEST_IP
sudo iptables -t nat -A POSTROUTING -m addrtype --src-type LOCAL --dst-type UNICAST -j MASQUERADE
# replace -A with -D to delete the rules after use
10000 times - Insert
raw: 1.94s 193731 ns/op 592 B/op 15 allocs/op
2500 times - MultiInsert 100 row
raw: 4.53s 1813220 ns/op 140922 B/op 818 allocs/op
10000 times - Update
raw: 1.90s 190419 ns/op 656 B/op 17 allocs/op
20000 times - Read
raw: 4.45s 222618 ns/op 1472 B/op 40 allocs/op
10000 times - MultiRead limit 100
raw: 7.64s 764215 ns/op 34746 B/op 1323 allocs/op
This is 19-23.5% slower. Another alternative is using gost (or another proxy such as socat, ncat, goproxy, redir, etc.):
MemSQL 7.0.9 inside docker with gost proxy
docker run -i --init --name memsql1 -e LICENSE_KEY=$LICENSE_KEY memsql/cluster-in-a-box
gost -L tcp://:3306/$GUEST_IP:3306
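For intuition, a userspace proxy like gost or socat is essentially doing the following (a rough sketch; the hard-coded container IP is an assumption, use whatever docker inspect reports), and that extra userspace hop is presumably where the added latency comes from:

// Sketch only: a minimal TCP forwarder from localhost:3306 to the container.
package main

import (
    "io"
    "log"
    "net"
)

func main() {
    const guestIP = "172.17.0.2" // assumption: the IP reported by `docker inspect`
    ln, err := net.Listen("tcp", ":3306")
    if err != nil {
        log.Fatal(err)
    }
    for {
        client, err := ln.Accept()
        if err != nil {
            log.Fatal(err)
        }
        go func(c net.Conn) {
            defer c.Close()
            backend, err := net.Dial("tcp", guestIP+":3306")
            if err != nil {
                log.Println(err)
                return
            }
            defer backend.Close()
            go io.Copy(backend, c) // client -> MemSQL
            io.Copy(c, backend)    // MemSQL -> client
        }(client)
    }
}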
10000 times - Insert
raw: 2.32s 231583 ns/op 592 B/op 15 allocs/op
2500 times - MultiInsert 100 row
raw: 4.68s 1870160 ns/op 140921 B/op 818 allocs/op
10000 times - Update
raw: 2.25s 224826 ns/op 656 B/op 17 allocs/op
20000 times - Read
raw: 5.31s 265496 ns/op 1472 B/op 40 allocs/op
10000 times - MultiRead limit 100
raw: 8.10s 809700 ns/op 34747 B/op 1323 allocs/op
This is apparently 41-47% slower, about as bad as Docker's NAT. Now, what if we access the container's IP directly?
MemSQL 7.0.9 inside docker direct ip access
docker run -i --init --name memsql1 -e LICENSE_KEY=$LICENSE_KEY memsql/cluster-in-a-box
orm-benchmark -multi=5 -orm=raw -source "root:@tcp($GUEST_IP:3306)/orm_bench?charset=utf8"
10000 times - Insert
raw: 1.76s 176259 ns/op 592 B/op 15 allocs/op
2500 times - MultiInsert 100 row
raw: 4.19s 1675736 ns/op 140922 B/op 818 allocs/op
10000 times - Update
raw: 1.72s 171562 ns/op 656 B/op 17 allocs/op
20000 times - Read
raw: 4.06s 202945 ns/op 1472 B/op 40 allocs/op
10000 times - MultiRead limit 100
raw: 7.50s 750091 ns/op 34746 B/op 1323 allocs/op
This approach has only 8-12% overhead. If I have more free time, I'll benchmark volume-binding performance. Or maybe someone else wants to contribute by adding more ORMs? (see the TODO section in the GitHub repo)