2022-04-05

Start/restart Golang or any other binary program automatically on boot/crash

There are some alternatives to make a program start on boot on Linux; the usual ways are:

1. systemd: it can ensure that dependencies are started before your service, and it can also limit your CPU/RAM usage. Generate a template using this website or use kardianos/service (see the sketch after this list)

2. PM2 (requires NodeJS), or PMG

3. docker-compose (requires Docker, but you can skip the build part and just COPY the binary directly in the Dockerfile; the binary itself can be deployed using rsync); just set the restart property in docker-compose and it will restart when the computer boots. The bad part: you cannot limit CPU/RAM unless you use Docker Swarm, but you can use docker directly with resource limits and the --restart flag.

4. lxc/lxd or multipass or another VM/lightweight VM (but you still need systemd inside it XD at least it won't ruin your host); you can rsync directly to the container to redeploy, for example using overseer or tableflip, but you must add a reverse proxy or NAT or proper routing/IP forwarding if you want it to be accessible from outside

5. supervisord (Python) or ochinchina/supervisord (Golang), tutorial here

6. create one daemon manager with systemd/docker-compose, then spawn the other services using goproc or pioz/god

7. monit, which can monitor and ensure a program is started/not dead

8. nomad (actually this one is a deployment tool, but it can also manage workloads)

9. kubernetes XD overkill

10. immortal.run, a supervisor; this one actually uses systemd

11. other containerization/VM workload orchestrators/managers that are usually already provided by the hoster/PaaS provider (Amazon ECS/Beanstalk/Fargate, Google AppEngine, Heroku, Jelastic, etc)
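
For option 1, if you prefer the Go binary to install/manage itself as a service instead of writing the unit by hand, kardianos/service looks roughly like this (a minimal sketch; the service name "xxx" and the empty run loop are placeholders, not a full implementation):

package main

import (
	"log"

	"github.com/kardianos/service"
)

type program struct{}

func (p *program) Start(s service.Service) error {
	go p.run() // Start must not block, so do the real work in a goroutine
	return nil
}

func (p *program) run() {
	// the actual long-running work of the binary goes here (placeholder)
}

func (p *program) Stop(s service.Service) error {
	return nil // cleanup before the service manager stops the process
}

func main() {
	cfg := &service.Config{
		Name:        "xxx", // placeholder service name
		DisplayName: "xxx",
		Description: "runs the xxx binary on boot and restarts it on crash",
	}
	s, err := service.New(&program{}, cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := s.Run(); err != nil { // blocks until the service is stopped
		log.Fatal(err)
	}
}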


This is the systemd unit that I usually use (you need to create a user named "web" and install "unbuffer"):

$ cat /usr/lib/systemd/system/xxx.service
[Unit]
Description=xxx
After=network-online.target postgresql.service
Wants=network-online.target

[Service]
Type=simple
Restart=on-failure
User=web
Group=users
WorkingDirectory=/home/web/xxx
ExecStart=/home/web/xxx/run_production.sh
ExecStop=/usr/bin/killall xxx
LimitNOFILE=2097152
LimitNPROC=65536
ProtectSystem=full
NoNewPrivileges=true

[Install]
WantedBy=multi-user.target

$ cat /home/web/xxx/run_production.sh
#!/usr/bin/env bash

# create the log directory, then log both to stdout and a timestamped file
mkdir -p "$(pwd)/logs"
ofile="$(pwd)/logs/access_$(date +%F_%H%M%S).log"
echo "Logging into: $ofile"
unbuffer time ./xxx | tee "$ofile"



2022-04-04

Automatic Load Balancer Registration/Deregistration with NATS or FabioLB

Today we're gonna test 2 alternatives for automatic load balancing. Previously I always used Caddy or NginX with manual reverse proxy configuration (because most of my projects are single-server; the bottleneck is always the database, not the backend/compute part), but today we're gonna test 2 possible high-availability load balancing strategies (without Kubernetes of course): the first uses NATS, the second uses a standard load balancer, in this case FabioLB.

To use NATS, we're gonna use this strategy:
The first thing we deploy is our custom reverse proxy, which should be able to convert any query string, form body with any kind of content-type, and any header if needed; we can use any serialization format (JSON, msgpack, protobuf, etc), but in this case we're just gonna use a normal string. We call this service "apiproxy". The apiproxy sends the serialized payload (from a map/object) to NATS using the request-reply mechanism. The other service is our backend "worker"/handler, which could be anything, but in this case it's the real handler that contains our business logic, so it needs to subscribe and return a reply to the apiproxy, which deserializes it back to the client in whatever serialization format and protocol is needed (gRPC/WebSocket/HTTP-REST/JSONP/etc). Here are the benchmark results of plain Fiber without any proxy vs apiproxy-NATS-worker with a single NATS vs multiple NATS instances:
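
Before the numbers, here's a rough sketch of how the two sides talk over NATS request-reply (a minimal sketch using the nats.go client; the subject name "api.req", the payload, and the timeout are made up for illustration, this is not the exact benchmark code):

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	// worker side: subscribe on a subject, run the business logic, send a reply
	// (the first benchmark runs used plain Subscribe; see the QueueSubscribe update below)
	if _, err := nc.Subscribe("api.req", func(m *nats.Msg) {
		// real handler/business logic goes here
		m.Respond([]byte(`{"ok":true}`))
	}); err != nil {
		log.Fatal(err)
	}

	// apiproxy side: serialize the incoming HTTP request (query string, form body,
	// headers) into a payload and do a request-reply round trip through NATS
	reply, err := nc.Request("api.req", []byte(`{"path":"/","body":""}`), 2*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(reply.Data)) // deserialized back and written to the client
}

In the real apiproxy the payload is built from the incoming Fiber request, and the reply is written back to the client in whatever protocol it used.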

# no proxy
go run main.go apiserver
hey -n 1000000 -c 255 http://127.0.0.1:3000
  Average:      0.0011 secs
  Requests/sec: 232449.1716

# single nats
go run main.go apiproxy
go run main.go # worker
hey -n 1000000 -c 255 http://127.0.0.1:3000
  Average:      0.0025 secs
  Requests/sec: 100461.5866

# 2 worker
  Average:      0.0033 secs
  Requests/sec: 76130.4079

# 4 worker
  Average:      0.0051 secs
  Requests/sec: 50140.6288

# limit the apiserver CPU
GOMAXPROCS=2 go run main.go apiserver
  Average:      0.0014 secs
  Requests/sec: 184234.0106

# apiproxy 2 core
# 1 worker 2 core each
  Average:      0.0025 secs
  Requests/sec: 103007.4516

# 2 worker 2 core each
  Average:      0.0029 secs
  Requests/sec: 87522.6801

# 4 worker 2 core each
  Average:      0.0037 secs
  Requests/sec: 67714.5851

# seems that the bottleneck is spawning the producer's NATS connection
# spawning 8 connections using round-robin (see the sketch after these results)

# 1 worker 2 core each
  Average:      0.0021 secs
  Requests/sec: 121883.4324

# 4 worker 2 core each
  Average:      0.0030 secs
  Requests/sec: 84289.4330

# seems also the apiproxy is hogging all the CPU cores
# limiting to 8 core for apiproxy
# now synchronous handler changed into async/callback version
GOMAXPROCS=8 go run main.go apiproxy

# 1 worker 2 core each
  Average:      0.0017 secs
  Requests/sec: 148298.8623

# 2 worker 2 core each
  Average:      0.0017 secs
  Requests/sec: 143958.4056

# 4 worker 2 core each
  Average:      0.0029 secs
  Requests/sec: 88447.5352

# limiting the NATS to 4 core using go run on the source
# 1 worker 2 core each
  Average:      0.0013 secs
  Requests/sec: 194787.6327

# 2 worker 2 core each
  Average:      0.0014 secs
  Requests/sec: 176702.0119

# 4 worker 2 core each
  Average:      0.0022 secs
  Requests/sec: 116926.5218

# same nats core count, increase worker core count
# 1 worker 4 core each
  Average:      0.0013 secs
  Requests/sec: 196075.4366

# 2 worker 4 core each
  Average:      0.0014 secs
  Requests/sec: 174912.7629

# 4 worker 4 core each
  Average:      0.0021 secs
  Requests/sec: 121911.4473 --> see update below
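
The "8 connections using round-robin" trick on the apiproxy side is roughly this (a minimal sketch; the pool size, subject, and timeout are illustrative, not the exact benchmark code):

package main

import (
	"log"
	"sync/atomic"
	"time"

	"github.com/nats-io/nats.go"
)

// natsPool round-robins requests across several NATS connections,
// so a single producer connection doesn't become the bottleneck.
type natsPool struct {
	conns []*nats.Conn
	next  uint64
}

func newNatsPool(url string, size int) (*natsPool, error) {
	p := &natsPool{}
	for i := 0; i < size; i++ {
		nc, err := nats.Connect(url)
		if err != nil {
			return nil, err
		}
		p.conns = append(p.conns, nc)
	}
	return p, nil
}

// Request picks the next connection in round-robin order.
func (p *natsPool) Request(subj string, payload []byte) (*nats.Msg, error) {
	idx := atomic.AddUint64(&p.next, 1) % uint64(len(p.conns))
	return p.conns[idx].Request(subj, payload, 2*time.Second)
}

func main() {
	pool, err := newNatsPool(nats.DefaultURL, 8) // 8 producer connections as in the test above
	if err != nil {
		log.Fatal(err)
	}
	if _, err := pool.Request("api.req", []byte(`{"path":"/"}`)); err != nil {
		log.Fatal(err)
	}
}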


It would be better if this were tested on multiple servers, but it seems the bottleneck is on the NATS connection when there are many subscribers; they could not scale linearly (16-66% overhead for a single API proxy) -- IT'S A BUG ON MY SIDE, SEE UPDATE BELOW. Next we're gonna try FabioLB with Consul, where Consul is used as the service registry (it's a synchronous-consistent "database" like Zookeeper or Etcd). To install all of it, use these commands:

# setup:
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt install consul
go install github.com/fabiolb/fabio@latest

# start:
sudo consul agent -dev --data-dir=/tmp/consul
fabio
go run main.go -addr 172.17.0.1:5000 -name svc-a -prefix /foo -consul 127.0.0.1:8500

# benchmark:
# without fabio
  Average:      0.0013 secs
  Requests/sec: 197047.9124

# with fabio 1 backend
  Average:      0.0038 secs
  Requests/sec: 65764.9021

# with fabio 2 backend
go run main.go -addr 172.17.0.1:5001 -name svc-a -prefix /foo -consul 127.0.0.1:8500

# the bottleneck might be the cores, so we limit the cores to 2 for each worker
# with fabio 1 backend 2 core each
  Average:      0.0045 secs
  Requests/sec: 56339.5518

# with fabio 2 backend 2 core each
  Average:      0.0042 secs
  Requests/sec: 60296.9714

# what if we limit also the fabio
GOMAXPROCS=8 fabio

# with fabio 8 core, 1 backend 2 core each
  Average:      0.0042 secs
  Requests/sec: 59969.5206

# with fabio 8 core, 2 backend 2 core each
  Average:      0.0041 secs
  Requests/sec: 62169.2256

# with fabio 8 core, 4 backend 2 core each
  Average:      0.0039 secs
  Requests/sec: 64703.8253

All CPU cores were utilized at around 50% on a 32-core, 128GB RAM server; I can't find which part is the bottleneck for now, but for sure both strategies have around 16% vs 67% overhead compared to no proxy (which makes sense, since adding more layers adds more transport and more things to copy/transfer and transform/serialize-deserialize). The code used in this benchmark is here, in the 2022mid directory, and the code for the fabio-consul registration is copied from eBay's github repository.
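
The fabio-consul registration boils down to registering the service in Consul with a urlprefix- tag, roughly like this (a minimal sketch using hashicorp/consul/api, mirroring the -addr/-name/-prefix flags above; the /health check endpoint is an assumption, Fabio only routes to instances whose check is passing):

package main

import (
	"log"

	consul "github.com/hashicorp/consul/api"
)

func main() {
	client, err := consul.NewClient(consul.DefaultConfig()) // talks to 127.0.0.1:8500 by default
	if err != nil {
		log.Fatal(err)
	}

	// Fabio builds its routing table from services tagged with "urlprefix-",
	// so this exposes the backend under the /foo prefix on Fabio.
	reg := &consul.AgentServiceRegistration{
		ID:      "svc-a-1",
		Name:    "svc-a",
		Address: "172.17.0.1",
		Port:    5000,
		Tags:    []string{"urlprefix-/foo"},
		Check: &consul.AgentServiceCheck{
			HTTP:     "http://172.17.0.1:5000/health", // assumed health endpoint
			Interval: "10s",
			Timeout:  "2s",
		},
	}
	if err := client.Agent().ServiceRegister(reg); err != nil {
		log.Fatal(err)
	}
}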

Why would we even need to do this? If we're using the API gateway pattern (one of the patterns used at my past company, but with Kubernetes on the worker part), we can deploy independently and communicate between services through the gateway (proxy) without knowing the IP address or domain name of the service itself; as long as the route and payload are proper, the request can be handled wherever the service is deployed. What if you want to do a canary or blue-green deployment? You can just register a handler in NATS or Consul with a different route name (especially for communication between services, not public-to-service), and wait for all traffic to be moved there before killing the previous deployment.

So what should you choose? Both strategies require 3 moving parts (apiproxy-NATS-worker, fabio-consul-worker), but the NATS strategy is simpler to develop and can give better performance (especially if you make the apiproxy as flexible as possible). It does need better serialization, since serialization was not measured in this benchmark; if you need better serialization performance you must use codegen, which may require you to deploy twice (once for the apiproxy, once for the worker, unless you split the raw response meta with jsonparser or use a map only in the apiproxy). The FabioLB strategy has more features, and you can also use Consul for service discovery (contacting other services directly by name without having to go through FabioLB). The NATS strategy has some benefits in terms of security: the NATS cluster can be inside the DMZ, and workers can be on different subnets without the ability to connect to each other and it would still work, whereas if you use Consul to connect directly to another service, they need a route or connection to reach each other. The bad part about NATS is that you should not use it for file uploads, or it would hog a lot of resources; uploads should be handled by the apiproxy directly, and only a reference to the uploaded file should be forwarded as payload to NATS. You can check NATS traffic statistics using nats-top.

What's next? Maybe we can try Traefik, which combines service discovery and load balancing in one binary; it can also use Consul as its registry.

UPDATE: by changing the code from Subscribe (broadcast/fan-out) to QueueSubscribe (load balance), it has similar performance with 1/2/4 subscribers, so we can use NATS for high availability/fault tolerance in the API gateway pattern at the cost of ~16% overhead.
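
The change is essentially one line on the worker side (a sketch; handler stands for the existing nats.MsgHandler and the queue group name is arbitrary):

// before: fan-out, every worker receives (and replies to) every request
nc.Subscribe("api.req", handler)

// after: workers in the same queue group share the subject, each request goes to only one worker
nc.QueueSubscribe("api.req", "workers", handler)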

TL;DR

no LB: 232K rps
-> LB with NATS request-reply: 196K rps (16% overhead)
no LB: 197K rps
-> LB with Fabio+Consul: 65K rps (67% overhead)

 



2022-04-01

Georeplicable Architecture Tech Stacks

Since I got a use case for geo-replication (multiple datacenters around the world), I need to create a metrics, billing, and log aggregator that survives even when multiple datacenters are down. So we need to create an architecture where each cluster of databases in a datacenter can give correct answers on its own; the layers are:

1. Cache Layer

2. Source of Immediate Truth (OLTP) Layer

3. Source of Event Truth Layer (either CDC or subscribable events or both)

4. Metrics and Analytics (OLAP) Layer

For a single datacenter I might choose Tarantool or Aerospike (too bad that if you want to store more than your RAM you must use the paid version) or Redis (master-slave) for the cache layer, especially if the DNS is smart enough to route the client's request to the nearest datacenter.

Tarantool can also be the source of immediate truth, but the problem is that it has to use master-slave replication to be fault tolerant, and it seems manual intervention might be needed to promote a slave node to master and reroute the other slaves to recognize the new master.

So the other choice is either TiDB (for consistent use cases, lower write rate, more complex query use cases) or ScyllaDB (for partition-tolerant use cases, higher write rate). Both are good in terms of availability. TiDB's TiFlash is also good for analytics use cases.

For the source of event truth, we can use RedPanda for pubsub/MQ use cases, and Debezium (which requires Kafka/RedPanda) for change data capture from ScyllaDB, or TiCDC for TiDB, to stream to RedPanda.

Lastly, for analytics, we can use ClickHouse, which can also store all structured logs so they can be easily queried, or Loki. For metrics we might aggregate from RedPanda using MaterializeIO (too bad the clustered version is paid).

So.. what combinations are possible?

1. Tarantool (manual counters for metrics) + ClickHouse (manually publish logs and analytics); this one is good only for a single location/datacenter, unless all clients can hit the proper server location (like game servers, or smart CDNs)

2. same as #1 but with RedPanda if you have multiple services; all logs and events are published thru RedPanda

3. Aerospike/Tarantool/Redis (manual counter for metrics) + TiDB + TiFlash (analytics) + Loki for logs

4. Aerospike/Tarantool/Redis (manual counter for metrics) + TiDB + TiCDC + ClickHouse (for logs and analytics)

5. Aerospike/Tarantool/Redis (cache only) + TiDB + TiFlash (analytics) + TiCDC + MaterializeIO (for metrics) + Loki (logs)

6. Aerospike/Tarantool/Redis (cache only) + TiDB + TiCDC + Clickhouse (analytics and logs) + MaterializeIO (for metrics)

7. Aerospike/Tarantool/Redis (cache only) + ScyllaDB + Debezium + RedPanda + Clickhouse (analytics and logs) + MaterializeIO (for metrics) 

For #5-#7 you can remove the Aerospike/Tarantool/Redis part if there's no need for a cache (consistency matters, and you have a very large cluster that can handle the peaks).

Wait, why don't you include the full-text search use case? '__') OK, we can use Debezium to publish to ElasticSearch (might be overkill), or manually publish to TypeSense or MeiliSearch.

That's only the data part of the cluster; what about the computation and presentation layers (backend/frontend)?

For the backend of course I will use Golang or C#; for the frontend, either Svelte (web only) or Flutter (mobile and web), or Unity3D (games).

For object storage (locally I would use MinIO) and CDN (CloudFlare for example), you can see my previous post.

So why choose those?

1. Tarantool, one of the fastest in-memory databases, around 200K wps/qps, but with the cons I already mentioned above; the performance is similar to Redis, but this one supports SQL.

2. Aerospike, also one of the fastest in-memory databases, also around 200K wps/qps last time I checked; it can also do master-master replication (if I'm not mistaken, limited to 4 nodes in the free version), and you can set the replication factor, but see the other cons I mentioned above.

3. TiDB, a NewSQL database with automatic rebalancing (one node dies and it still works fast); it can do around 30-50K rps on a single node the last time I benchmarked it, and their benchmarks mostly show 30K-40K rps for writes, 120K rps for mixed read-write, and 330K rps for read-only multi-node benchmarks. Need more space? Add more TiKV instances. Need more compute? Add more TiDB instances. Need faster analytics queries? Add a TiFlash instance. But.. the bad part is the many moving parts: you have to deploy TiDB (query engine), TiKV (storage), and PD (placement driver), plus TiCDC and TiFlash if you need those too; also I'm not sure how it would perform in multi-DC use cases. What's the managed alternative? AlloyDB-PostgreSQL (GCP) or TiDB-Cloud (AWS).

4. ScyllaDB, a faster version of Cassandra; most benchmarks show 120K-400K inserts and queries per second. It's one of the best databases for multi-DC: each keyspace can be configured for which datacenters it replicates to and with what replication factor, consistency is controlled client-side, and it has supported materialized views for 2 years now, so we don't have to create and maintain multiple tables manually for each query pattern. What's the managed alternative? ScyllaDB-Cloud.

5. RedPanda, a faster version of Kafka; last time I checked, one instance can receive 800K events per second and publish 4 million events per second.

6. ClickHouse, one of the best analytics databases; it can do 1M rows per second of batched ingestion on a single node, and it can run complex queries really fast, under a second (depending on how big your data is, but the kind of query that would take minutes on a normal RDBMS); the cons is that one node can only handle around 100 concurrent queries.

7. MaterializeIO, which is like ksqlDB but written in Rust; I haven't checked the performance, but they claim it performs as fast as Redis.

What are the other alternatives? YugabyteDB looks OK, especially the YCQL part that works like Cassandra/ScyllaDB; Yugabyte seems to combine Redis, Postgres, and Cassandra in one deployment, but I like TiDB more because last time I checked Yugabyte, I needed to configure something to make it writable when one node died.

Proxy and load balancer? Caddy, Traefik, FabioLB (with Consul), or NATS (e.g. 3 load balancers/custom API gateways deployed in front that serialize requests into NATS inside the DMZ; the worker/handler receives them and returns a response that is deserialized back by the load balancer/API gateway. That way the load balancer doesn't need to know exactly how many workers/handlers there are, services can also communicate synchronously thru NATS without knowing each other's IP addresses, and the worker/handler part can be scaled independently).

Deployment? Nomad, or just rsync it to Jelastic, DockerCompose for dependencies (don't forget to bind the volumes or your data will be gone), or Kubernetes.

Tracker? maybe Pirsch

Why is there no __INSERT_DB_TYPE_HERE__? Graph database? Because I rarely do recursive/graph queries, and especially because I don't know which one is the best: Neo4J? DGraph? NebulaGraph? TigerGraph? AnzoGraph? TerminusDB? Age? JanusGraph? HugeGraph?

Have some other cool/high-performance tech stack suggestions? Chat with me at http://t.me/kokizzu (give a proper intro tho, or I'll think you are a crypto scam spammer XD)