
2021-08-06

Database Patterns of Microservices

When do you need microservices? When you have multiple business domains (not DNS domains) that are best split, managed, and deployed separately. If your business domain is small and the team is small, it's better to use a modular monolith instead, since microservices add a lot of operational complexity (especially if you are using Kubernetes).

These are the database patterns I got from Kindson's presentation.

  1. Private database per service
    this is the most common pattern: every domain/service must have its own database, which has some pros and cons:
    + developers won't be tempted to JOIN across domains, which could make the codebase hard to refactor if it someday needs to be split into a microservice/modular approach
    + easier for a new developer joining the team, since they don't need to know the whole ER diagram, just the small segment related to the service they manage
    - more complicated for analytics use cases because you can't do a JOIN, but this can be solved using a distributed SQL query engine like Trino
    + each database can scale and migrate independently (no downtime, especially when you are using a database that requires locking on migration, like MySQL)
    - accessing data from a different domain becomes a problem, which can be solved by:
       * API gateway (sync): a service must hit other services through the API gateway
       * event hub/pubsub (async/push): a service must subscribe to other services' events to retrieve the data, which introduces consistency-related problems
       * service mesh (sync): a service must hit other services through a sidecar
       * directly reading a replica (async/pull)
  2. Shared database 
    all or some microservices access the same database; this has some pros and cons:
    + simpler for the developers, since they can do JOINs and transactions
    - the worst kind of performance when the database is the bottleneck, especially on migration and scaling
    + no consistency problem
  3. SAGA (sequence of local transactions)
    this has pros and cons:
    + splits an atomic transaction into multiple local steps; when one step fails, it must reconcile/undo by running a compensating action
    - more complex than a normal database transaction
  4. API Composition (join inside the API service, the pattern used by Trino)
    this has pros and cons:
    + can do joins across services and data sources
    - must hit multiple services (slower than a normal join), especially when the row count is large
    - can be bad if the other service calls yet another service too (cascading N+1 queries), e.g. A hits B, B hits C; but this can be mitigated if B and C have batch APIs (usually using WHERE IN instead of a single-row API)
  5. CQRS (Command Query Responsibility Segregation)
    a pattern created because old databases were usually single-master, multiple-slave; it also has pros and cons:
    + simpler scaling: scale the write side or the read side independently
    - possible inconsistency if a transaction doesn't read from the master, which adds development complexity (deciding which queries must read the master and which can read a read-only replica)
  6. Domain Events 
    a service must publish events; pros and cons:
    + decoupling: no more service mesh/hitting other services, we just need to subscribe to the events
    - eventual consistency
    - must store and publish events that may never be consumed, though we can directly read that service's event database to overcome this; it can also be a benefit, since events help with auditing
  7. Event Sourcing
    this pattern stores a series of events and reconstructs the final state from them (with snapshots as an optimization); pros and cons:
    + can be used to reliably publish an event whenever state changes
    + good for auditing
    + theoretically easier to track when business logic changed (but we must build the full DFA/NFA state graph to reliably cover the edge cases)
    - difficult to query, since it's a series of events, unless you prioritize the snapshot
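The compensation idea behind SAGA (pattern 3 above) can be sketched with plain functions; the order/payment step names here are hypothetical, in-memory stand-ins, not from any framework:

```python
# Minimal SAGA sketch: run local transactions in order; when one fails,
# run the compensating actions of the already-completed steps in reverse.

log = []

def reserve_stock():
    log.append("stock reserved")

def release_stock():        # compensation for reserve_stock
    log.append("stock released")

def charge_card():
    raise RuntimeError("card declined")  # simulated failure

def refund_card():          # compensation for charge_card (never needed here)
    log.append("charge refunded")

steps = [
    (reserve_stock, release_stock),  # (local transaction, compensation)
    (charge_card, refund_card),
]

def run_saga(steps):
    done = []
    for action, compensation in steps:
        try:
            action()
            done.append(compensation)
        except Exception:
            for comp in reversed(done):  # undo completed steps in reverse order
                comp()
            return False
    return True

ok = run_saga(steps)
print(ok, log)  # False ['stock reserved', 'stock released']
```

In a real system each step is a local transaction in a different service's private database, and the "undo" is a compensating business action (refund, release), not a rollback.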

Engineering is about making decisions and prioritizing: which kind of simplicity needs to be prioritized, whether maintainability, raw performance, ease of scaling, or another metric; you'll need to define your own "best". And there's no silver bullet; each solution is best only for a specific use case.

But.. if you had to create an ultimate/general-purpose service, which patterns would you use?
Based on experience, I'd use:
  1. CQRS (1 writer, 1-N reader)
    the reader must cache the reads
    the writer must log every change (Domain Events)
    the writer can be sync or async (for slow computations, or those that depend on another service/SAGA), updating the snapshot each time
  2. Domain Events
    these event logs can be tailed (async/pull) by another service if it needs them
    each consumer must record its own bookmark (which events have already been tailed/consumed/ack'ed, and which haven't)
    EDIT 2021-08-16: this guy has the same idea, except that I believe there should be no circular dependency (e.g. his order-inventory system should be composed from order, inventory, and delivery)
  3. API Composition
    but we can still use JOINs inside the domain
    this is especially useful for statistics, where we must collect statistics from each service
    APIs should have 2 versions: a consistent version (read from the master) and an eventually consistent version (read from a read-only replica)
    APIs must have batch/paged versions in addition to the standard CRUD
    APIs must tell whether they depend on another service's APIs
  4. Private database is a must
    since the bottleneck is almost always the database, it's better to split databases by domain from the beginning (but no need to create a read-only replica until the read side becomes the bottleneck)
    if writes exceed 200K-600K rps, I prefer manual partitioning over sharding (unless the database I use supports sharding, automatic rebalancing, and super-easy node addition, with Tarantool-like performance)
    What if you need joins for analytics? You can use Trino/Presto/BigQuery/etc., or just delegate the statistics responsibility to each service, then collect/aggregate them in a statistics/collector service.
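The Domain Events tailing with a consumer-side bookmark (point 2 above) can be sketched like this; the in-memory lists are hypothetical stand-ins for the writer's event store and the consumer's offset storage:

```python
# Sketch of async/pull event tailing: the writer appends domain events
# to an ordered log; each consumer tracks its own bookmark (offset) of
# which events it has already consumed/ack'ed.

event_log = []  # writer side: append-only, ordered list of events

def publish(event):
    event_log.append(event)

class Consumer:
    def __init__(self):
        self.bookmark = 0  # index of the next event to consume

    def poll(self):
        # pull every event we haven't ack'ed yet
        new = event_log[self.bookmark:]
        self.bookmark += len(new)  # "ack" by advancing the bookmark
        return new

publish({"type": "OrderCreated", "id": 1})
publish({"type": "OrderPaid", "id": 1})

c = Consumer()
first = c.poll()   # gets both existing events
publish({"type": "OrderShipped", "id": 1})
second = c.poll()  # gets only the event published after the last poll
print(len(first), len(second))  # 2 1
```

Because each consumer owns its bookmark, a new service can join later and replay the log from offset 0, which is what makes this decoupled, at the cost of eventual consistency.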