The motivation of this article is to promote a less painful way of testing and structuring code, with fewer broken tests when changing the logic/implementation details (only changing the logic, not the input/output). This post recaps ~4 years of articles from other developers who hit the same pain points with the popular approach (mocks) and reached similar conclusions: Fake > Mock, Classical Test > Mock Test.
Mock Approach
Given a code like this:
type Obj struct {
*sql.DB // or Provider
}
func (o *Obj) DoMultipleQuery(in InputStruct) (out OutputStruct, err error) {
... = o.DoSomeQuery()
... = o.DoOtherQuery()
}
I’ve seen tests written with the mock technique like this:
func TestObjDoMultipleQuery(t *testing.T) {
testCases := []struct {
name string
mockFunc func(sqlmock.Sqlmock, *gomock.Controller)
in InputStruct
out OutputStruct
shouldErr bool
} {
{
name: `best case`,
mockFunc: func(db sqlmock.Sqlmock, c *gomock.Controller) {
db.ExpectExec(`UPDATE t1 SET bla = \?, foo = \?, yay = \? WHERE bar = \? LIMIT 1`).
WillReturnResult(sqlmock.NewResult(1,1))
db.ExpectQuery(`SELECT a, b, c, d, bar, bla, yay FROM t1 WHERE bar = \? AND state IN \(1,2\)`).
WithArgs(3).
WillReturnRows(sqlmock.NewRows([]string{"id", "channel_name", "display_name", "color", "description", "active", "updated_at"}).
AddRow("2", "bla2", "Bla2", "#0000", "bla bla", "1", "2021-05-18T15:04:05Z").
AddRow("3", "wkwk", "WkWk", "#0000", "wkwk", "1", "2021-05-18T15:04:05Z"))
...
},
in: InputStruct{...},
out: OutputStruct{...},
shouldErr: false,
},
{
... other cases
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T){
... // prepare mock object
o := Obj{mockProvider}
out, err := o.DoMultipleQuery(tc.in)
assert.Equal(t, tc.shouldErr, err != nil)
assert.Equal(t, tc.out, out)
})
}
}
This approach has pros and cons:
+ could check for typos (eg. adding one character to the original query would make this test detect the error)
+ could check whether some queries are properly called, or not called when expected to be called
+ unit tests are always faster than integration tests
- tests implementation details (easily breaks when the logic changes)
- cannot check whether the SQL statements are correct
- possible coupling between the data provider and the business logic
- duplicated work between the original query and its regex version: if we add a column, we must change both
For the last con, we can change it to something like this:
db.ExpectQuery(`SELECT.+FROM t1.+`).
WillReturnRows( ... )
This approach has pros and cons:
+ no duplicated work (since it's just a simplified regex of the full SQL statement)
+ can still check whether queries are properly called or not
+ unit tests are always faster than integration tests
- tests implementation details (easily breaks when the logic changes)
- cannot detect typos/whether the query no longer matches (eg. accidentally adding one character to the original query that causes an SQL error)
- cannot check the correctness of the SQL statements
- possible coupling between the data provider and the business logic
We could also create a helper function that converts the original query to its regex version:
func SqlToRegexSql(sql string) string {
	// regexp.QuoteMeta escapes all regex special characters: ( ) ? . etc.
	return regexp.QuoteMeta(sql)
}
db.ExpectQuery(SqlToRegexSql(ORIGINAL_QUERY)) ...
This approach has the same pros and cons as the previous one.
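As a sanity check, the standard library's regexp.QuoteMeta already does exactly this escaping. A minimal runnable sketch (the query string here is made up for illustration):

```go
package main

import (
	"fmt"
	"regexp"
)

// SqlToRegexSql escapes every regex metacharacter in a raw SQL string
// (?, parentheses, dots, ...) so the exact statement can be passed to
// sqlmock's ExpectQuery/ExpectExec, which treat their argument as a regex.
func SqlToRegexSql(sql string) string {
	return regexp.QuoteMeta(sql)
}

func main() {
	const origQuery = `SELECT a, b FROM t1 WHERE bar = ? AND state IN (1,2)`
	fmt.Println(SqlToRegexSql(origQuery))
	// prints: SELECT a, b FROM t1 WHERE bar = \? AND state IN \(1,2\)
}
```

This way the real query constant is the single source of truth, and the regex version can never drift out of sync with it.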
Fake Approach
Fake testing uses the classical approach: instead of checking implementation details (expected calls to a dependency), we use a compatible implementation as the dependency (eg. a slice/map of structs standing in for a database table/DataProvider).
Given a code like this:
type Obj struct {
FooDataProvider // interface{UpdateFoo,GetFoo,...}
}
func (o *Obj) DoBusinessLogic(in *Input) (out *Output, err error) {
... = o.UpdateFoo(in.bla)
... = o.GetFoo(in.bla)
...
}
It’s better to make a fake data provider like this:
type FakeFooDataProvider struct {
Rows map[int]FooRow // or a slice
}
func (f *FakeFooDataProvider) UpdateFoo(a string) (...) {
	/* update Rows */
}
func (f *FakeFooDataProvider) GetFoo(a string) (...) {
	/* get one row from Rows */
}
... // insert, delete, count, get batched/paged
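Filled in end to end, a self-contained fake could look like the sketch below. The FooRow fields, the id-based method signatures, and the ErrNotFound sentinel are my own assumptions, since the original elides them:

```go
package main

import (
	"errors"
	"fmt"
)

// FooRow is an assumed row shape; the article elides the actual fields.
type FooRow struct {
	ID  int
	Bla string
}

// ErrNotFound is an assumed sentinel error for missing rows.
var ErrNotFound = errors.New(`foo: not found`)

// FakeFooDataProvider simulates a database table with an in-memory map.
type FakeFooDataProvider struct {
	Rows   map[int]FooRow
	LastID int
}

// InsertFoo adds a row and returns its auto-incremented id.
func (f *FakeFooDataProvider) InsertFoo(bla string) int {
	if f.Rows == nil {
		f.Rows = map[int]FooRow{}
	}
	f.LastID++
	f.Rows[f.LastID] = FooRow{ID: f.LastID, Bla: bla}
	return f.LastID
}

// UpdateFoo overwrites an existing row, or reports ErrNotFound.
func (f *FakeFooDataProvider) UpdateFoo(id int, bla string) error {
	row, ok := f.Rows[id]
	if !ok {
		return ErrNotFound
	}
	row.Bla = bla
	f.Rows[id] = row
	return nil
}

// GetFoo fetches one row, or reports ErrNotFound.
func (f *FakeFooDataProvider) GetFoo(id int) (FooRow, error) {
	row, ok := f.Rows[id]
	if !ok {
		return FooRow{}, ErrNotFound
	}
	return row, nil
}

func main() {
	f := &FakeFooDataProvider{}
	id := f.InsertFoo(`hello`)
	_ = f.UpdateFoo(id, `world`)
	row, _ := f.GetFoo(id)
	fmt.Println(row.Bla) // prints: world
}
```

Because the fake keeps real state, the business-logic test only ever asserts on input/output, never on which calls happened in what order.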
So in the test, we can do something like this:
func TestObjDoBusinessLogic(t *testing.T) {
o := Obj{&FakeFooDataProvider{}}
testCases := []struct{
name string
in Input
out Output
shouldErr bool
} {
{
name: `best case`,
in: Input{...},
out: Output{...},
shouldErr: false,
},
{
...
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T){
out, err := o.DoBusinessLogic(tc.in)
assert.Equal(t, tc.shouldErr, err != nil)
assert.Equal(t, tc.out, out)
})
}
}
This approach has pros and cons:
+ testing behavior (this input should give this output) instead of implementation details (does not easily break, and no need to modify the test, when the algorithm/logic changes)
+ unit tests are always faster than integration tests
- cannot check whether the queries are called, or not called when expected to be called
- double work in Go (since there are no generics/templates yet; Go 1.18 lands in Feb 2022): we must create a minimal fake implementation (map/slice) that simulates basic database table logic, or, if the data provider is not separated per table (repository/entity pattern), a join logic too. A better approach in this case is to always create Insert, Update, Delete, GetOne, and GetBatch instead of joining
+ no coupling between the queries and the business logic
- cannot check whether the queries in the data provider are correct (which should not be this unit's problem anyway; it is the DataProvider's own integration/unit test's problem)
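The "double work" drawback shrinks once generics arrive: from Go 1.18 on, one generic in-memory table can back every per-table fake. A hedged sketch (the type and method names here are my own, not from the article):

```go
package main

import "fmt"

// FakeTable is a generic in-memory table keyed by a comparable id.
// With Go 1.18+ generics, one implementation can back every per-table
// fake, so the "double work per table" drawback largely disappears.
type FakeTable[K comparable, V any] struct {
	rows map[K]V
}

// NewFakeTable creates an empty table.
func NewFakeTable[K comparable, V any]() *FakeTable[K, V] {
	return &FakeTable[K, V]{rows: map[K]V{}}
}

// Upsert inserts or replaces the row with the given id.
func (t *FakeTable[K, V]) Upsert(id K, v V) { t.rows[id] = v }

// Get fetches one row, reporting whether it exists.
func (t *FakeTable[K, V]) Get(id K) (V, bool) {
	v, ok := t.rows[id]
	return v, ok
}

// Delete removes the row with the given id, if any.
func (t *FakeTable[K, V]) Delete(id K) { delete(t.rows, id) }

// Count returns the number of stored rows.
func (t *FakeTable[K, V]) Count() int { return len(t.rows) }

func main() {
	type Foo struct{ Name string }
	t := NewFakeTable[int, Foo]()
	t.Upsert(1, Foo{Name: `bla`})
	t.Upsert(2, Foo{Name: `wkwk`})
	v, _ := t.Get(2)
	fmt.Println(t.Count(), v.Name) // prints: 2 wkwk
}
```

Each entity-specific provider can then wrap one FakeTable instead of reimplementing the map logic from scratch.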
Classical Approach for DataProvider
It’s better to test the queries using the classical (black-box) approach with an integration test instead of mocks (white box), since mock and fake testing can only test the correctness of the business logic, not the logic of the data provider, which mostly depends on a 2nd party (the database). Fake testing is also considered a classical approach, since it tests input/output, not implementation details.
Using dockertest when testing locally and a gitlab-ci service when testing in the pipeline, it can be something like this:
var testDbConn *sql.DB
func TestMain(m *testing.M) { // called once, before any test in this package
	if env == `` || env == `development` {
		// spawn dockertest, pass the connection to the tests
		prepareDb(func(db *sql.DB) int {
			testDbConn = db
			if db == nil {
				return 0
			}
			return m.Run()
		})
	} else {
		// connect to the gitlab-ci service
		var err error
		testDbConn, err = ...
		// log the error, then run the tests
		os.Exit(m.Run())
	}
}
func TestDataProviderLogic(t *testing.T) {
if testDbConn == nil {
if env == `` || env == `development` || env == `test` {
t.Fail()
}
return
}
f := FooDataProvider{testDbConn}
f.InitTables()
f.MigrateTables() // if testing migration
// test f.UpdateFoo, f.GetFoo, ...
}
Where the prepareDb function can be something like this (taken from dockertest example):
func prepareDb(onReady func(db *sql.DB) int) {
const dockerRepo = `yandex/clickhouse-server`
const dockerVer = `latest`
const chPort = `9000/tcp`
const dbDriver = "clickhouse"
const dbConnStr = "tcp://127.0.0.1:%s?debug=true"
var err error
if globalPool == nil {
globalPool, err = dockertest.NewPool("")
if err != nil {
log.Printf("Could not connect to docker: %s\n", err)
return
}
}
resource, err := globalPool.Run(dockerRepo, dockerVer, []string{})
if err != nil {
log.Printf("Could not start resource: %s\n", err)
return
}
var db *sql.DB
if err := globalPool.Retry(func() error {
var err error
db, err = sql.Open(dbDriver,
fmt.Sprintf(dbConnStr, resource.GetPort(chPort)))
if err != nil {
return err
}
return db.Ping()
}); err != nil {
log.Printf("Could not connect to docker: %s\n", err)
return
}
code := onReady(db)
if err := globalPool.Purge(resource); err != nil {
log.Fatalf("Could not purge resource: %s", err)
}
os.Exit(code)
}
In the pipeline, the .gitlab-ci.yml file can be something like this for PostgreSQL (use a tmpfs/in-memory version for the database data directory to make it faster):
test:
stage: test
image: golang:1.16.4
dependencies: []
services:
- postgres:13-alpine # TODO: create a tmpfs version
tags:
- cicd
variables:
ENV: test
POSTGRES_DB: postgres
POSTGRES_HOST: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_PORT: "5432"
POSTGRES_USER: postgres
script:
- source env.sample
- go test
The Dockerfile for a tmpfs database, if using MySQL, can be something like this:
FROM circleci/mysql:5.5
RUN echo '\n\
[mysqld]\n\
datadir = /dev/inmemory/mysql\n\
' >> /etc/mysql/my.cnf
Or for MongoDB:
FROM circleci/mongo:3.6.9
RUN sed -i '/exec "$@"/i mkdir \/dev\/inmemory\/mongo' /usr/local/bin/docker-entrypoint.sh
CMD ["mongod", "--nojournal", "--noprealloc", "--smallfiles", "--dbpath=/dev/inmemory/mongo"]
This classical integration test approach has pros and cons:
+ high confidence that your SQL statements are correct; can detect typos (wrong column, wrong table, etc)
+ isolated test: exercises only the data provider layer, not the business logic, and can also test schema migrations
- not a good approach for databases with eventual consistency (eg. Clickhouse)
- since this is an integration test, it will be slower than a mock/fake unit test (1-3s+ total overhead for spawning the docker container)
Conclusion
use mocks for databases with eventual consistency
prefer fakes over mocks for business logic correctness, because testing behavior (this input should give this output) is better for maintainability than testing implementation details
prefer classical testing over mock testing for checking data provider logic correctness
References
(aka confirmation bias :3)
https://martinfowler.com/articles/mocksArentStubs.html
https://stackoverflow.com/questions/1595166/why-is-it-so-bad-to-mock-classes
https://medium.com/javascript-scene/mocking-is-a-code-smell-944a70c90a6a
https://chemaclass.medium.com/to-mock-or-not-to-mock-af995072b22e
https://accu.org/journals/overload/23/127/balaam_2108/
https://news.ycombinator.com/item?id=7809402
https://philippe.bourgau.net/careless-mocking-considered-harmful/
https://debugged.it/blog/mockito-is-bad-for-your-code/
https://engineering.talkdesk.com/double-trouble-why-we-decided-against-mocking-498c915bbe1c
https://blog.thecodewhisperer.com/permalink/you-dont-hate-mocks-you-hate-side-effects
https://agilewarrior.wordpress.com/2015/04/18/classical-vs-mockist-testing/
https://www.slideshare.net/davidvoelkel/mockist-vs-classicists-tdd-57218553
https://www.thoughtworks.com/insights/blog/mockists-are-dead-long-live-classicists
https://stackoverflow.com/questions/184666/should-i-practice-mockist-or-classical-tdd
https://bencane.com/2020/06/15/dont-mock-a-db-use-docker-compose/
https://swizec.com/blog/what-i-learned-from-software-engineering-at-google/#stubs-and-mocks-make-bad-tests
https://www.freecodecamp.org/news/end-to-end-api-testing-with-docker/
https://medium.com/@june.pravin/mocking-is-not-practical-use-fakes-e30cc6eaaf4e
https://www.c-sharpcorner.com/article/stub-vs-fake-vs-spy-vs-mock/