
2021-11-17

Alternative Strategy for Dependency Injection (lambda-returning vs function-pointer)

There's a common strategy for injecting a dependency (one function or a set of functions) using an interface, something like this:

type Foo interface{ Bla() string }
type RealAyaya struct{}
func (a *RealAyaya) Bla() string { return `real` }
type MockAyaya struct{} // generated from gomock or others
func (a *MockAyaya) Bla() string { return `mock` }
// real usage:
deps := RealAyaya{}
deps.Bla()
// test usage:
deps := MockAyaya{}
deps.Bla()

and there's another one (dependencies as a parameter of a function that returns a lambda):

type Bla func() string
type DepsIface interface { ... }
func NewBla(deps DepsIface) Bla {
  return func() string {
    // do something with deps
  }
}
// real usage:
bla := NewBla(realDeps)
res := bla()
// test usage:
bla := NewBla(mockedOrFakeDeps)
res := bla()

and there's another way, combining both fake and real implementations, or alternatively using a proxy/cache + codegen if it's a 3rd party dependency.
and there's yet another way (pluggable at the per-function level):

type Bla func() string
type BlaCaller struct {
  BlaFunc Bla
}
// real usage:
bla := BlaCaller{ BlaFunc: deps.SomeMethod }
res := bla.BlaFunc()
// test usage:
bla := BlaCaller{ BlaFunc: func() string { return `fake` } }
res := bla.BlaFunc()

Analysis


The first one is the most popular way. The 2nd one is a style I saw recently (it's also used in an OpenAPI/Swagger codegen library, I forgot which one); the bad part is that we have to sanitize the stack trace manually, because it would show something like NewBla.func1 in the traces, and we still have to use generated mocks or implement everything when we test. The last style is what I thought of when writing some task where the specs were still unclear about whether I should:
1. query from local database
2. hit another service
3. or just a fake data (in the tests)
I can easily switch out any function without having to depend on a whole struct or interface, and it would still be easy to debug (set breakpoints) and jump around the methods, compared to the generated mock or interface version.
The bad part is probably that we have to inject every function one by one for each function that we want to call (which is nearly the same effort as the 2nd one). But if your function requires 10+ other functions to be injected, maybe it's time to refactor?

The real use case would be something like this:

type LoanExpirationFunc func(userId string) time.Time
type InProcess1 struct {
  UserId string
  // add more input here
  LoanExpirationFunc LoanExpirationFunc
  // add more injectable functions, eg. 3rd party hit or db read/save
}
type OutProcess1 struct{}
func Process1(in *InProcess1) (out *OutProcess1) {
  if ... // eg. validation
  x := in.LoanExpirationFunc(in.UserId)
  // ... // do something
}

func defaultLoanExpirationFunc(userId string) time.Time {
  // eg. query from database
}

type thirdParty struct {} // to put dependencies
func NewThirdParty() (*thirdParty) { return &thirdParty{} }
func (t *thirdParty) extLoanExpirationFunc(userId string) time.Time {
  // eg. hit another service
}

// init input:
func main() {
  http.HandleFunc("/case1", func(w, r ...) {
    in := InProcess1{LoanExpirationFunc: defaultLoanExpirationFunc}
    in.ParseFromRequest(r)
    out := Process1(in)  
    out.WriteToResponse(w)
  })
  tp := NewThirdParty()
  http.HandleFunc("/case2", func(w, r ...) {
    in := InProcess1{LoanExpirationFunc: tp.extLoanExpirationFunc}
    in.ParseFromRequest(r)
    out := Process1(in)  
    out.WriteToResponse(w)
  })
}

// on test:
func TestProcess1(t *testing.T) {
  t.Run(`test one year from now`, func(t *testing.T) {
    in := InProcess1{LoanExpirationFunc: func(string) time.Time { return time.Now().AddDate(1, 0, 0) }}
    out := Process1(&in)
    assert.Equal(t, out, ...)
  })
}

I haven't used this strategy extensively on a new project yet (since I only thought of it today and yesterday, while writing a horrid integration test), but I'll update this post when I find annoyances with this strategy.

UPDATE 2022: after using this strategy extensively for a while, it's better than the interface approach (especially when using IntelliJ); my tip: give the function-pointer field and the injected function the same name.
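
For illustration, a minimal sketch of that naming tip (RenewLoan, GetLoanExpiration, and loanRepo are hypothetical names made up for this example): the function type, the struct field, and the injected method all share one name, so search and jump-to-definition land on the same word everywhere:

type GetLoanExpiration func(userId string) time.Time

type RenewLoan struct {
  GetLoanExpiration GetLoanExpiration // field name matches the type and the injected method name
  // ... other injectable functions
}

// real wiring (loanRepo.GetLoanExpiration is assumed to exist):
renew := RenewLoan{GetLoanExpiration: loanRepo.GetLoanExpiration}

// test wiring:
renew := RenewLoan{GetLoanExpiration: func(string) time.Time { return time.Now().AddDate(1, 0, 0) }}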

2021-07-30

Mock vs Fake and Classical Testing

The motivation of this article is to promote a less painful way of testing and structuring code, with fewer broken tests when changing the logic/implementation details (changing only the logic, not the input/output). This post recaps ~4 years of collected articles from other developers who hit the same pain points with the popular approach (mock) and concluded that Fake > Mock and Classical Test > Mock Test.

Mock Approach

Given a code like this:

type Obj struct {
  *sql.DB // or Provider
}
func (o *Obj) DoMultipleQuery(in InputStruct) (out OutputStruct, err error) {
  ... = o.DoSomeQuery()
  ... = o.DoOtherQuery()
}

I’ve seen code tested with the mock technique like this:

func TestObjDoMultipleQuery(t *testing.T) {
  testCases := []struct {
    name      string
    mockFunc  func(sqlmock.Sqlmock, *gomock.Controller)
    in        InputStruct
    out       OutputStruct
    shouldErr bool
  }{
    {
      name: `best case`,
      mockFunc: func(db sqlmock.Sqlmock, c *gomock.Controller) {
        db.ExpectExec(`UPDATE t1 SET bla = \?, foo = \?, yay = \? WHERE bar = \? LIMIT 1`).
          WillReturnResult(sqlmock.NewResult(1, 1))
        db.ExpectQuery(`SELECT a, b, c, d, bar, bla, yay FROM t1 WHERE bar = \? AND state IN \(1,2\)`).
          WithArgs(3).
          WillReturnRows(sqlmock.NewRows([]string{"id", "channel_name", "display_name", "color", "description", "active", "updated_at"}).
            AddRow("2", "bla2", "Bla2", "#0000", "bla bla", "1", "2021-05-18T15:04:05Z").
            AddRow("3", "wkwk", "WkWk", "#0000", "wkwk", "1", "2021-05-18T15:04:05Z"))
        ...
      },
      in:        InputStruct{...},
      out:       OutputStruct{...},
      shouldErr: false,
    },
    {
      ... // other cases
    },
  }
  for _, tc := range testCases {
    t.Run(tc.name, func(t *testing.T) {
      ... // prepare mock object
      o := Obj{mockProvider}
      out := o.DoMultipleQuery(tc.in)
      assert.Equal(t, tc.out, out)
    })
  }
}

This approach has pros and cons:

+ can check for typos (eg. if we add one character to the original query, this test would detect the error)

+ can check whether queries were properly called, or were expected to be called but weren't

+ a unit test will always be faster than an integration test

- tests implementation details (breaks easily when the logic changes)

- cannot check whether the SQL statements are correct

- possibly coupled implementation between data provider and business logic

- duplicated work between the original query and the regex version of the query; if we add a column, we must change both

For the last con, we can change it to something like this:

db.ExpectQuery(`SELECT.+FROM t1.+`).
WillReturnRows( ... )

This approach has pros and cons:

+ no duplicated work (since it's just a simplified regex of the full SQL statement)

+ can still check whether queries are properly called or not

+ a unit test will always be faster than an integration test

- tests implementation details (breaks easily when the logic changes)

- cannot detect typos / whether the query no longer matches (eg. if we accidentally add one character to the original query that causes an SQL error)

- cannot check correctness of the SQL statement

- possibly coupled implementation between data provider and business logic

We could also create a helper function to replace the original query to regex version:

func SqlToRegexSql(sql string) string {
  return regexp.QuoteMeta(sql) // escape regex special characters: ( ) ? . etc
}
db.ExpectQuery(SqlToRegexSql(ORIGINAL_QUERY)) ...

This approach has the same pros and cons as the previous one.

Fake Approach

Fake testing uses the classical approach: instead of checking implementation details (expected calls to the dependency), we use a compatible implementation as the dependency (eg. a slice/map of structs standing in for a database table/DataProvider).

Given a code like this:

type Obj struct {
  FooDataProvider // interface{ UpdateFoo, GetFoo, ... }
}
func (o *Obj) DoBusinessLogic(in *Input) (out *Output, err error) {
  ... = o.UpdateFoo(in.bla)
  ... = o.GetFoo(in.bla)
  ...
}

It’s better to make a fake data provider like this:

type FakeFooDataProvider struct {
  Rows map[int]FooRow // or slice
}
func (f *FakeFooDataProvider) UpdateFoo(a string) (...) { /* update Rows */ }
func (f *FakeFooDataProvider) GetFoo(a string) (...) { /* get one of Rows */ }
... // insert, delete, count, get batched/paged
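
As a rough, self-contained sketch of what those bodies could look like (the FooRow shape, the int key, the error values, and the method signatures below are assumptions made up for this example), the fake just mutates the in-memory map the same way the real provider would mutate the table:

type FooRow struct { // hypothetical row shape for this example
  Id  int
  Bla string
}

type FakeFooDataProvider struct {
  Rows   map[int]FooRow // must be initialized, eg. map[int]FooRow{}
  LastId int
}

func (f *FakeFooDataProvider) InsertFoo(row FooRow) (int, error) {
  f.LastId++
  row.Id = f.LastId
  f.Rows[row.Id] = row
  return row.Id, nil
}

func (f *FakeFooDataProvider) UpdateFoo(row FooRow) error {
  if _, ok := f.Rows[row.Id]; !ok {
    return sql.ErrNoRows // mirror the real provider's not-found behavior
  }
  f.Rows[row.Id] = row
  return nil
}

func (f *FakeFooDataProvider) GetOneFoo(id int) (FooRow, error) {
  row, ok := f.Rows[id]
  if !ok {
    return FooRow{}, sql.ErrNoRows
  }
  return row, nil
}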

So in the test, we can do something like this:

func TestObjDoBusinessLogic(t *testing.T) {
  o := Obj{&FakeFooDataProvider{}}
  testCases := []struct {
    name      string
    in        Input
    out       Output
    shouldErr bool
  }{
    {
      name:      `best case`,
      in:        Input{...},
      out:       Output{...},
      shouldErr: false,
    },
    {
      ...
    },
  }
  for _, tc := range testCases {
    t.Run(tc.name, func(t *testing.T) {
      out, err := o.DoBusinessLogic(&tc.in)
      assert.Equal(t, tc.shouldErr, err != nil)
      assert.Equal(t, tc.out, out)
    })
  }
}

This approach has pros and cons:

+ tests behavior (this input should give this output) instead of implementation details (doesn't break easily / no need to modify the test when the algorithm/logic changes)

+ a unit test will always be faster than an integration test

- cannot check whether queries were called, or were expected to be called but weren't

- double work in Golang (since there's no generics/templates yet, Go 1.18 must wait until Feb 2022): we must create a minimal fake implementation (map/slice) that simulates basic database table logic, or if the data provider is not separated per table (repository/entity pattern) we must create join logic too; a better approach in this case is to always create Insert, Update, Delete, GetOne, GetBatch per table instead of joining (see the sketch after this list)

+ no coupling between queries and business logic

- cannot check whether the queries in the data provider are correct (which should not be this unit's problem; it's the DataProvider integration/unit test's problem, not this unit's)
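
For illustration, a per-table provider could be kept down to a handful of operations (the names and signatures below are hypothetical, reusing the FooRow shape from the sketch above); with only these five methods per table, the fake stays trivial to write:

// one interface per table, no join methods
type FooDataProvider interface {
  InsertFoo(row FooRow) (id int, err error)
  UpdateFoo(row FooRow) error
  DeleteFoo(id int) error
  GetOneFoo(id int) (FooRow, error)
  GetBatchFoo(offset, limit int) ([]FooRow, error)
}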

Classical Approach for DataProvider

It’s better to test the queries using a classical (black box) integration test instead of a mock (white box) test, since mock and fake testing can only test the correctness of the business logic, not the logic of the data provider, which mostly depends on a 2nd party (the database). Fake testing is also considered a classical approach, since it tests input/output, not implementation details.

Using dockertest when testing locally and a gitlab-ci service when testing in the pipeline, it can look something like this:

var testDbConn *sql.DB

func TestMain(m *testing.M) { // called before the tests
  if env == `` || env == `development` { // spawn dockertest, get a connection to it
    prepareDb(func(db *sql.DB) int {
      testDbConn = db
      if db == nil {
        return 0
      }
      return m.Run()
    })
  } else {
    // connect to the gitlab-ci service
    var err error
    testDbConn, err = ... // log error
    os.Exit(m.Run())
  }
}
func TestDataProviderLogic(t *testing.T) {
  if testDbConn == nil {
    if env == `` || env == `development` || env == `test` {
      t.Fail()
    }
    return
  }
  f := FooDataProvider{testDbConn}
  f.InitTables()
  f.MigrateTables() // if testing migrations
  // test f.UpdateFoo, f.GetFoo, ...
}
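
The body of that test can then be a simple round trip against the real database (using the hypothetical Insert/Update/GetOne signatures sketched earlier): write a row, change it, read it back, and compare:

id, err := f.InsertFoo(FooRow{Bla: `x`})
assert.NoError(t, err)
err = f.UpdateFoo(FooRow{Id: id, Bla: `y`})
assert.NoError(t, err)
row, err := f.GetOneFoo(id)
assert.NoError(t, err)
assert.Equal(t, `y`, row.Bla)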

Where the prepareDb function can be something like this (taken from dockertest example):

func prepareDb(onReady func(db *sql.DB) int) {
  const dockerRepo = `yandex/clickhouse-server`
  const dockerVer = `latest`
  const chPort = `9000/tcp`
  const dbDriver = "clickhouse"
  const dbConnStr = "tcp://127.0.0.1:%s?debug=true"
  var err error
  if globalPool == nil {
    globalPool, err = dockertest.NewPool("")
    if err != nil {
      log.Printf("Could not connect to docker: %s\n", err)
      return
    }
  }
  resource, err := globalPool.Run(dockerRepo, dockerVer, []string{})
  if err != nil {
    log.Printf("Could not start resource: %s\n", err)
    return
  }
  var db *sql.DB
  if err := globalPool.Retry(func() error {
    var err error
    db, err = sql.Open(dbDriver, fmt.Sprintf(dbConnStr, resource.GetPort(chPort)))
    if err != nil {
      return err
    }
    return db.Ping()
  }); err != nil {
    log.Printf("Could not connect to docker: %s\n", err)
    return
  }
  code := onReady(db)
  if err := globalPool.Purge(resource); err != nil {
    log.Fatalf("Could not purge resource: %s", err)
  }
  os.Exit(code)
}

In the pipeline, the .gitlab-ci.yml file can be something like this for PostgreSQL (use a tmpfs/in-memory version for the database data directory to make it faster):

test:
  stage: test
  image: golang:1.16.4
  dependencies: []
  services:
    - postgres:13-alpine # TODO: create a tmpfs version
  tags:
    - cicd
  variables:
    ENV: test
    POSTGRES_DB: postgres
    POSTGRES_HOST: postgres
    POSTGRES_PASSWORD: postgres
    POSTGRES_PORT: "5432"
    POSTGRES_USER: postgres
  script:
    - source env.sample
    - go test
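
In the else branch of TestMain above, the connection can be built from those same service variables; a rough sketch, assuming the lib/pq driver (registered under the name "postgres"):

// build the DSN from the variables defined in .gitlab-ci.yml above
dsn := fmt.Sprintf("postgres://%s:%s@%s:%s/%s?sslmode=disable",
  os.Getenv("POSTGRES_USER"),
  os.Getenv("POSTGRES_PASSWORD"),
  os.Getenv("POSTGRES_HOST"),
  os.Getenv("POSTGRES_PORT"),
  os.Getenv("POSTGRES_DB"),
)
var err error
testDbConn, err = sql.Open("postgres", dsn)
if err != nil {
  log.Fatalf("could not connect to the postgres service: %s", err)
}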

The Dockerfile for a tmpfs database, if using MySQL, can be something like this:

FROM circleci/mysql:5.5

RUN echo '\n\
[mysqld]\n\
datadir = /dev/inmemory/mysql\n\
' >> /etc/mysql/my.cnf

Or for MongoDB:

FROM circleci/mongo:3.6.9

RUN sed -i '/exec "$@"/i mkdir \/dev\/inmemory\/mongo' /usr/local/bin/docker-entrypoint.sh

CMD ["mongod", "--nojournal", "--noprealloc", "--smallfiles", "--dbpath=/dev/inmemory/mongo"]

The pros and cons of this classical integration test approach:

+ high confidence that your SQL statements are correct, can detect typos (wrong column, wrong table, etc)

+ isolated test, not testing business logic but only data provider layer, also can test for schema migrations

- not a good approach for database with eventual consistency (eg. Clickhouse)

- since this is an integration test, it would be slower than a mock/fake unit test (1-3s+ total delay overhead from spawning docker)

Conclusion

  1. use mock for databases with eventual consistency

  2. prefer fake over mock for business logic correctness because it’s better for maintainability to test behavior (this input should give this output), instead of implementation details

  3. prefer classical testing over mock testing for checking data provider logic correctness

References

(aka confirmation bias :3)

https://martinfowler.com/articles/mocksArentStubs.html
https://stackoverflow.com/questions/1595166/why-is-it-so-bad-to-mock-classes
https://medium.com/javascript-scene/mocking-is-a-code-smell-944a70c90a6a
https://chemaclass.medium.com/to-mock-or-not-to-mock-af995072b22e
https://accu.org/journals/overload/23/127/balaam_2108/
https://news.ycombinator.com/item?id=7809402
https://philippe.bourgau.net/careless-mocking-considered-harmful/
https://debugged.it/blog/mockito-is-bad-for-your-code/
https://engineering.talkdesk.com/double-trouble-why-we-decided-against-mocking-498c915bbe1c
https://blog.thecodewhisperer.com/permalink/you-dont-hate-mocks-you-hate-side-effects
https://agilewarrior.wordpress.com/2015/04/18/classical-vs-mockist-testing/
https://www.slideshare.net/davidvoelkel/mockist-vs-classicists-tdd-57218553
https://www.thoughtworks.com/insights/blog/mockists-are-dead-long-live-classicists
https://stackoverflow.com/questions/184666/should-i-practice-mockist-or-classical-tdd
https://bencane.com/2020/06/15/dont-mock-a-db-use-docker-compose/
https://swizec.com/blog/what-i-learned-from-software-engineering-at-google/#stubs-and-mocks-make-bad-tests

https://www.freecodecamp.org/news/end-to-end-api-testing-with-docker/
https://medium.com/@june.pravin/mocking-is-not-practical-use-fakes-e30cc6eaaf4e 
https://www.c-sharpcorner.com/article/stub-vs-fake-vs-spy-vs-mock/