- using a dynamically-typed language (JS, Python, PHP, Ruby, etc.) just because it's the most popular language -- that's fine only for short/throwaway projects
- mocking -- there's a better way
- microservices without properly splitting the domain -- a modular monolith is better for small teams; introducing a network layer just to split a problem, without properly assessing it, will surely be a hassle in both the short and the long run
- overengineering -- e.g. adding a stack you don't need when the current stack suffices: dockerizing or kubernetesizing just because everyone is using it, or adding ElasticSearch just because it's a search use case when there are very few records to search and RPS is very low -- a more lightweight approach makes more sense there, e.g. TypeSense, MeiliSearch, or even the database's built-in FTS for a lower-RPS target or a simpler search feature.
- premature "clean architecture" -- i.e. over-layering things you will almost never replace -- dependency tracking is better
- unevaluated standards -- sticking with a standard just because it's a standard is like being brainwashed/peer-pressured by dead people's will (tradition) without rethinking whether it still makes sense to follow for this use case
- not writing an SRS (Software Requirement Specification: roles, who can do what action/API) and an SDS (Software Design Specification: which datastore this action/API mutates/commands or reads/queries, or which 3rd party it hits) -- these help a new team member get onboarded to the project really fast
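The "roles / who can do what action" part of an SRS can also live in code as a simple lookup table. A minimal sketch in Go; every name here (`allowedActions`, `CanDo`, the role and action strings) is a hypothetical example, not from any real spec:

```go
package main

import "fmt"

// allowedActions maps role -> action -> allowed, the kind of table an SRS enumerates.
// Roles and actions below are made-up examples.
var allowedActions = map[string]map[string]bool{
	"admin":  {"createUser": true, "deleteUser": true, "viewReport": true},
	"viewer": {"viewReport": true},
}

// CanDo answers "can this role do this action?" straight from the table.
// Missing roles or actions fall through to false (zero value).
func CanDo(role, action string) bool {
	return allowedActions[role][action]
}

func main() {
	fmt.Println(CanDo("admin", "deleteUser"))  // true
	fmt.Println(CanDo("viewer", "deleteUser")) // false
}
```

Keeping the table as data (rather than scattered `if` checks) means the SRS document and the code can be diffed against each other when onboarding someone new.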
type Bla interface {
Get(string) string
Set(string)
}
type RealBla struct {} // wraps a 3rd party/client library
func (*RealBla) Get(string) string { return `` }
func (*RealBla) Set(string) { }
type FakeBla struct {} // our fake/stub/mock implementation
func (*FakeBla) Get(string) string { return `` }
func (*FakeBla) Set(string) { }
// usage
func TestBla(t *testing.T) {
var b Bla = &FakeBla{...} // pointer, since the methods have pointer receivers
// usually as data member of other method that depends on RealBla
b.Set(...)
x := b.Get(...)
}
func main() {
var b Bla = &RealBla{...}
b.Set(...)
x := b.Get(...)
}
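To make the sketch above compile, here's a self-contained version; the in-memory behavior I gave `FakeBla` (storing the last `Set` value) is my assumption, since the original leaves the bodies empty:

```go
package main

import "fmt"

type Bla interface {
	Get(k string) string
	Set(v string)
}

// RealBla would wrap the 3rd-party client; stubbed out in this sketch.
type RealBla struct{}

func (*RealBla) Get(k string) string { return "" }
func (*RealBla) Set(v string)        {}

// FakeBla is the in-memory fake a test would use instead of RealBla.
type FakeBla struct{ val string }

func (f *FakeBla) Get(k string) string { return f.val }
func (f *FakeBla) Set(v string)        { f.val = v }

func main() {
	var b Bla = &FakeBla{} // a test would inject this where RealBla normally goes
	b.Set("hello")
	fmt.Println(b.Get("any")) // prints "hello"
}
```

Note the pointer receivers: only `*FakeBla` and `*RealBla` satisfy `Bla`, so the fake must be injected as `&FakeBla{}`.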
The problem with this approach: it's harder to jump between declaration and implementation (usually it's RealBla we want, not FakeBla), and how often do we switch implementations anyway? YAGNI (vs overengineering). It's better for our cognition/understanding to keep both coupled. This violates the single responsibility principle from SOLID, but it's easier to reason about, since the real and fake implementations are in the same file and near each other, so we can catch bugs easily without having to switch files; something like this:
type BlaWrapper struct {
// declare/use the 3rd party client here
UseFake bool
// declare the fake/in-mem state here
}
func (b *BlaWrapper) Get(s string) string {
if b.UseFake {
// do with fake
return ``
}
// do with real 3rd party
return ``
}
func (b *BlaWrapper) Set(s string) {
if b.UseFake {
// do with fake
return
}
// do with real 3rd party
}
// usage
func TestBla(t *testing.T) {
var b = BlaWrapper{UseFake: true, ...}
b.Set(...)
x := b.Get(...)
}
func main() {
var b = BlaWrapper{...}
b.Set(...)
x := b.Get(...)
}
By doing this, we can easily compare our fake and real implementations (you can easily spot a bug when your fake implementation differs too much from the real one), and we can still jump around simply by ctrl+clicking the function in the IDE, since there's only one implementation. The only pro I can see in the interface-based approach is when you are creating a library for 3rd parties (e.g. io.Writer, io.Reader, etc.) and you have more than two implementations (DRY is only good when there are more than two), but since you're only making this for an internal project that can easily be refactored within the project itself, it doesn't make sense to abuse interfaces. See more tips from this video: Go Worst Practice.
After all that's been said, I wouldn't use this kind of thing (the UseFake property) for testing databases (2nd party), because I prefer integration (contract-based) testing over unit testing, since I'm using a fast database anyway (not one of the slow but popular RDBMSes).