// import the pprof middleware
"github.com/gofiber/fiber/v2/middleware/pprof"

// register it on the app
app.Use(pprof.New())
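For context, a minimal runnable sketch of a Fiber app wired up this way could look like the following (the port and the hello handler are my placeholders, not from the original setup):

package main

import (
	"log"

	"github.com/gofiber/fiber/v2"
	"github.com/gofiber/fiber/v2/middleware/pprof"
)

func main() {
	app := fiber.New()

	// mounts the /debug/pprof/ handlers on this app
	app.Use(pprof.New())

	// placeholder endpoint to generate some traffic against
	app.Get("/", func(c *fiber.Ctx) error {
		return c.SendString("hello")
	})

	log.Fatal(app.Listen(":3000"))
}

Once the server is running, the profiles can also be fetched without the browser, e.g. curl -o /tmp/profile "http://localhost:3000/debug/pprof/profile?seconds=10" (the port and duration here are my assumptions), since the middleware wraps the standard net/http/pprof handlers.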
The middleware would create routes under /debug/pprof that you can use: just start the server, then open that path. To profile the CPU or check the heap, you just need to click the profile or heap link; it would wait for around 10 seconds, and while it waits you must hit your other endpoints to generate traffic/function calls. After the 10 seconds, it would show a download dialog to save your CPU profile or heap profile. From that file, you can run a command similar to gops, for example if you want to generate an SVG or a web view that shows your profiling:
pprof -web /tmp/profile # or
pprof -svg /tmp/profile # <-- file that you just downloaded
It would generate something like this:
So you can find out which function took most of the CPU time (or, for a heap profile, which function allocates the most memory). In my case the bottleneck was the default built-in pretty logger: it limited the server to ~9K rps at concurrency 255 on a database-write benchmark, but removing the built-in logging and replacing it with zerolog brought it to ~57K rps on the same benchmark.
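For reference, a hedged sketch of what a zerolog-based replacement for the built-in logger middleware could look like (the field names and setup are my assumptions, not the exact code used in the benchmark):

package main

import (
	"os"
	"time"

	"github.com/gofiber/fiber/v2"
	"github.com/rs/zerolog"
)

var zlog = zerolog.New(os.Stdout).With().Timestamp().Logger()

// logRequests emits one structured log line per request instead of the pretty logger output
func logRequests(c *fiber.Ctx) error {
	start := time.Now()
	err := c.Next()
	zlog.Info().
		Str("method", c.Method()).
		Str("path", c.Path()).
		Int("status", c.Response().StatusCode()).
		Dur("latency", time.Since(start)).
		Msg("request")
	return err
}

func main() {
	app := fiber.New()
	app.Use(logRequests) // instead of the default logger middleware
	// ... register routes here ...
	if err := app.Listen(":3000"); err != nil {
		zlog.Fatal().Err(err).Msg("listen")
	}
}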