So how can we gain the performance benefits of concurrent code without spawning a flood of goroutines? One nice way to limit concurrency in Go is to use a buffered channel as a semaphore. Buffered channels in Go give us an easy way to block execution based on the number of concurrent units of work we want to allow.
We create a buffered channel with a capacity of 100, and when we spawn goroutines to perform the asynchronous computation, we fall back to the synchronous version of MergeSort if 100 workers (goroutines) are already busy with the computation:
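The original code listing is not reproduced here, so the sketch below illustrates the pattern under stated assumptions: the function names (mergeSort, concurrentMergeSort, merge) and the integer slice type are illustrative, and the semaphore capacity of 100 matches the limit described above. The key idea is a non-blocking send on the buffered channel: if a slot is free, the left half is sorted in a new goroutine; otherwise we stay synchronous.

package main

import (
	"fmt"
	"sync"
)

// sem is a buffered channel used as a counting semaphore.
// Its capacity bounds how many goroutines may be sorting at once.
var sem = make(chan struct{}, 100)

// merge combines two sorted slices into one sorted slice.
func merge(left, right []int) []int {
	result := make([]int, 0, len(left)+len(right))
	i, j := 0, 0
	for i < len(left) && j < len(right) {
		if left[i] <= right[j] {
			result = append(result, left[i])
			i++
		} else {
			result = append(result, right[j])
			j++
		}
	}
	result = append(result, left[i:]...)
	result = append(result, right[j:]...)
	return result
}

// mergeSort is the plain synchronous version.
func mergeSort(data []int) []int {
	if len(data) <= 1 {
		return data
	}
	mid := len(data) / 2
	return merge(mergeSort(data[:mid]), mergeSort(data[mid:]))
}

// concurrentMergeSort sorts the left half in a new goroutine only if a
// semaphore slot is free; otherwise it falls back to the synchronous sort.
func concurrentMergeSort(data []int) []int {
	if len(data) <= 1 {
		return data
	}
	mid := len(data) / 2

	var left []int
	var wg sync.WaitGroup

	select {
	case sem <- struct{}{}: // a slot is free: sort the left half concurrently
		wg.Add(1)
		go func() {
			defer wg.Done()
			defer func() { <-sem }() // release the slot when done
			left = concurrentMergeSort(data[:mid])
		}()
	default: // all 100 slots are busy: stay synchronous
		left = mergeSort(data[:mid])
	}

	right := concurrentMergeSort(data[mid:])
	wg.Wait()
	return merge(left, right)
}

func main() {
	fmt.Println(concurrentMergeSort([]int{5, 2, 9, 1, 7, 3}))
}

Using select with a default case keeps the acquire non-blocking, so the sort never stalls waiting for a slot; it simply degrades to the synchronous path when the limit is reached.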
Source: https://making.pusher.com/golangs-real-time-gc-in-theory-and-practice/

Conclusion
The key takeaway from this investigation is that GCs are either optimized for lower latency or higher throughput. They might also perform better or worse at these depending on the heap usage of your program. (Are there a lot of objects? Do they have long or short lifetimes?)
It is important to understand the underlying GC algorithm in order to decide whether it is appropriate for your use-case. It's also important to test the GC implementation in practice, using a benchmark that exhibits the same heap usage as the program you intend to write; this checks the GC's effectiveness for your workload. As we saw, the Go implementation is not without faults, but in our case the issues were acceptable. I would love to see the same benchmark in more languages if you would like to contribute :)
Despite some issues, Go’s GC performs well compared to other GCed languages. The Go team have been improving the latency, and continue to do so. We’re happy with Go’s GC, in theory and practice.