My Journey with gRPC: From Skepticism to Enthusiasm
As a developer, I've known about gRPC for quite some time. Its reputation for high performance and efficiency in microservices communication always intrigued me, but until recently, I had never practically implemented it. In one of my latest projects, I built a microservices-based system in which gRPC became invaluable. In this article, I'll share my experience transitioning from skepticism to enthusiasm, the challenges I faced, and why I now see gRPC as a powerful tool for "internal microservices communication."
The Context: A Microservices System with a Database-Centric Service
My project involved designing a microservices architecture with multiple services built using .NET and NextJs. One of these services was a dedicated database microservice, acting as a data provider for the others and not intended to be exposed to the outside world. Initially, I considered writing it as just one more class library project in .NET to be consumed by the other projects. That would have been the most performant way of implementing it, but it would make the overall system tightly coupled and less scalable. Moreover, this service needed to be used by a NextJs application, which would mean writing a wrapper API just to expose it to NextJs. Each of my services had a dedicated job assigned to it, and baking in this dependency would defy their purpose. Hence, I started looking for a different approach, where I could build this service as an independent service that other services could consume. Now, the de facto approach, at least for me, was REST. However, I quickly realised that REST might introduce performance bottlenecks, particularly due to network latency. The database service would be queried frequently by other services, and the overhead of HTTP/1.1 and JSON serialization in REST could slow things down.
This led me to explore gRPC, a high-performance RPC framework that leverages HTTP/2 and Protocol Buffers (Protobuf) for efficient communication. I had heard about its benefits--low latency, compact binary payloads, and the ability to call remote endpoints as if they were local methods across different technologies--but I was hesitant. The setup seemed complex, especially for a mixed-tech stack involving .NET and NextJs. Was it worth the effort? I decided to dive in and find out.
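To make the "compact binary payloads" point concrete, here's a tiny TypeScript sketch. It hand-rolls the varint encoding Protobuf uses on the wire for a single int32 field (this is an illustration, not a real Protobuf library) and compares it to the equivalent JSON:

```typescript
// Protobuf's base-128 varint: 7 payload bits per byte,
// high bit set on every byte except the last.
function encodeVarint(n: number): number[] {
  const out: number[] = [];
  do {
    let byte = n & 0x7f;
    n >>>= 7;
    if (n > 0) byte |= 0x80; // continuation bit
    out.push(byte);
  } while (n > 0);
  return out;
}

// Encoding `message User { int32 id = 1; }` with id = 42:
// field tag = (fieldNumber << 3) | wireType = (1 << 3) | 0 = 0x08.
const protoBytes = [0x08, ...encodeVarint(42)]; // [0x08, 0x2A]: 2 bytes
const jsonBytes = new TextEncoder().encode(JSON.stringify({ id: 42 })); // {"id":42}: 9 bytes
console.log(protoBytes.length, jsonBytes.length);
```

Two bytes versus nine for the same payload; multiply that across chatty inter-service traffic and the serialization overhead of JSON starts to matter.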
The Initial Hurdles: Protobuf and Cross-Platform Setup
My first impression of gRPC wasn't exactly love at first sight. The setup process felt more complicated than it needed to be, especially compared to the simplicity of spinning up a REST API. The biggest hurdle was configuring Protocol Buffers, the schema definition language used by gRPC to define service contracts and messages.
Here's what made the initial experience daunting:
- Protobuf Configuration: Writing .proto files to define services and messages required a shift in mindset. Unlike REST, where you can quickly define endpoints with minimal setup, Protobuf demands upfront schema design.
- Cross-Platform Integration: My project used .NET for backend services and NextJs for the frontend. Generating gRPC client and server code for both environments seemed repetitive and challenging.
- Learning Curve: The gRPC ecosystem, with its concepts like unary calls, streaming, and service definitions, felt overwhelming at first, especially for someone used to REST's simplicity.
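To give a feel for the upfront schema design, here's roughly what such a .proto contract looks like. The service and message names below are purely illustrative, not from my actual project:

```proto
syntax = "proto3";

option csharp_namespace = "DataService.Grpc";

// A hypothetical contract for the database microservice.
service UserStore {
  // Unary call: one request in, one reply out.
  rpc GetUser (UserRequest) returns (UserReply);
}

message UserRequest {
  int32 id = 1;   // each field carries a unique tag number
}

message UserReply {
  int32 id = 1;
  string name = 2;
}
```

From this one file, tooling generates the server stubs for .NET and clients for other languages, which is what eventually makes the schema-first mindset pay off.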
At this stage, I questioned whether gRPC was worth the effort. The setup felt like overengineering for a problem that REST could handle, albeit less efficiently. However, I pushed through, motivated by the promise of performance gains.
The Turning Point: Seeing gRPC in Action
Once I got past the initial setup and had my service communicating with other services via gRPC, everything clicked. The experience of using gRPC was nothing short of transformative. Here's what stood out:
- Remote Calls Felt Local: I had heard of gRPC's promise of making remote procedure calls (RPCs) feel like local method calls, but it never really sounded like a useful feature. Now, after implementing and using it, it all makes sense. The ability to call remote code as if it were just another function is very intuitive, and it kept me wondering why REST isn't implemented this way (obviously it couldn't be -- hence RPC).
- Type Safety and Contract Clarity: Protobuf's strongly typed schemas ensured that both the client and server adhered to the same contract. This reduced errors and made refactoring easier compared to REST's often loosely defined JSON payloads.
- Streaming Capabilities: Although my initial use case didn't require it, gRPC's support for bidirectional streaming opened up exciting possibilities for future enhancements and projects.
- Performance Gains: At this point, I haven't validated the actual performance difference myself; the first two reasons alone had already sold me, so I wasn't incentivised enough to benchmark. But gRPC has generally proven to be far more performant than REST, so I'll take that for now.
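That "feels local" quality comes from generated client stubs. The sketch below imitates the shape of such a stub in TypeScript, with an in-memory transport standing in for the real HTTP/2 channel so it runs without a network. All the names here (UserServiceClient, getUser, Transport) are illustrative assumptions, not actual generated code:

```typescript
// Hypothetical shapes mirroring what code generation produces for
// `rpc GetUser (UserRequest) returns (UserReply);`
interface UserRequest { id: number; }
interface UserReply { id: number; name: string; }

// The real stub hides serialization and HTTP/2 behind a method call;
// here the transport is pluggable so the sketch is self-contained.
type Transport = (method: string, payload: unknown) => Promise<unknown>;

class UserServiceClient {
  constructor(private transport: Transport) {}
  // To the caller, this reads like any other local async function.
  getUser(req: UserRequest): Promise<UserReply> {
    return this.transport("GetUser", req) as Promise<UserReply>;
  }
}

// In-memory stand-in for the remote service:
const fakeTransport: Transport = async (_method, payload) => {
  const { id } = payload as UserRequest;
  return { id, name: `user-${id}` };
};

const client = new UserServiceClient(fakeTransport);
client.getUser({ id: 7 }).then((reply) => console.log(reply.name)); // "user-7"
```

The caller never sees URLs, verbs, or serialization -- just a typed method. That is the abstraction that won me over.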
The moment I saw these benefits in action, my skepticism faded. gRPC wasn't just a buzzword -- it was delivering tangible value to my project.
Where gRPC Shines
For now, I'm convinced that gRPC is an excellent choice for communication within a cluster or local network, as in my microservices setup. The low latency and high throughput make it ideal for scenarios where services frequently exchange data. The strict schema nature of Protobuf works well when you control both the client and server, ensuring consistency across services.
However, I'm not yet sold on using gRPC for the typical client-server architectures used in web development, for the following reasons:
- Browser Support: gRPC's reliance on HTTP/2 and Protobuf makes it less straightforward for browser-based clients. While there are various libraries out there to bridge this gap, they add complexity compared to REST's universal compatibility.
- Public APIs: For external-facing APIs consumed by third parties, REST's simplicity and human-readable JSON payloads are often more practical. gRPC's binary format and strict contracts might be overkill in such cases.
That said, I'm open to exploring gRPC for internet-facing use cases when the right project comes along (I am planning to delve into tRPC soon). My perspective might evolve once I encounter a real-world scenario that demands gRPC's strengths in a typical client-server context.
Lessons Learned and Final Thoughts
My journey with gRPC taught me a few key lessons:
- Don't Judge by the Setup: The initial complexity of gRPC can be intimidating, but the effort pays off once you experience its benefits. Invest time in understanding Protobuf (still in progress) and the gRPC ecosystem -- it's worth it.
- Choose the Right Tool for the Job: gRPC isn't a one-size-fits-all solution. It excels at high-performance internal communication, but may not always be the best fit for public APIs or browser-based applications.
- Embrace the Learning Curve: Like any new technology, gRPC requires an upfront investment in learning. But once you get the hang of it, it becomes a powerful addition to your toolkit.
In conclusion, implementing gRPC in my microservices project was a rewarding experience. It transformed how I think about service-to-service communication and opened my eyes to the power of RPC in modern architectures. If you're building a microservices system and need efficient, low-latency communication, I highly recommend giving gRPC a try. It might feel daunting at first, but the clean abstractions and performance gains make it a worthwhile investment.
-- Kalyan

Note: This article was written with AI assistance.