gRPC - a modern framework for microservices communication

High-performance remote procedure call framework

Companies everywhere are realizing the benefits of building a microservices-based architecture. From lower costs to better performance to less downtime to the ability to scale, microservices provide countless benefits relative to monolithic designs. But when a monolith is broken down into multiple microservices, one big question remains: how do these services talk to each other? In this article I'm going to go over gRPC - how it works, how to use it for efficient microservices communication, and the pros and cons of using gRPC in your microservices architecture.

REST, the default choice

Before we get into gRPC, let’s talk about the alternatives. The default choice is often REST, which stands for REpresentational State Transfer. REST is an architectural style originally described by Roy Fielding in his doctoral dissertation in 2000. In the REST architectural style, data and functionality are considered resources and are accessed using Uniform Resource Identifiers (URIs). The resources are acted upon using well-defined operations known as request methods. The REST architectural style constrains an architecture to a client/server model and a stateless communication protocol, typically HTTP. To get a better understanding of the basics of REST, go here: https://en.wikipedia.org/wiki/Representational_state_transfer.

REST is great...

  • Easy to understand (text protocol)
  • Web Infrastructure already built on top of HTTP
  • Loose coupling between client and server
  • Great tooling for testing, inspection, and modification
  • High-quality HTTP implementations in every language

But REST has limitations...

  • No single standard for an API contract, so developers end up writing their own client libraries
  • Streaming is difficult, and in some languages impossible
  • Operations are difficult to model
  • REST’s preferred way of structuring data, JSON, is a textual representation that is not optimal for networks
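To make the last point concrete, here is a stdlib-only sketch comparing a JSON encoding of a record with a fixed binary layout; the record and layout are illustrative, with `struct` standing in for a binary serializer such as Protocol Buffers:

```python
import json
import struct

# A record two services might exchange: a user id, a score, and a flag.
record = {"user_id": 42, "score": 3.5, "active": True}

# Textual JSON encoding, as a REST API would typically send it.
json_bytes = json.dumps(record).encode("utf-8")

# A fixed binary layout: 4-byte int, 8-byte double, 1-byte bool.
binary_bytes = struct.pack("<id?", record["user_id"], record["score"], record["active"])

print(len(json_bytes), len(binary_bytes))  # the binary form is roughly 3x smaller
```

The field names also travel with every JSON message, while a binary schema carries them out of band, which is part of why the gap grows with message size.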

So does gRPC fix all these issues?

Enter gRPC


There is a place for RPC-style communication in the world of backend-to-backend server communication, but it can only be great if it is interoperable, simple to use, and efficient. gRPC was specifically designed to meet that bar: it was built from the ground up to automatically generate idiomatic client and server stubs.

What is gRPC?

gRPC is the modern, lightweight communication protocol from Google: a high-performance, open source, universal RPC framework that can run in any environment.

It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking, and authentication. The g in gRPC does not stand for Google; it is a recursive acronym for gRPC Remote Procedure Call. gRPC originated at Google in 2015. It grew out of an internal project called Stubby, Google's in-house RPC framework, which served Google services only. Stubby has since been rebranded as gRPC and is now a free open source project with an open spec and roadmap.

Google designed gRPC to be performant and as efficient as possible. The structure of the protocol itself is lean, with minimal processing occurring at the marshalling and unmarshalling stage. Because of this, gRPC is inherently efficient, made only better by building upon HTTP/2, which enables highly effective use of network resources. What you end up with is a lean platform using a lean transport system to deliver lean bits of code: an overall decrease in latency and size. gRPC was designed from the ground up to not only have an effective built-in authentication system, but to support a wide array of authentication solutions. The mechanism baked into the protocol, SSL/TLS, is supported with and without Google’s token-based systems for authentication.

Protocol buffers or proto

gRPC’s secret sauce lies in the way serialization is handled. It is based on protocol buffers, an open source mechanism for serializing structured data that is language and platform neutral.

  • Efficient: Proto definition files are verbose and descriptive, but the serialized messages are smaller and faster to process than their textual equivalents, which translates into high performance.
  • Machine readable: Protocol buffers are a binary, machine-readable format, designed for exchanging messages between services rather than for display in browsers.
  • Generators: With the protoc compiler, Protocol buffer definitions are compiled into source code, along with runtime libraries, for your choice of programming language. This makes serialization and deserialization easier, with no need for hand parsing.
  • Supports types: Unlike JSON, we can specify field types and add validations for them in the .proto file.
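To make this concrete, here is a minimal .proto sketch; the Greeter service and its message fields are hypothetical examples, not taken from any real API:

```protobuf
// greeter.proto - a hypothetical service definition
syntax = "proto3";

package demo;

// Typed message fields; the field numbers identify fields on the wire.
message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}

// The service exposes a single unary RPC.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}
```

Running protoc over this file produces the client stubs and server skeletons in each target language.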

Getting started with gRPC

Below are the basic steps a developer should follow to get started with gRPC:

  1. Define the service definition - a high-level description of the exposed services and the methods on those services - in a Protocol Buffer (.proto) file.
  2. Generate the server and client side stubs from the .proto file.
  3. Implement the server in one of the supported languages.
  4. Implement the client that invokes the service through the stub.
  5. Run the server and the client.

At a very high level, let's imagine you have a Java service. You generate a gRPC service from the service definition and you have clients talking to that service (you can have a mobile client or a desktop client talking to the service here) through the stubs you’ve generated and all the connection details are abstracted. Then, your Java service could be talking to a Python service using a stub. In fact, it could happen that your Python service talks to two other services, let’s say a Go service and C++ service, again through stubs.

[Diagram: clients and services communicating through generated gRPC stubs]

The beauty of this model is that communication between client and server, and between microservices, all happens through stubs that gRPC handles. Another benefit is the multi-language support. If you have different teams working in different languages - such as Python, Go, and C++ from our example - it's not a problem. All they have to agree on is a service definition contract.

How RPC works

[Diagram: the flow of a remote procedure call between client and server]

Following the numbered arrows in the diagram above:

  1. A client application makes a local procedure call to the client stub containing the parameters to be passed on to the server.
  2. The client stub serializes the parameters through a process called marshalling.
  3. The client stub forwards the request to the client run-time library.
  4. The client run-time library transmits the request across the network to the server run-time library.
  5. The server run-time library receives the request and calls the server stub procedure.
  6. The server stub unmarshalls (unpacks) the passed parameters.
  7. The server stub calls the actual procedure.
  8. The server stub sends back a response to the client-stub in the same fashion.
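The marshalling (step 2) and unmarshalling (step 6) stages can be sketched with a stdlib-only Python snippet; the stub functions and the JSON encoding here are illustrative stand-ins for gRPC's generated stubs and the protobuf wire format:

```python
import json

# Hypothetical client stub: marshals the method name and parameters
# into bytes the client run-time library can put on the wire (step 2).
def marshal_call(method, *params):
    return json.dumps({"method": method, "params": list(params)}).encode("utf-8")

# Hypothetical server stub: unmarshals the bytes back into a call (step 6).
def unmarshal_call(payload):
    call = json.loads(payload.decode("utf-8"))
    return call["method"], call["params"]

# The "actual procedure" living on the server (step 7).
procedures = {"add": lambda a, b: a + b}

payload = marshal_call("add", 2, 3)
method, params = unmarshal_call(payload)
result = procedures[method](*params)
print(result)  # → 5
```

The response travels back through the same marshal/unmarshal pair in the opposite direction (step 8).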

Connection Options

  • Unary RPC: The client sends a single request to the server and gets a single response back, just like a normal function call.
  • Server Streaming RPC: The client sends a single request to the server and gets a stream to read a sequence of messages back. The client reads from the returned stream until there are no more messages.
  • Client Streaming RPC: The client sends a sequence of messages to the server using a provided stream. Once the client has finished writing the messages, it waits for the server to read them and return its response.
  • Bidirectional Streaming RPC: Both sides send a sequence of messages using a read-write stream. The two streams operate independently. The order of messages in each stream is preserved.
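In gRPC's Python bindings, streams surface as iterators. The server-streaming and client-streaming shapes can be sketched with plain generators; the handlers below are hypothetical and involve no network:

```python
# Server-streaming shape: one request in, an iterator of responses out.
def list_primes(limit):
    """Yield a stream of responses for a single request."""
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n

# Client-streaming shape: an iterator of requests in, one response out.
def sum_readings(request_iterator):
    """Consume the whole client stream, then return a single response."""
    return sum(request_iterator)

server_stream = list(list_primes(10))   # → [2, 3, 5, 7]
total = sum_readings(iter([1, 2, 3]))   # → 6
```

A bidirectional RPC combines both shapes: the handler receives a request iterator and yields responses, with the two streams advancing independently.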

Multi-Language Support

[Table: languages and platforms supported by gRPC]

Advantages of gRPC

  1. Functional rather than resource-based design: Services with remote procedure calls (RPC), along with message formats, are defined using Protobuf. Rather than being required to define a resource and apply a corresponding lifecycle, those in favor of gRPC tend to prefer a more functional design approach to their APIs.
  2. Reduced network latency: gRPC builds on HTTP/2, which allows for faster and long-lived connections, reducing the time for setup/teardown common for individual HTTP/1.x requests.
  3. Infrastructure support: Those selecting gRPC are often using Kubernetes on Google Kubernetes Engine (GKE), which provides built-in proxy and load balancing support.
  4. Bi-directional support: gRPC takes advantage of HTTP/2’s bi-directional communication support, removing the need to separately support request/response alongside websockets, SSE, or other push-based approaches on top of HTTP/1.
  5. Code generation: Considered a first-class part of the gRPC methodology, code generation is used on top of the Protobuf format to define both message formats and service endpoints. Code generation is then able to produce server-side skeletons and client-side network stubs to shorten the development cycle.
  6. Documentation: Since gRPC started within Google, documentation is extensive on the gRPC website. The developer experience is excellent, making developers productive quickly in whatever language(s) they use.

Disadvantages of gRPC

  1. Lack of consistent error handling: While gRPC describes the concept of a status code and message, there is no clear and consistent way to properly catch the errors across programming languages. As such, someone has built a guide to explain how error handling should work across a variety of languages (http://avi.im/grpc-errors/). Also, keep in mind that the response message is a string rather than a message format, so clients won’t be given additional details upon a failure.
  2. Lack of developer tooling: While many tools we use today are designed for HTTP/1, moving to HTTP/2 and Protobuf requires a new set of tools. Some tools are starting to emerge, but barring a few projects, most have not gained traction yet. As such, most tools that only support HTTP/1 are not useful in the gRPC development process.
  3. Lack of infrastructure and monitoring support outside of GKE: Developers are at a disadvantage unless they are using GKE infrastructure. It will be necessary to establish a reverse proxy to log incoming requests, monitor usage analytics, enforce security rules, and perform internal routing to services.
  4. Limited insight into common practices: While well documented, there are limited stories around what has been working, what workarounds are required, and how to support gRPC in production. This will likely be resolved with time, but currently it is early and common practices and anti-patterns have yet to be established.
  5. Lack of edge caching: While HTTP supports intermediaries for edge caching, gRPC calls use the POST method, which is neither safe nor idempotent. As such, responses cannot be cached through intermediaries as is the case with REST-based APIs that use the GET verb for resource representation requests. Additionally, the gRPC specification makes no provision, nor indicates that they wish to do so, for cache semantics between client and server.
  6. Lack of support for additional content types: Since gRPC depends upon Protobuf, other content types are not supported out-of-the-box as with standard HTTP + REST-based APIs. Similarly, image upload support is not supported. For those teams that require support, REST-based APIs are still the best option.

Where to use gRPC

  1. Microservices: gRPC shines as a way to connect servers in service-oriented environments. One of the original problems its predecessor, Stubby, aimed to solve was wiring together microservices. It is well-suited for a wide variety of arenas: from medium and large enterprise systems all the way to “web-scale” eCommerce and SaaS offerings.
  2. Client-server applications: gRPC works just as well in client-server applications, where the client application runs on desktop or mobile devices. It uses HTTP/2, which improves on HTTP 1.1 in both latency and network utilization.
  3. Integrations and APIs: gRPC is also a way to offer APIs over the internet for integrating applications with services from third-party providers. As an example, many of Google’s Cloud APIs are exposed via gRPC.

Conclusion

To conclude, REST has been around for 20 years and gRPC is not a replacement for REST. gRPC APIs can offer huge performance improvements and reduced response time as compared to REST APIs, but which approach to choose boils down to what fits your particular use case.

Many major companies such as Square, Lyft and Netflix have adopted gRPC as a means for microservices communication. If your use case falls into one of the categories described above it’s definitely worth looking into.

Photo created by bannafarsai - www.freepik.com


Harish Kathpalia, Lead Software Engineer

Lead Software Engineer at Capital One with a proficient software development background. Passionate about microservices based architecture, cloud computing, technology trends and improving the observability of applications.
