I got a golden ticket: What I learned about APIs in my first year at Google

A year ago, I found a job in one of the world’s biggest API companies. You probably know us for Search, Mail, Maps, or any of the dozens of apps that we have in the Android and iOS app stores. Those apps range all the way from delivering YouTube, one of the largest sources of worldwide internet traffic, to fun VR demonstrations about Abbey Road and Bohemian Rhapsody. We have a lot of apps, and another thing that we do a lot is build data centers.


The first picture above shows a Google data center in Mayes County, Oklahoma. The second is in Oregon in The Dalles, about 100 miles east of Portland. The third is in North Carolina, and the fourth is in Belgium. They’re all over the world, and you’ve probably heard that the internet runs on pipes. Well, here are the pipes.


We have lots of pipes in our data centers. For scale, there’s a bicycle in that photo.

You probably know that those bikes are for getting around our sites and that the pipes are full of water. We use the water to cool our computers, and we have a lot of computers: racks and racks and rows and rows of them. Every service that we provide at Google runs on these computers: every search that gets made, every Gmail message that gets sent, every YouTube video that gets played, and every time you or I make a wrong turn and Maps recalculates our driving directions, it happens here in one of these places.


These photos are from Council Bluffs, Iowa, and these rooms, I’m told, are really big.

These machines talk to each other a lot as these services work. None of these services — Maps, Gmail, Search — runs on a single computer. Instead, they run as distributed systems that are broken into a lot of pieces, and the pieces all talk to each other. They do that with APIs. The most recent number I’ve seen that we can share publicly is that inside Google data centers, 10 billion API calls are made every second. Imagine those log files!

I didn’t have anything to do with that. Before I joined Google, I was a one-person app development company. I had left a startup many years earlier and spent the years since making apps that I thought were interesting. Like most app developers, I also made a living making apps for other people. I made some apps for a real estate company, a device management company, a car tech company, and I worked on lots of small startup projects. I also started a meetup group for iOS developers that has met over 100 times, and put together a couple of conferences aimed at independent iOS developers who, like me, knew a lot about making apps but weren’t always good at getting paid for that. Then in January of 2016, I joined the Google Cloud Platform team in Mountain View.

For me, this was like getting the keys to a candy factory, because as I made apps, I would always find that in order to be interesting, those apps needed to have something behind them. They needed to have some kind of online service; they needed APIs. I had become a big fan of Google’s cloud platform and cloud services, beginning with Google App Engine, because I could use these things to quickly build the APIs that I needed. And it’s not just me; app developers know that we need to have powerful backend services and APIs to make anything that’s interesting.

I knew some things about APIs before I came to Google. As an app developer, I knew that you’ve got to have APIs. I also knew that when someone tells you that they want you to build an app and they’ve got a “team” (usually just a person) working on the API, what they really mean is that the first thing you should do is write some tests to see if the API works or not… because usually it’s not quite ready.

This is true even if you’re a mobile developer at a big company. I’ve heard of mobile teams whose apps didn’t work because they read API descriptions strictly, relying on details that the developers of the APIs didn’t fully implement or didn’t notice in the spec. So as an API consumer, you’ve got to test the APIs that you use.

You also have to design those APIs. It doesn’t really work to have a team that designs APIs in a vacuum and then throws them over the wall to app developers. There are well-known fallacies of distributed computing, things that people learned long ago about computer networks that became abundantly clear to those of us who worked as mobile app developers. For example, you can’t assume that a network is reliable or that it has no latency. The lab where you write your apps is not like the world in which people use your apps.

Because the real world is very different, when an API is developed by a different team than the one that makes the apps, there’s a virtual wall keeping those mobile developers from getting data to their apps in a way that accounts for reality: the real properties of the network and the real way that those apps work.

This made me a big fan of Google’s cloud platform and of cloud platforms in general because to me, a good cloud platform made it possible for me to build an API myself. With that, I could think about tearing down that wall.

But 10 billion API calls per second is a lot of API calls. I’d never been involved in something that big. So when I walked into Google, I found that the teams thought about APIs in a different way than I had. It makes sense: when you do something 10 billion times a second, you want to pay attention to it.

But what does that mean? It means that you develop a language for talking about it. If it’s something that you’re doing all the time, you want to be able to talk and write about it with precision. You want to be able to optimize it. You also want to start doing it the same way every time you do it, so you try to standardize how you go about it.

The result is that you might look a little weird. There might be things that you do that other people just don’t understand. There were things when I went into Google that I had heard about but didn’t really understand until I got used to some of the ways that my co-workers think.

One of the things that Google does that may look weird to outsiders is called Protocol Buffers. Maybe you’ve heard of them. Inside Google, all information transmitted between computers is sent using Protocol Buffers. You’ve probably heard of JSON and XML as ways to serialize data. Like them, Protocol Buffers provide a way to express data in a form that you can send from one place to another. The approach goes back to the first few years of Google and worked so well that we later decided to open source it. Since then, other companies have also started using Protocol Buffers, including Apple, which recently published open source Swift support for them.

But upon arriving at Google, I was confused by Protocol Buffers because it seemed to mean different things, and I eventually boiled it down to three. Protocol Buffers is 1. a serialization mechanism, 2. an interface description language, and 3. a methodology.

As a serialization mechanism, a protocol buffer is just a stream of bytes. It’s like XML or JSON but you can’t read it unless you know how to read bytes. Here’s a hex dump of a simple message with just one field:

0a 05 68 65 6c 6c 6f

Can you guess what it says?

It contains one string that is five characters long. The last five bytes are the ASCII characters for “hello.” One byte before that, “05” is the length of the string. The first byte of the serialization is an integer with two parts that describe this message field. The lower three bits give the wire type, which is 2 here, meaning “length-delimited,” so we can expect a length to follow. The remaining bits give the field number. Note that the serialization doesn’t say that this field is a string; it only says that it’s field number 1.
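To make that byte layout concrete, here’s a short Python sketch (not the official protobuf library, which you should use in real code) that encodes and decodes a single length-delimited string field by hand:

```python
# A hand-rolled sketch of the protocol buffer wire format for one
# string field -- for illustration only; real code should use the
# classes that protoc generates.

def encode_string_field(field_number, text):
    data = text.encode("utf-8")
    # The tag byte packs the field number and the wire type:
    # wire type 2 means "length-delimited" (strings, bytes, messages).
    tag = (field_number << 3) | 2
    # This sketch assumes the tag and length each fit in one byte
    # (values < 128); larger values need multi-byte varint encoding.
    return bytes([tag, len(data)]) + data

def decode_string_field(buf):
    tag, length = buf[0], buf[1]
    field_number = tag >> 3
    wire_type = tag & 0x07
    text = buf[2:2 + length].decode("utf-8")
    return field_number, wire_type, text

encoded = encode_string_field(1, "hello")
print(encoded.hex(" "))              # 0a 05 68 65 6c 6c 6f
print(decode_string_field(encoded))  # (1, 2, 'hello')
```

Running it reproduces the hex dump above: field number 1, wire type 2, and the five bytes of “hello.”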

To learn what that means, we use the Protocol Buffer language, a language that Googlers and other Protocol Buffer users employ to describe their data structures. They write a description in that language and then run some tools. The first tool they run is protoc, the Protocol Buffer Compiler. It reads that file, checks its syntax, does some basic analysis, and then calls a plugin. Consider this command line:

protoc echo.proto -o echo.out --go_out=.

The --go_out option tells protoc to run the Go plugin to generate Go code for the data described in echo.proto. By convention, that plugin is called protoc-gen-go. If it’s in my path, protoc calls it, and it creates code that can be used to read and write protocol buffers using standard Go types. It defines a struct for each message, along with helper code, including functions that turn your struct into a stream of bytes and a stream of bytes back into your struct. It’s the kind of code that you look at but don’t touch.

Here’s the echo.proto code.

syntax = "proto3";

package echo;

service Echo {
  rpc Get(EchoRequest) returns (EchoResponse) {}
  rpc Update(stream EchoRequest) returns (stream EchoResponse) {}
}

message EchoRequest {
  string text = 1;
}

message EchoResponse {
  string text = 1;
}

It uses the Protocol Buffer language to describe a service that echoes messages, including the messages that the service sends and receives. It includes a service definition that describes two API methods. Note that these aren’t REST methods; they’re remote procedure calls, or RPCs.

An RPC is like a procedure call that you would have in Python, Go, or some other language, but distributed between multiple processes and usually between multiple machines.

The first RPC is called “Get” and takes an EchoRequest message and returns an EchoResponse. The second, “Update,” takes a stream of requests and returns a stream of responses. That is a kind of API that you can’t really build with REST. We’ll come back to this, but first I’d like to point out that to me, as an iOS developer, this looks familiar.

Most iOS developers use a tool called Interface Builder. It’s part of Xcode and is a visual tool that we can use to draw the user interfaces of an iOS or Mac app. You use it to draw a user interface, and a description of that interface is exported and read by your program when it runs. It’s like a code generator for a virtual machine that’s implemented by the iOS and Mac frameworks. You might have written equivalent code yourself in Objective-C or Swift, but with Interface Builder, you don’t have to, and the result is often that the generated code is higher quality and your app has fewer bugs.

Interface Builder and Protocol Buffers are a lot alike. You might even say that Protocol Buffers is Interface Builder for data. The Protocol Buffer language lets you describe your data in a way that allows you to use the Protocol Buffer methodology to get a lot of things that would previously have required writing code.

Protocol Buffers lets you describe the structures of your data. On top of that, Google is building an open source version of “Stubby,” our internal API-calling system. The new mechanism is called “gRPC,” where the “RPC” stands for “Remote Procedure Call.” Unlike Stubby, gRPC is built on HTTP/2, which allows streaming, so gRPC allows the kinds of streaming APIs that I mentioned earlier.

Here’s a really nice thing about gRPC: Because it uses Protocol Buffers, and because Protocol Buffers are supported by many languages, you can build systems in which the pieces, let’s call them microservices, are written in different languages and can all communicate using Protocol Buffers and gRPC. So your app developers and your service developers don’t have to code in the same language, and even your service developers can code in more than one language.

gRPC supports four kinds of APIs:

1. Simple APIs where you send a thing and get a thing back (these are called “unary” APIs).
2. APIs where you send a stream of things, like pings, and get one thing back, perhaps a summary of response times for those pings (“client-streaming” APIs).
3. APIs where you make a single request and get a stream of responses, such as real-time stock quotes (“server-streaming” APIs).
4. APIs that stream in both directions, such as chat APIs (“bidirectional-streaming” APIs).

This simplifies your calling and serving infrastructure a lot, because otherwise the only way to get this kind of streaming would be to use hacks on top of HTTP REST calls or to develop a custom protocol.
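All four shapes can be written in the Protocol Buffer language using the same stream keyword we saw in echo.proto. Here’s a sketch; the Monitor service and its message names are invented for illustration and aren’t any actual Google API, and the message definitions are omitted for brevity:

```proto
syntax = "proto3";

package monitor;

// A hypothetical service showing the four kinds of gRPC APIs.
// (Message definitions omitted for brevity.)
service Monitor {
  // 1. Unary: one request in, one response out.
  rpc GetStatus(StatusRequest) returns (StatusResponse) {}

  // 2. Client-streaming: a stream of pings in, one summary out.
  rpc SendPings(stream Ping) returns (PingSummary) {}

  // 3. Server-streaming: one request in, a stream of quotes out.
  rpc WatchQuotes(QuoteRequest) returns (stream Quote) {}

  // 4. Bidirectional-streaming: messages flow both ways, as in chat.
  rpc Chat(stream ChatMessage) returns (stream ChatMessage) {}
}
```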

As I mentioned, gRPC is open source and has several adopters besides Google who are using it. We’re happy to share it because if we all do this in the same way, it makes it easier for our services to all work together.

Besides gRPC, many APIs still use REST, and that includes many APIs served by Google. Google is part of the OpenAPI Initiative, a consortium of companies that is working to standardize the description of REST APIs and whose founding members include Apigee, which became a part of Google in 2016. That’s great because it allows the computing industry to do many of the same things that Google has done with Protocol Buffers. With a standard API description language, we can generate documentation for APIs in a consistent way. We can also generate reliable client libraries and server-side code, and focus on writing the core logic of our APIs and applications. The OpenAPI Initiative has many members and is constantly growing, and we think that’s great for our industry.
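As a taste of what those descriptions look like, here’s a minimal OpenAPI (version 2.0) sketch of a hypothetical REST version of the Echo service’s Get method. The path and parameter names are my assumptions for illustration, not part of any actual Google API:

```yaml
# Minimal OpenAPI 2.0 sketch of a hypothetical REST "echo" API.
swagger: "2.0"
info:
  title: Echo
  version: "1.0"
paths:
  /echo:
    get:
      description: Returns the text that was sent.
      parameters:
        - name: text
          in: query
          type: string
      responses:
        "200":
          description: The echoed text.
          schema:
            type: object
            properties:
              text:
                type: string
```

From a description like this, OpenAPI tools can produce documentation, client libraries, and server scaffolding without hand-written glue code.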

(photo by author)

Finally, here are a few things that I’m personally working on with others at Google to help build better API tools. Adding to the nine languages that already have gRPC support, we’re building Swift support for gRPC. We’re also actively working with the OpenAPI community, building tools that read OpenAPI descriptions for use in documentation generators, code generators, and other API tools. Those tools include API management systems that work with OpenAPI and allow people outside Google to use the same infrastructure that my team uses to protect and manage Google APIs like the Maps and Mail APIs. And finally, we have a proposal to expand OpenAPI to include RPC APIs that we’re discussing with the OpenAPI community.

Not everyone at Google thinks of us as an API company, but APIs are the heart of all that we do. If you’re working with computers today, I think that’s true for you too.
