
Application Load Balancers enable gRPC workloads with end-to-end HTTP/2 support

Author: weitzj

Score: 159

Comments: 138

Date: 2020-10-30 16:36:05


________________________________________________________________________________

malkia wrote at 2020-10-30 18:17:36:

Used Stubby at Google (mainly Java), and was intimidated at first, then saw the light - when almost everything talks the same way, not only do you get C++, Java, Python, Go and other languages speaking freely to each other, but other extra benefits too - for example, each RPC can carry as a "tag" (key/value?) the user/group it came from, and this can be used for budgeting:

For example - your internal backend A calls service B, which then calls some other service C. It's easy to log that A called B, but the fact that C was called because of what A asked for is not - though if it's propagated (through the tag), then C can report: well, I was called by B, but that was on A to pay.

Then Dapper, their distributed tracing system, was helpful in the few times I had to deal with oncall (actually, them asking me to do it). And in general, it felt like you never had to write any low-level sockets code (which I loved).

jeffbee wrote at 2020-10-30 18:35:12:

Unfortch, gRPC brings none of these things. If you want delegated caller, originator, trace ID, or any other kind of baggage propagated down your RPC call graph, you are doing it yourself with metadata injectors and extractors at every service boundary.

gen220 wrote at 2020-10-30 18:59:15:

Depending on your perspective, though, this can be seen as a positive thing: gRPC is extensible enough that all of this can be built on top.

I'm sure that in 10 years, there will be more concepts like "trace IDs" that we will consider minimally necessary for distributed service architectures, that don't exist today.

FWIW, writing the libs to do the metadata injection/extraction is pretty straightforward and transparent to application developers if they're done right.

llimllib wrote at 2020-10-30 19:12:05:

Here's my real-world grpc request ID propagation middleware in go, it's extremely simple:

https://gist.github.com/llimllib/d0840eaee14411a50201960615d...

this function gets called a couple hundred million times per week, and has never failed as far as I know
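A minimal sketch of that kind of interceptor (hypothetical names, assuming github.com/google/uuid; a sketch of the idea, not the actual gist):

    package middleware

    import (
        "context"

        "github.com/google/uuid"
        "google.golang.org/grpc"
        "google.golang.org/grpc/metadata"
    )

    // RequestIDInterceptor is a grpc.UnaryClientInterceptor. It reuses the
    // x-request-id the current server received, or mints a new one, and
    // forwards it in the outgoing call's metadata.
    func RequestIDInterceptor(
        ctx context.Context, method string, req, reply interface{},
        cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption,
    ) error {
        id := uuid.New().String()
        if md, ok := metadata.FromIncomingContext(ctx); ok {
            if vals := md.Get("x-request-id"); len(vals) > 0 {
                id = vals[0]
            }
        }
        ctx = metadata.AppendToOutgoingContext(ctx, "x-request-id", id)
        return invoker(ctx, method, req, reply, cc, opts...)
    }

Wired in once with grpc.WithChainUnaryInterceptor at dial time, every outgoing call carries the ID from then on.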

gen220 wrote at 2020-10-30 19:23:06:

Yep! The internal libs at our company look very very similar.

You can also do fun middleware things like rate limiting and ACLs, but I haven't seen those in the wild.

If somebody has links to examples of those, please share. :)

jrockway wrote at 2020-10-30 21:10:32:

I wrote this authorization code last night:

https://github.com/jrockway/jsso2/blob/master/pkg/internalau...

Obviously it's quite natural to just add interceptors as you need them and no doubt there are hundreds of things like this across the Internet.

To some extent, I can't get over how much of a mess you can make by doing things like this. Because generated service definitions have a fixed schema (func (s *Service) Method(context.Context, Request) (Reply, error)), you have to resort to hacks like propagating the current user through the context, instead of the easy-to-read and easy-to-test alternative of just passing in the parameters explicitly, as in func (s *Service) Method(context.Context, types.Session, Request) (Reply, error). If I were going to spend time on infrastructure, that's the thing I'd fix first.

Some other frameworks do a little better here. We use GraphQL at work, and the methods are typically of the pattern:

    func AutogeneratedMethod(ctx context.Context, ...) {
        foo := MustGetFoo(ctx)
        bar := MustGetBar(ctx)
        return ActualThingThatDoesTheWork(ctx, foo, bar, ...)
    }

This makes testing the Actual Thing That Does The Work easier, and the reader of that method knows exactly what sort of state the result depends on (the most important goal in my opinion).

jrockway wrote at 2020-10-30 20:31:04:

Yeah, I think you just need to invest a small amount of time to set up your clients and servers, and you reap the benefits for a long time.

I use

https://github.com/jrockway/opinionated-server

as the foundation of my server-side apps. I thought about monitoring, tracing, and logging once... and now I get it for free in every subsequent project. I explode the app into my cluster, and traces are immediately available in Jaeger, I can read and analyze my logs, and it's all a pleasure to debug. I think everyone should just have their own thing like this, because it's definitely a pain to do more than once, but a joy once you have it working.

My thing specifically is missing some things I now need, though, like multiple loggers so you can turn rpc tracing on/off at runtime, and support for auto-refreshing TLS certs from a volume mount now that everything is mTLS. These will be added when I am less lazy ;)

While I'm here I'll also plug

https://github.com/jrockway/json-logs

for making the structured logs enjoyable to read. You can query them with jq, it understands all the popular JSON log formats, and I can't live without it. It's the only client-side app I've ever written that passes the "toothbrush test" - I use it twice a day, every day. Recommended ;) (Someday I will write docs and an automated release process.)

jeffbee wrote at 2020-10-30 19:36:09:

There are always two sides to extensibility. On the one hand, you have the opportunity to do it your way. On the other, you have to do it. The extreme of extensibility is always an empty file: you get to pick the language and the architecture and everything!

gravypod wrote at 2020-10-31 01:05:54:

I've started implementing some of this at my job. We have an internal proto that describes the config of a gRPC service. We then have a library for all of our languages that turns that proto into a Channel that's instrumented with everything from middlewares to low level socket settings (keep alive, idle timeouts, retries, hedging, etc). Makes our deployments super easy as well since every service is configured to talk to every other service in almost a 100% identical way.
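A rough sketch of what such a library can do when it builds a channel (the Config fields here are hypothetical; the grpc-go dial options are real):

    package rpcclient

    import (
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials"
        "google.golang.org/grpc/keepalive"
    )

    // Config mirrors the shared service-config proto.
    type Config struct {
        Address           string
        KeepaliveInterval time.Duration
        KeepaliveTimeout  time.Duration
    }

    // Dial builds every channel the same way, from the same config shape.
    // The real version would also chain the shared tracing/metrics/retry
    // interceptors here via grpc.WithChainUnaryInterceptor.
    func Dial(cfg Config, creds credentials.TransportCredentials) (*grpc.ClientConn, error) {
        return grpc.Dial(cfg.Address,
            grpc.WithTransportCredentials(creds),
            grpc.WithKeepaliveParams(keepalive.ClientParameters{
                Time:    cfg.KeepaliveInterval, // ping the server when idle this long
                Timeout: cfg.KeepaliveTimeout,  // drop the connection if a ping stalls
            }),
        )
    }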

malkia wrote at 2020-10-30 20:22:37:

I've only used gRPC for C# talking to C++, and I wanted to plug this in - saw other users posting about injectors, so gonna look into it :) - I really want to avoid yet another proxy/"service-mesh" process for something that could be done on the client side (I wish I could inject the http2 library to propagate this always with headers somehow, but it's not a single thing, and it's also too low-level for me - I'm pretty much a tools/app developer).

random3 wrote at 2020-10-30 23:41:58:

jeffbee, actually, you do, and it works well; you don't have to do more than use a library. You get it out of the box in GCP. It works with OpenTelemetry / OpenCensus.

Dapper was not embedded in Stubby either.

psnosignaluk wrote at 2020-10-30 17:51:55:

I'm just going to jump in with an utterly pointless "woohoo!" As a gRPC shop, this is going to open up a lot of options for both our own infra and make it easier to support clients on AWS. Now if only Azure would make it easy to implement solutions that leverage gRPC...

tchalla wrote at 2020-10-30 17:20:08:

AWS, ALB, gRPC - it seems one needs to swallow a thesaurus to communicate these days.

wongarsu wrote at 2020-10-30 17:26:12:

AWS is suffering from a TLA problem. gRPC, on the other hand, is a decent name: at least you can guess at a glance that it's an RPC protocol. Meanwhile, you just have to know that ALB is a type of ELB.

ForHackernews wrote at 2020-10-30 17:57:01:

...only if you know what "RPC" means!

lights0123 wrote at 2020-10-30 18:07:58:

Which is at least a very common thing across many fields, not a proprietary Amazon technology.

benreesman wrote at 2020-10-30 17:24:28:

I started a recent project with gRPC but wound up moving to fbthrift after having a bad time with the C++ async server story. Overall I’d like to be using gRPC because the fbthrift documentation is weak, but thread-per-request is a non-starter for some use cases. From the gRPC source it looks like they’ve got plans to do something about it but it seems a ways off.

apta wrote at 2020-10-31 04:47:24:

Are you bound to C++ for the implementation?

jeffbee wrote at 2020-10-30 17:46:06:

What specifically was your problem with grpc c++ async? I'm using async C++ with one CQ per core and it seems to more or less work.

benreesman wrote at 2020-10-30 18:04:25:

It was not obvious to me how to support a large number of methods without a bunch of error-prone boilerplate. It seemed very low-level, which is fine if you have a small number of interactions but it started getting out of hand quickly and fbthrift does this neatly out of the box.

diogenesjunior wrote at 2020-10-30 17:13:16:

Can somebody please explain to me why we would use gRPC?

coder543 wrote at 2020-10-30 17:31:56:

Things I "love" about REST APIs:

- How should I pass an argument? Let me count the many ways:

  1. Path parameters
  2. Query parameters in the URL
  3. Query parameters in the body of the request
  4. JSON/YAML/etc. in the body of the request
  5. Request headers (yes, people use these for API tokens, API versions, and other things sometimes)

- There's also the REST verb that is often super arbitrary. PUT vs POST vs PATCH... so many ways to do the same thing.

- HTTP response codes... so many numbers, so little meaning! There are so many ways to interpret a lot of these codes, and people often use 200 where they really "should" use 202... etc. Response codes other than 200 and 500 are effectively _never_ good enough by themselves, so then we come to the next part:

- HTTP responses. Do we put the response in the body using JSON, MessagePack, YAML, or which format do we standardize on? Response headers are used for... some things? And responses like HTTP redirects will often just throw HTML into API responses where you're normally expecting JSON.

- Bonus round: HTTP servers will often support compressing the response, but _almost never_ do they allow compressing the request body, so if you're sending large requests frequently... well, oops.

I don't personally have experience with gRPC, but REST APIs can be a convoluted mess, and even standardizing internally only goes so far.

I like the promise of gRPC, where it handles all of the mundane transport details for me, and as a bonus... it will generate client implementations from a definition file, and stub out the server implementation for me... in whatever languages I want.

Why wouldn't you want that?

rswail wrote at 2020-10-31 03:57:13:

Because RPC is brittle. It requires both server and client to agree on the definition. It requires a shared IDL. If used as more than just the protobuf serialization format, it hides the _remote_ part of RPC, and programmers forget about network failures. It requires a different naming service from the standard internet naming service (DNS), which is brittle across network boundaries.

If you want to avoid bikeshedding for REST APIs, adopt a standard like JSON API [1].

Adopt a specification standard like OpenAPI [2] and JSON Schema [3]. There is tooling to give you the client/server stubs like gRPC.

Implement using HTTP/2 with fallback to HTTP and you get compression and multiple channels per connection.

gRPC is a repeat of CORBA, which is a repeat of ONC-RPC, which is a repeat of... there are common reasons why RPC as a concept is brittle and tightly couples implementations of clients and servers.

[1]

https://jsonapi.org/

[2]

https://swagger.io/specification/

[3]

https://json-schema.org/

coder543 wrote at 2020-10-31 04:09:22:

> If used as more than just the protobuf serialization format, it hides the remote part of RPC and programmers forget about network failures.

This is like saying all programs should be written in C, because otherwise programmers forget about how memory allocation works and the cost involved.

Any good REST client implementation abstracts away the remoteness too, apart from occasionally returning a network error to the caller. I'm definitely not sitting there twiddling bits to call a REST API.

> It requires both server and client to agree on the definition.

This is true of literally every API. If the client and the server disagree on a REST API call, nothing good is going to happen. Period.

gRPC is designed to allow evolution of APIs where clients and servers have different versions of the IDL. It's no more brittle than a JSON API, and arguably it's actually less brittle because of the approach it takes.

> It requires a different naming service from the standard internet naming service (DNS) which is brittle across network boundaries.

That is unequivocally false.

It doesn’t use a special naming service to connect servers and clients, unless you specifically choose to override the default behavior, which you could also do with JSON API if you wanted to make it really brittle as well.

Instead just use DNS, which is the default for both.

It _absolutely_ doesn’t require a special naming service, as you claim.

gRPC is built on the HTTP/2 standard. Based on the rest of your comment, you clearly also didn’t know this.

> gRPC is a repeat of CORBA is a repeat of ONC-RPC is a repeat of... there's common reasons why RPC as a concept is brittle and tightly couples implementations of clients and servers.

Your information so far can be trivially disproven, so... apologies if I’m not going to take advice from you on this subject right now.

I’m glad JSON API works for you.

EDIT: I see you repeated a lot of this misinformation in yet another comment. I get it - the very idea of RPC is offensive to you. But you should at least research the technology you're ranting about. Your information about gRPC is entirely, factually wrong.

Guthur wrote at 2020-10-30 18:39:06:

Because all those questions are the easy ones.

gRPC is not a magic bullet for the problems of state management. It'll have all the same issues RPC has had for the last 30 years, in that it enforces no discipline where it counts: state management.

The real problem with REST is that state management across a distributed system is hard, real hard. So hard actually that we decide to ignore that it's a problem at all and instead get obsessed with client side interface code, transport protocol, or content types etc, the easy mechanical stuff.

gRPC won't be a magic bullet, just like CORBA wasn't, or XML-RPC, or SOAP. History does like to repeat itself...

coder543 wrote at 2020-10-30 18:45:10:

I'm not saying they're hard questions. They're annoying, pointless questions that I have to answer every single time I create or consume a REST API. Those pointless questions also create very real bugs that I have dealt with for _years_, because humans make mistakes.

It's a complete waste of time and energy for everyone involved. Every one of these questions creates additional mental overhead and an opportunity for incorrect implementation. I would rather deal with meaningful decisions as often as possible, like questions of state management, instead of constantly deciding what color to paint the bike shed based on each person's interpretation of what "the most RESTy" API should look like.

You didn't seem to disagree with me in any way... I see nowhere in your comment where you say that REST APIs are better than gRPC, or why you would pick a REST API over gRPC or vice versa. Your rant just has nothing to do with my comment, as far as I can tell?

I never claimed that `gRPC` would solve state management. I never even mentioned state management.

So... cool? gRPC is not a magic bullet, I completely agree. It's a tool, and it seems to have advantages over REST APIs. That's what we're discussing here, since the OP asked why someone would use gRPC.

Your comment seems to imply that it's pointless to improve one problematic situation (RPC) without completely fixing all other problems in existence.

rswail wrote at 2020-10-31 04:17:38:

RPC (not just gRPC, but all its ancestors) as an architectural approach has the following problems that gRPC hasn't solved:

* It requires tight coupling between client and server

* It uses non-standard naming and discovery which doesn't work across network boundaries, which is why SOAP and XML-RPC were invented as a way of channelling RPC over port 80/443.

* The problem of handling state synchronization between client and server, and the lack of standard failure and error modes.

* The lack of standards for caching of responses or middleware boxes for load balancing, rate limiting, deployment practices.

REST avoids these (and others) by:

* Using content negotiation and media types to communicate the format of requests and responses

* Using standard (DNS) naming and discovery network mechanisms

* Is a stateless protocol (between requests) that specifically expects the client and server to exchange their states as part of each individual request. Being stateless, it accommodates network errors; the definitions of idempotency, the limits on the number and types of verbs, and the reasonably strict definitions of their use also provide standard mechanisms for recovery.

* Specifically defines and accommodates naming, discovery, caching, chunking requests/replies, streaming, middleware caching and content distribution, network boundaries, security authentication and authorization, etc.

Other than having an IDL that has tooling to generate stubs/clients in multiple languages, there are no distinct advantages of gRPC/protobuf over REST/HTTP, particularly in the general case of independent clients and servers from different organizations and user groups.

gRPC is a reasonable solution if your systems are able to be tightly coupled and deployed together, because they are either being developed as a monolith or entirely within a common organization - and if your network is reliable and not subject to interruption or partitioning between boundaries.

The entire "web services" saga of SOAP, WSDL, WS-* was an attempt 10-15 years ago to once again attempt RPC. So was RMI for Java. They failed for the same reasons.

People have been trying to "improve" RPC since the 80s, with numerous attempts and incarnations. They all suffer the same problem: you cannot "wish away" the network by abstracting it out of existence.

The "annoying, pointless" questions of REST can be solved by not bikeshedding them each time, adopt JSON Schema, OpenAPI, JSON API and understand that REST is about the _nouns_, not the _verbs_. Limiting and strictly defining the operation of the verbs, which is what HTTP does, let's you focus on the nouns and how you want to "transfer the state" of the nouns between two endpoints. That's what REST is about.

Guthur wrote at 2020-10-30 20:10:13:

If you're working in a groupthink-conformist organisation with a single monorepo, like Google, then gRPC certainly has some allure.

But if I'd like to remain a little more decoupled it's not very good at all.

REST is definitely not a magic bullet either because more often than not we fail to do that hard modelling aspect well enough.

tveita wrote at 2020-10-30 23:51:55:

> But if I'd like to remain a little more decoupled it's not very good at all.

It works quite well, IME. Each service publishes its protobuf files to a repository or registry during the build step, and if you want to call it from another service, you just import the protobuf file and get a defined and documented interface with little to no boilerplate required to use it. Protobuf has clear rules on how to evolve the interface in a backwards-compatible way, so services can usually stay on the old definition until you need some new functionality, at which point you import the newest definitions.

https://github.com/uber-archive/idl

defines a good workflow for this, though the tool is sadly unmaintained. Done right it really reduces the pain of crossing repository boundaries.

rswail wrote at 2020-10-31 04:24:39:

It doesn't solve the problem that RPC leaves the definition of the verbs (i.e. the procedures), and how they modify the state of a common thing, undefined. If I call an RPC twice, what is the effect? How do I know it succeeded? What happens if it fails? etc. etc.

None of these things can be communicated through an IDL definition.

HTTP solves this problem by strictly defining the operation of its verbs (HEAD/OPTIONS/GET/PUT/POST/DELETE/PATCH) in terms of idempotency, caching, identification etc.

Communicating the structure of the things that you are manipulating in REST over HTTP is done by defining the media types that you expect and respond with. Content identification and headers in content/connection negotiation define the versions and formats of the content of requests and responses.

coder543 wrote at 2020-10-30 20:25:51:

> But if I'd like to remain a little more decoupled it's not very good at all.

I don't really understand how gRPC makes this harder.

Either way, you have to call things correctly or they don't work. gRPC just naturally generates the client implementation for you, and it should always work. Swagger/OpenAPI theoretically help with this on the REST side of things, but it's up to the person publishing the Swagger as to whether it is actually 100% correct, or just a first order approximation.

But, I agree it's easier to have one protocol than two inside a company, so that would definitely be a downside of having both REST and gRPC in one organization.

deckard1 wrote at 2020-10-30 20:34:55:

> gRPC wont be a magic bullet, just like CORBA wasn't, or XML-RPC, or SOAP. History does like to repeat itself...

I will commend gRPC for being brave enough to attach "RPC" to its name in 2020. Can't say the same for that quisling GraphQL, which is neither what I would call a query language _nor_ has anything to do with graphs. A- for marketing effort, I suppose.

> it enforces no discipline where it counts, state management

A tale as old as time. If your redux app is a bloated confusing mess, then try scaling down your department from 100 devs to the 10 that it _actually_ needs. Most devs are bad at organization. Most devs are just bad in general. Ever see a bad developer grapple with TypeScript? I wager most codebases fall apart from disarray long before they reap any of the presumed benefits of most "best" practices. You can't fix social problems with technology. And code hygiene and state hygiene are fundamentally _social_ issues. People think tools like Prettier can come around and clean their house for them. Like some Roomba for code. Even the best Roomba will smear dog shit all over the place.

rhacker wrote at 2020-10-31 04:11:33:

I highly recommend you actually check out GraphQL. It definitely feels like it traverses relationships like a graph. And it is more similar to a query language... like SQL, actually: as in SQL, you can add more columns and more tables, and it really does join that data together.

It is actually a really good name. There are a lot of people who like to comment about that, but they have never actually used it.

q3k wrote at 2020-10-30 18:48:08:

Sure, gRPC is no magic bullet that will solve all issues across your stack, but it is still a very solid tool/library. Using it instead of REST allows you to quickly solve the easy problems (sending typed messages across the wire) while letting you focus on the hard parts (building distributed systems).

gravypod wrote at 2020-10-31 01:20:49:

gRPC is a great way to implement a RESTful API [0]. Instead of saying `POST /thing` or `POST /things` or actually `PUT /thing` and maybe `POST /thing/1` you can say:

    service ThingService {
      rpc CreateThing(CreateThingRequest) returns (Thing);
      rpc DeleteThing(DeleteThingRequest) returns (ThingDeleted); // No arguing about if this is within the HTTP spec and supported, as it just works :)
      rpc UpdateThing(...) returns (Thing);
      rpc ListThings(...) returns (stream Thing);
    }

[0] -

https://google.aip.dev/121
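And consuming that from the generated Go client is pleasant, streaming included. A hypothetical sketch (pb stands in for the protoc-generated package for a service like the one above):

    package main

    import (
        "context"
        "fmt"
        "io"
        "log"

        "google.golang.org/grpc"

        pb "example.com/things/pb" // hypothetical generated package
    )

    func main() {
        conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        client := pb.NewThingServiceClient(conn)

        // Server streaming: Things arrive one at a time as the server emits them.
        stream, err := client.ListThings(context.Background(), &pb.ListThingsRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for {
            thing, err := stream.Recv()
            if err == io.EOF {
                break
            }
            if err != nil {
                log.Fatal(err)
            }
            fmt.Println(thing)
        }
    }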

tootie wrote at 2020-10-31 01:51:31:

Protobuf, thrift, avro, cap'n proto just off the top of my head. There is no shortage of RPC protocols and implementations and none of them have very wide adoption.

kbar13 wrote at 2020-10-30 17:52:59:

Also, writing bespoke REST clients for every single endpoint is just a huge waste of time, and error-prone, since there are so many ways of doing everything.

One of my biggest gripes with REST APIs is having identifiers in the URL path instead of in query params or the request body.

spockz wrote at 2020-10-30 18:32:46:

Honestly, Swagger and OpenAPI have existed for ages. Who still creates bespoke REST clients? It took me one day to create a template for generating code that links our HTTP client of choice to API specifications. These also exist out of the box for many clients, like Ribbon.

dragonwriter wrote at 2020-10-30 21:56:03:

> There's also the REST verb

HTTP verbs. REST is a protocol-neutral architectural style (of which HTTP is an implementation); it doesn't have verbs.

> that is often super arbitrary. PUT vs POST vs PATCH...

They aren't arbitrary; they have well-defined semantic differences. It's true that "REST" APIs built over HTTP often play fast and loose with HTTP semantics, but that's not a feature of REST so much as of people understanding neither REST nor HTTP.

> HTTP response codes... so many numbers, so little meaning!

HTTP Status Codes have very well defined meanings.

> Response codes other than 200 and 500 are effectively never good enough by themselves, so then we come to the next part

201, 202, 204, 3xx, 4xx, and 5xx are usually fine alone, though sometimes it's nice to augment 4xx/5xx with additional info.

200, on the other hand, usually needs more info unless it's being used in a case where 201/204 would be more specific.

rhacker wrote at 2020-10-31 03:50:31:

But I thought REST was a "protocol-neutral architectural style", so why are we stuck with 200, 201, 202, 204, 3xx, 4xx, 5xx, which "are usually fine alone"?

sharx wrote at 2020-10-30 22:56:19:

Re: the verbs, gRPC is built on top of HTTP, and does not use the verbs (it's always POST). So I think it was fair to call them "REST verbs" in this context.

jayd16 wrote at 2020-10-30 19:45:58:

>- There's also the REST verb that is often super arbitrary. PUT vs POST vs PATCH... so many ways to do the same thing.

These have clearly defined caching and idempotency differences. They are not the same. I don't believe gRPC handles this, or it looks like it's experimental.

coder543 wrote at 2020-10-30 19:52:33:

> These have clearly defined caching and idempotency differences. They are not the same.

Clearly defined...? Maybe?

I don't know of any popular HTTP proxies that rely on these definitions to automatically decide how to cache things, because people in the real world follow the definitions very loosely. GET is the only one I've seen relied on, and even that is just a good bet, not a guarantee. Usually you have to define which routes are cacheable and carefully configure the cache settings... in my experience, at least.

Maybe it's just my bad luck to have encountered all the ways these verbs _aren't_ used consistently over the years. They're a _hint_ towards what will happen... at best. In my experience, anything goes, no matter what the verb says.

I truly hope you've had a better experience, and that it's just me.

> I don't believe gRPC handles this or it looks like its experimental.

Which seems fine. We can't rely on HTTP verbs for state management because of how inconsistent implementations are, so I don't expect gRPC to do that either, but at least gRPC won't make you choose a random verb.

dragonwriter wrote at 2020-10-31 03:12:49:

> Maybe it's just my bad luck to have encountered all the ways these verbs aren't used consistently over the years.

No, everyone has done that - especially everyone who's encountered almost any non-REST protocol over HTTP, which almost always ignores HTTP semantics (if you're lucky, they tunnel everything over POST).

But whether _other people_ use them consistently in their APIs is a very different issue from the claim that they are insufficiently clearly defined for deciding what _you_ should use when implementing an API that respects HTTP semantics (as REST over HTTP should).

coder543 wrote at 2020-10-31 03:39:53:

Sure, I guess that's fair.

Consider that a route might start as an idempotent way to update a RESTful object, but then requirements change over time and that method call now has non-idempotent side effects, such as updating a counter, or sending an email notification. It may not be practical within this system to determine whether the object being PUT is truly identical to the state already in the system, given the high volume of API calls, or the distributed nature of the state. At that point, everyone sits at the table to discuss what color to paint the bike shed. Should we change the verb, breaking existing clients? Should we require two separate API calls in order to separate the additional behavior from the idempotent simplicity of the original PUT request, doubling our API request load and introducing a possible error wherein a client forgets to make (or errors out while making) the second API call? Oh, and by the way, all of the existing clients won't benefit from the new, desirable behavior.

Neither of those options sound great to the stakeholders, so then you end up with a non-idempotent PUT, through no fault of the original API design.

The verbs quickly lose their meaning, and it would be better to spend that time actually considering how the API should evolve instead of worrying about what verb is associated with it.

You're obviously entitled to your own opinion. I fully admit that I could be wrong in all of this, but this is how I currently feel.

My experiences with HTTP have convinced me that the verbs are an abstract idea at best -- and because of that, we would all be better off eliminating PUT and PATCH. POST can do everything that PUT and PATCH can do. PATCH isn't idempotent to begin with, and you can't rely on the PUT verb to really indicate that a route is truly idempotent and you can just retry it arbitrarily, unless the documentation says so... in which case, POST can also declare that it is idempotent. (which, yes, does sound kind of weird, but I've also seen that.)

gRPC does away with the verbs entirely, as far as the developer is concerned, and that seems good to me. When I'm using a library, the functions aren't labeled with POST, PATCH, etc. The relevant behaviors and guarantees are spelled out in the documentation. I would imagine gRPC is a lot like that. But, as I said in the beginning, I don't have any direct experience with gRPC... just a lot of gripes with the way that HTTP REST APIs work, and some optimism that gRPC would let people focus on the actual problems at play, instead of lots of random distractions. (The verbs were only one of several such distractions.)

stephenr wrote at 2020-10-31 02:54:39:

I don’t know of any proxy in the wild that will cache anything other than GET or HEAD requests by default.

jayd16 wrote at 2020-10-31 03:03:36:

>GET is the only one that I've seen relied on

You're so quick to throw away a vastly used caching mechanism as if it's nothing.

coder543 wrote at 2020-10-31 03:08:07:

GET was never the verb under discussion. I specifically listed PATCH, PUT, and POST as being effectively meaningless in practice. You can’t rely on APIs to do what the verbs say they will do.

I only called out GET to say that it is still only a good bet that it will do what it is supposed to do. It's absolutely not guaranteed. You've never encountered GET web routes that mutate backend state? People rely on it - but that doesn't make it reliable.

“Throwing away” verbs like GET is not the same as throwing out caching. Please don’t twist my words into what they were not. In practice, you often still need to specify exactly which routes to cache, and how to cache them. Just using GET doesn’t magically make things cached, and you can just as easily cache non-GET routes once you’re specifying what to cache.

Caching can be done for anything. It isn’t some special feature of HTTP or REST APIs.

awinder wrote at 2020-10-30 19:20:03:

"HTTP servers will often support compressing the response, but almost never do they allow compressing the request body"

What doesn't have great support for Content-Encoding headers?

coder543 wrote at 2020-10-30 19:23:03:

What does have great support? Content-Encoding works great from server -> client, but not the other way around, in my experience. That's the problem I'm discussing. It's theoretically possible to support.

See this stackoverflow question for one brief discussion:

https://stackoverflow.com/q/20507007

I've looked around for this feature and rarely found it to be supported... but it's also rarely discussed.

stephenr wrote at 2020-10-31 02:59:58:

The first answer to that question links to the Apache docs; it's a one-line change to add support, possibly with an additional block around it to limit the scope to certain URLs.

If built-in support that works with one config statement isn't "great" support, what is?

coder543 wrote at 2020-10-31 03:04:55:

It’s not great if you want to use someone else’s API that way to save on egress bandwidth costs... that’s what. They don’t usually let me edit their Apache configs.

Apache being the only thing to support it also isn’t great. They specifically mentioned uncertainty around nginx. Have you tested any of your APIs to see if they support it?

hagsh wrote at 2020-10-30 18:07:12:

I work at a ~10-month-old startup where we've decided to go all in on gRPC for all communications, both inter-service and client (web SPA and a CLI) to service.

Although the investment in tooling was significant in the beginning, it has truly paid dividends now, as we can develop in Golang (microservices, CLI), JavaScript (SPA) and Python (end-to-end testing framework) and have a single definition for all our API endpoints and messages in the form of protobufs. These protobufs automatically generate all client and server code and give us out-of-the-box backward and forward compatibility, increased performance due to the binary format over the wire, and more.

Our architect, who put together most of this infrastructure, has written an entire series of blog posts about how we use gRPC in practice, detailing our decisions and tooling:

https://stackpulse.com/blog/tech-blog/grpc-in-practice-intro...

https://stackpulse.com/blog/tech-blog/grpc-in-practice-direc...

https://stackpulse.com/blog/tech-blog/grpc-web-using-grpc-in...

balfirevic wrote at 2020-10-31 02:15:48:

My, oh my! From the first article:

"About 2 months ago I started working at StackPulse. When I joined there was _not a single line of code written_ yet. We’ve had to decide on many different things. One of those things was choosing a “protocol” for communication both _between our micro services_ and externally".

q3k wrote at 2020-10-30 17:18:56:

Sure. You would use gRPC when:

  - You want to have inter-service RPC.
  - You want not only unary calls, but also bidirectional streaming.
  - You want a well-defined schema for your RPC data and methods.
  - You want a cross-language solution that guarantees interoperability (no more JSON parsing differences! [1])

[1] -

http://seriot.ch/parsing_json.php

jwalton wrote at 2020-10-30 20:48:17:

In other words, when you want to use ASN.1 [0], but you don't know what ASN.1 is, so you use the overly complicated version Google made instead. ;)

[0] -

https://en.wikipedia.org/wiki/Asn.1

wahern wrote at 2020-10-30 21:47:53:

To be fair, in practice people don't choose gRPC as a protocol and serialization standard so much as they choose preexisting gRPC libraries and code generators. Open source ASN.1 tooling sucks, while Google maintains gRPC tooling for a great many languages. This is why gRPC, Thrift, etc. have so much more mindshare than ASN.1 in the open source community. The only good open source ASN.1 (DER, PER, etc.) code generator for C is asn1c. I believe Java has a couple of good libraries. But other than that, ASN.1 tooling is a horrible train wreck of broken or abandoned projects. Many people rely on OpenSSL's DER library, but it's far too low-level. AFAICT, the same is true for Python and other high-level languages - no open source projects where you can pass in an ASN.1 specification to generate (at compile-time or run-time) a full serializer and deserializer.

There are plenty of commercial, proprietary ASN.1 tooling solutions out there; presumably that's why ASN.1 has persisted as long as it has in industry. Even Fabrice Bellard sells a commercial solution:

https://bellard.org/ffasn1/

When you have access to good tooling, ASN.1 is arguably superior to the open source alternatives, as there aren't as many broken corner cases that can cause interoperability problems.

We only have ourselves to blame. If I ever have time I want to write an ASN.1 spec parser using LPeg that can generate LPeg-based DER parsers. I already have a fairly comprehensive LPeg-based DER parser for PKIX messages, but generating that from an ASN.1 spec is a significant step up in conceptual complexity. While I've written more than my fair share of parsers before, I've never written a proper parser _generator_, so it's a steeper hill to climb for me even though I already have more ASN.1 experience than most.

jwalton wrote at 2020-10-31 01:34:09:

I gather the protobuf/gRPC implementations are quite good for a lot of languages, but I can tell you TypeScript doesn't seem to be one of them. There's an etcd client for Node which doesn't seem to be actively maintained, and I only needed a few things out of etcd, so I figured I'd just generate a gRPC client and build my own library [0] to do what I needed. This was not fun. I got all kinds of crazy errors about missing definitions, and I ended up having to copy-paste .proto files from a bunch of random Google projects into my repo to make it all work. Maybe I'm just doing it completely wrong? :P

[0] -

https://github.com/jwalton/etcd3-ts

gravypod wrote at 2020-10-31 01:29:26:

Do you have any code samples that demonstrate the code generator, instrumentation, and observability tools for ASN.1?

How do I code gen clients/servers for Java, Python, Golang, C++, Node, PHP, etc. How can I instrument distributed tracing for requests? How do I talk to these services from my web frontend (grpc web equivalent)? How do I talk to these from my embedded system (proto lite equivalent)?

q3k wrote at 2020-10-30 22:10:25:

How is Protobuf 'overly complicated'? I'm not talking about gRPC here - because trying to compare bare ASN.1 and gRPC is just dishonest.

This is especially rich when you're comparing protobuf to ASN.1 - which is probably mostly known for having dozens of competing, obscurely-named encoding formats to choose from. And for them being so complex to implement that it regularly causes bugs and security issues in software...

sagarm wrote at 2020-10-30 21:44:36:

This seems like a very high-level spec, not a library or toolkit. There's also zero mention of practical concerns like protocol evolution, a canonical wire encoding, etc.

jwalton wrote at 2020-10-31 01:37:19:

ASN.1 is indeed a spec. You'd need to find a library to use it, and there are more in C than in whatever fancy modern language you're probably using. But, there are lots to choose from.

There are multiple canonical wire encodings. XER is the "XML Encoding Rules", if you want something human-readable. BER is the binary format, although it has some ambiguities, so there's DER, the Distinguished Encoding Rules, which resolves a lot of that by using essentially a subset of BER and specifying how you should behave in various corner cases. In practice, you want to write your messages as DER but accept messages from other parties using the full BER.

It's an older standard, but it's used heavily in telecom. To pick an example, if you make a cell phone call, especially out in the sticks somewhere, there's probably going to be a media gateway controller that figures out how to route your call without decompressing and recompressing it a bunch of times, and it talks to the various devices routing your call over H.248, which is specified entirely in ASN.1.
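For a taste, Go's standard library does ship a small, low-level DER implementation - you map structs by hand rather than generating them from an ASN.1 spec, which is exactly the tooling gap described upthread. A toy sketch:

    package main

    import (
        "encoding/asn1"
        "fmt"
    )

    type Message struct {
        ID   int
        Body string
    }

    func main() {
        // Emits DER: SEQUENCE { INTEGER 1, PrintableString "hello" }
        der, err := asn1.Marshal(Message{ID: 1, Body: "hello"})
        if err != nil {
            panic(err)
        }
        fmt.Printf("% x\n", der)

        var m Message
        if _, err := asn1.Unmarshal(der, &m); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", m)
    }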

wnevets wrote at 2020-10-30 17:48:50:

>Parsing JSON is a Minefield

and there are people who complain it's too simple, i.e. no comments

q3k wrote at 2020-10-30 17:51:18:

It's one of those 'simple at first glance' standards. If you want to confuse someone who advocates for JSON's 'simplicity' as a feature, ask them what happens when their favorite JSON deserializer receives repeated dictionary keys (yes, that's valid JSON per RFC 7159).

deckard1 wrote at 2020-10-30 19:50:43:

There is no standard so simple that humans won't make a mess of it. You think "CSV, comma-separated values, it's right in the name!" And then you write a parser for it and realize it won't read the CSV files Excel generates.

Years ago I was porting an R5RS Scheme app from Guile to other Scheme systems. The entire standard is 50 pages of fairly readable basic English. You would not believe how much different interpreters/compilers disagree on what a symbol can be. Or on under-specified cases such as integer->char (one system goes out of its way to be a dick about it and purposely does _not_ use ASCII, trading pragmatics for pedantry).

jeffbee wrote at 2020-10-30 20:11:34:

    % echo '{"a": 5, "a": 42}' | jq
    {
      "a": 42
    }

Pretty much what I expected and also what you would get if you saw two instances of the same optional non-repeated field in a protocol buffer message. What were you expecting or wanting?

q3k wrote at 2020-10-30 20:29:48:

Here's the kicker: while this might seem sensible to you, some JSON implementations out there will reject duplicate fields (fail the parse completely), and the RFC does not even specify what the correct behavior is (override the previous value, ignore duplicates, fail the entire parse, return a non-uniquely-keyed dictionary, something else entirely?).

So while this behavior might be expected to you and me (although I'm still not sure that overriding previous fields is more obvious than ignoring repeated fields), some library implementers thought differently, and there isn't even an agreed-on standard. Arguing about this isn't purely academic, either - there have been security vulnerabilities resulting from these differences [1].

[1] -

https://justi.cz/security/2017/11/14/couchdb-rce-npm.html

jeffbee wrote at 2020-10-30 20:54:16:

Interesting. Protobuf specifies this last-instance-wins behavior, and it can be pretty useful. It allows you to override a field by simply appending a few bytes, without having to re-encode a whole message. JSON I guess doesn't have as much concern for efficiency as protobuf has.

Groxx wrote at 2020-10-30 20:25:01:

Their point was likely that implementations vary in their interpretation. Which is a bit of a problem for RPC systems.

sudhirj wrote at 2020-10-30 17:37:57:

If you're communicating between two systems, gRPC has a few benefits:

- Much better developer experience and performance writing HTTP services and code to call them.

Cons are:

- The auth story isn't very clear. Do you use the out of band setup or add tokens in every single RPC?

q3k wrote at 2020-10-30 17:47:01:

> The auth story isn't very clear. Do you use the out of band setup or add tokens in every single RPC?

I think it's actually quite well documented. [1]

You can have out-of-band authentication data per-connection ('per-channel') and per-RPC ('per-call'). SSL certificates can be used per-connection, while token-based authentication (e.g. bearer tokens, or anything else that can fit in metadata) can be either.

[1]

https://grpc.io/docs/guides/auth/
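In grpc-go, combining the two looks roughly like this (a sketch; the CA file path and token are assumptions):

    package rpcauth

    import (
        "golang.org/x/oauth2"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials"
        "google.golang.org/grpc/credentials/oauth"
    )

    func dial(addr, token string) (*grpc.ClientConn, error) {
        // Per-channel: TLS protects the whole connection.
        creds, err := credentials.NewClientTLSFromFile("ca.pem", "")
        if err != nil {
            return nil, err
        }
        // Per-call: a bearer token attached to every RPC's metadata.
        perRPC := oauth.NewOauthAccess(&oauth2.Token{AccessToken: token})
        return grpc.Dial(addr,
            grpc.WithTransportCredentials(creds),
            grpc.WithPerRPCCredentials(perRPC),
        )
    }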

coder543 wrote at 2020-10-30 18:19:36:

If you want the list items to render on separate lines, you need to either add an extra newline between list items, or indent the list items by 4 spaces.

weitzj wrote at 2020-10-30 17:18:34:

It makes it really nice to define APIs (like with OpenAPI or Swagger). There are a bunch of code generators out there that produce code from your definitions, giving you native Swift, Objective-C, Java, or Go API stubs for either clients or servers.

It is a joy to work with in cross-functional teams when defining your APIs, taking into account what API versioning would mean, how to define enums, how to rename fields while staying compatible with the transport protocol, and other things.

Also, if you were to route a payload from service A via B to C, and each service is deployed independently and gets new API changes, gRPC supports you in handling these scenarios.

Sure enough, OpenAPI can do all of this, I guess, but gRPC definitions in protobuf or Google artman are just way quicker to understand and work with. (At least for me.)

fulafel wrote at 2020-10-30 17:54:42:

Not familiar with gRPC; questions: how does the tooling compare to HTTP? Browser devtools let you look at what's on the wire, replay requests with slight alterations for debugging, see timelines and visualizations of the history of communication, extract and send self-contained scriptlets (like you can do with curl) to someone else, etc. Which of these have equivalents in generally available gRPC tooling?

weitzj wrote at 2020-10-30 18:09:27:

There is also Charles Proxy, which supports protobuf.

But from my experience you use the code generator and trust the deserializer and serializer since they are unit tested.

So you can just unit test your methods and don’t have to look at the actual binary blob payload.

You trust that gRPC is battle tested and you can just test your code.

You would probably wrap the generated methods/objects/structs in your own domain model and unit test the mapping between them.

Using the objects from gRPC throughout your code directly does work but sometimes is not what you want to work with.

So I rather would introduce another boundary to the transport. But that is personal preference (in case I want to get rid of gRPC and don’t want to touch my business logic)

thinkharderdev wrote at 2020-10-30 18:04:57:

In general, not nearly as mature. gRPC is mostly not meant for browser->server calls, though (grpc-web notwithstanding); it's designed for server<->server communication.

There is some tooling out there for development (

https://github.com/fullstorydev/grpcurl

and

https://github.com/fullstorydev/grpcui

are pretty nice), but it's still much less mature than the massive amount of tooling available for HTTP-based services. That is partly an artifact of gRPC's relative youth compared to REST, and partly down to more fundamental reasons (binary wire format, mutual-TLS-based authentication, etc.).

All that said, I've been working with gRPC over the past 6 months or so and overall the development experience is much nicer on net I think.

q3k wrote at 2020-10-30 18:06:00:

There's grpcurl and other similar tools for when you just want to run a simple gRPC request against a server. If your server runs the reflection service, it will also let you inspect the schema of whatever is running on a given endpoint.

For in-browser use with gRPC-Web, if you use text-proto-over-XHR, things continue to work as with REST/JSON.

For inter-server debugging, you usually defer to OpenTracing or similar, and capture request data there.

k__ wrote at 2020-10-30 17:22:37:

I see, cool.

So the spec already includes versioning?

q3k wrote at 2020-10-30 17:25:36:

No, gRPC/protobuf instead provides you with ways to evolve your schema easily in the IDL and the result on the wire, without breaking either side.

You can rename fields (but keep the tag number, and therefore wire-format compatibility), add fields (which will be ignored by the other side), remove fields (as all are explicitly optional, so every consumer explicitly checks for their presence anyway), ignore unset fields (as the wire encoding is to a certain degree self-describing), etc.

gravypod wrote at 2020-10-31 01:36:26:

Another important part of forward-and-backward compatibility is that protos support passing along unknown fields. Say I add a new field to a shared proto that A, B, and C all use: if A and C have been updated but B was never updated, then as long as B uses the proto correctly, it will still deliver the new field to C.

I use this at my current job, where our client is a hardware appliance that we are not at all allowed to update. If we need to add new data for our backend to handle that the client downloads and stores locally, we can, and we don't need to worry about pushing new client code to do it.

This is magic for anyone who has been using Retrofit or something similar and sees fields dropping as normal.

weitzj wrote at 2020-10-30 17:35:20:

Also, context propagation is part of gRPC, which supports you in thinking about tracing, request cancellation, and deadlines, so that you actually have a chance to employ SLOs.
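For example, a deadline set on the caller's context travels with the RPC, and the server (and anything it calls in turn) sees the remaining budget. A sketch with a hypothetical generated client:

    package example

    import (
        "context"
        "time"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"

        pb "example.com/things/pb" // hypothetical generated package
    )

    func getWithDeadline(ctx context.Context, client pb.ThingServiceClient) error {
        // Allot 500ms to the whole call chain; the server (and its own
        // downstream calls) sees the remaining budget and can stop early.
        ctx, cancel := context.WithTimeout(ctx, 500*time.Millisecond)
        defer cancel()

        _, err := client.GetThing(ctx, &pb.GetThingRequest{Id: "42"})
        if status.Code(err) == codes.DeadlineExceeded {
            // The deadline fired somewhere in the chain - exactly the
            // signal you can count against an SLO.
        }
        return err
    }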

gravypod wrote at 2020-10-31 01:16:09:

My main reasons:

1. It's standardized: the implementation for each language is roughly similar and has the same feature set (middlewares, stubs, retry, hedging, timeouts, deadlines, etc.).

2. A high-performance framework/webserver in "every" language. No more "should I use Flask or the built-in HTTP server or Gunicorn or Waitress or...".

3. Tooling can be built around schemas as code. There's a great talk that I highly recommend about some of the magic you can do [0].

4. Protos are great for serialization, and not just over a network! Need to store configs or data in a range of formats (JSON, prototxt, binary, YAML)?

5. Streaming. It's truly amazing and can dramatically reduce latency if you need to do a lot of collection-of-things or as-soon-as-x processing.

6. Lower resource usage. Encoding/decoding protos is faster than encoding and decoding JSON. At high throughput, that begins to matter.

7. Linting & standards can be defined and enforced programmatically [1]

8. It pressures people to document things. You comment your .c or .java code, so why not comment your .proto?

[0] -

https://youtu.be/j6ow-UemzBc?t=435

[1] -

https://google.aip.dev/

didip wrote at 2020-10-30 17:22:46:

The biggest benefit is when your company supports multiple programming languages. All RPC calls can stay uniform regardless of the language of the backend.

jruroc wrote at 2020-10-30 17:18:18:

Serialization/deserialization speed and reduced transfer size are good reasons for high-throughput service-to-service communication. There's also a decent ecosystem around code generation from .proto files, and gateways that still support some level of JSON-based calls.

wbl wrote at 2020-10-30 18:13:33:

When you don't want to deal with the portmapper anymore and think distributed reference counting is a bad idea.

Seriously binary RPC has been around for ages.

jwatte wrote at 2020-10-30 21:33:11:

HTTP is super great for loosely coupled, request-based services.

RPC is more lightweight for persistent connections to stateful services. RPC makes broadcast easier than HTTP. Individual RPC requests have (much) less overhead than HTTP requests, which is very helpful when tight coupling is acceptable.

Trying to run, say, MMO gaming servers over HTTP is an exercise in always paying double for everything you want to do. (Also, trying to run a FPS gaming server over TCP instead of UDP is equally not the right choice!)

corytheboyd wrote at 2020-10-30 21:46:12:

Maybe I’m crazy, but here is something I have been toying with recently. I have defined services in protobuff and generated static typescript definitions for the services and associated messages. I then implemented my own flavor of RPC over a WebSocket connection, where RPC calls are implemented as two calls— a “start” call from client to server, and a “result” call from server to client. It’s interesting and I don’t know if I would go this far down to the “metal” if you will on a team, but for my own project it’s been interesting.

foota wrote at 2020-10-30 22:58:10:

I'd be curious: how do you handle associating RPC responses from the server with the request call site? Some sort of ID?

corytheboyd wrote at 2020-10-31 03:52:30:

Yeah - request IDs that are used to invoke deferred callback functions.
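That correlation technique is simple enough to sketch - here in Go for illustration, with made-up frame plumbing (the real implementation is TypeScript over a WebSocket):

    package wsrpc

    import "sync"

    // Dispatcher matches "result" frames coming back from the server to the
    // "start" frames that initiated them, by request ID.
    type Dispatcher struct {
        mu      sync.Mutex
        nextID  uint64
        pending map[uint64]chan []byte
    }

    func NewDispatcher() *Dispatcher {
        return &Dispatcher{pending: make(map[uint64]chan []byte)}
    }

    // Call sends a start frame tagged with a fresh ID and blocks until the
    // matching result arrives. send is whatever writes a frame to the socket.
    func (d *Dispatcher) Call(send func(id uint64, payload []byte) error, payload []byte) ([]byte, error) {
        d.mu.Lock()
        d.nextID++
        id := d.nextID
        ch := make(chan []byte, 1)
        d.pending[id] = ch
        d.mu.Unlock()

        if err := send(id, payload); err != nil {
            d.mu.Lock()
            delete(d.pending, id)
            d.mu.Unlock()
            return nil, err
        }
        return <-ch, nil
    }

    // OnResult is called by the socket's read loop for every result frame.
    func (d *Dispatcher) OnResult(id uint64, payload []byte) {
        d.mu.Lock()
        ch, ok := d.pending[id]
        delete(d.pending, id)
        d.mu.Unlock()
        if ok {
            ch <- payload // buffered, so this never blocks the read loop
        }
    }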

lvice wrote at 2020-10-30 17:06:43:

I find the discussion regarding gRPC support for Azure App Service interesting, as well as the number of moving parts involved in achieving such support...

https://github.com/dotnet/aspnetcore/issues/9020#issuecommen...

muststopmyths wrote at 2020-10-30 17:34:10:

Really illustrates the dumbassery of sticking a (relatively) fast-moving application-layer protocol into the kernel. Now you can't update the Web Server without updating the operating system.

Might have been handy to beat benchmarks back in the day when people liked to whip them out for comparison, but IIS is under 10% according to Netcraft now. Time to fold up the tent and go home.

I suppose .NET Core is sticking with HTTP.sys to avoid implementing their own web server, but is tying yourself to the Windows shipping cycle worth it?

halter73 wrote at 2020-10-31 02:28:25:

The default ASP.NET Core server is Kestrel which runs in-process and is cross platform. Kestrel has officially supported gRPC for over a year now[1]. That linked GitHub comment is specifically about IIS gRPC support which relies on HTTP.sys (which runs in kernel mode and is tied to the Windows shipping cycle as noted).

[1]:

https://docs.microsoft.com/en-us/aspnet/core/grpc/?view=aspn...

tybit wrote at 2020-10-31 03:14:35:

They’ve already folded up shop and solved these problems with Kestrel.

Them continuing to offer support for laggards using IIS shouldn’t be criticised though.

ComputerGuru wrote at 2020-10-30 19:32:31:

> Really illustrates the dumbassery of sticking a (relatively) fast-moving application-layer protocol into the kernel.

Really? There are lots of problems with doing HTTP in the kernel, but "fast-moving" is a new one. HTTP was stuck in permafrost from 1995 to 2015. If it weren't for Google, neither HTTP/2 nor HTTP/3 would have ever happened.

fbru02 wrote at 2020-10-30 18:20:27:

When should I use XMPP, and when RPC?

forty wrote at 2020-10-30 20:00:36:

I don't know gRPC: why does it need special load balancer support?

wmf wrote at 2020-10-30 20:19:00:

gRPC requires HTTP/2 while many load balancers only support HTTP/1 on the backend.

ainiriand wrote at 2020-10-30 20:32:38:

I'm in the process of advocating for gRPC at my company, which is starting to lay down the foundations to scale up. This presentation comes in handy.

k__ wrote at 2020-10-30 17:10:48:

Half-OT:

What's the main use-case for gRPC?

I had the impression RPC was seen as a mistake.

Sure, gRPC also uses a binary protocol, but that doesn't seem like a USP of gRPC. Why didn't they go with non-RPC binary?

Serious question! It sounds a bit counterintuitive to me at first glance.

ublaze wrote at 2020-10-30 17:30:31:

gRPC is one of the best decisions we've made at our company. Here's the longer blog post -

https://dropbox.tech/infrastructure/courier-dropbox-migratio...

, but some things:

1. Performance

2. It's hard to make changes that are backwards-incompatible via protobuf (which removes a significant source of bugs)

3. Great, standardized observability for every service. Small services don't really need too many custom metrics, since we log a LOT of metrics at the RPC layer

4. Standardization at the RPC layer lets us build useful generic infrastructure - like a load testing framework (where users only need to specify the RPC service, method, and parameters like concurrency, RPS).

e12e wrote at 2020-10-30 23:29:31:

> I had the impression RPC was seen as a mistake.

For hypermedia/hypertext Fielding made a solid argument for preferring REST over RPC (mostly because of caching). I still recommend reading his very approachable thesis - these days _not_ so much for the web/REST architecture, but for the other ones, which include modern SPAs (they're _not_ great as hypermedia apps, but fine as networked applications):

https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm

Apart from caching (and cache invalidation), it's accepted that making the network invisible is a bad idea - it will lead to issues. So remote procedure calls aren't inherently bad, but blurring the distinction too much between local procedure calls and remote ones isn't a great idea. From hung NFS mounts to unpredictable application performance, it is unpleasant.

This Netflix talk on their not-graphql framework gives a very nice summary of when and how you might prefer RPC to REST:

https://youtu.be/hOE6nVVr14c

q3k wrote at 2020-10-30 17:16:05:

It's a generic RPC protocol based on a well-enough-typed serialization format (protobuf) that is battle-tested. You'd use it where you'd use REST/API/JSONRPC/...

Compared to plain JSON/REST RPC, it has all the advantages of protobuf over JSON (ie. strong typing, client/server code generation, API evolution, etc), but also provides some niceties at the transport layer: bidirectional streaming, high quality TCP connection multiplexing, ...
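
To make the streaming part concrete, here is a rough client-side sketch in Go. The pb package and its Chat service are hypothetical stand-ins for protoc-generated code:

    package main

    import (
        "context"
        "io"
        "log"

        "google.golang.org/grpc"

        pb "example.com/chat/pb" // hypothetical generated package for:
        // service Chat { rpc Chat(stream Msg) returns (stream Msg); }
    )

    func main() {
        // One ClientConn, one TCP connection; every stream opened below is
        // multiplexed over it as an HTTP/2 stream.
        conn, err := grpc.Dial("chat.internal:443", grpc.WithInsecure())
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        stream, err := pb.NewChatClient(conn).Chat(context.Background())
        if err != nil {
            log.Fatal(err)
        }
        go func() {
            // Send side; runs concurrently with the receive loop below.
            stream.Send(&pb.Msg{Text: "hello"})
            stream.CloseSend()
        }()
        for {
            msg, err := stream.Recv()
            if err == io.EOF {
                break // server closed its side of the stream
            }
            if err != nil {
                log.Fatal(err)
            }
            log.Println(msg.Text)
        }
    }

Both directions flow at once over the shared connection, alongside any other RPCs made through the same conn.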

mradmin wrote at 2020-10-30 17:35:28:

> You'd use it where you'd use REST/API/JSONRPC/

Not really. You'd use it for inter-service communication, but you can't really use it in the browser (see grpc-web)

hagsh wrote at 2020-10-30 17:48:02:

At my workplace we use gRPC for inter service and client service communication from a VueJS SPA. It took some effort but is working really great right now. A colleague wrote a blog post (actually entire series) about it:

https://stackpulse.com/blog/tech-blog/grpc-web-using-grpc-in...

k__ wrote at 2020-10-30 18:06:10:

Does gRPC bring fexibility and discoverability like GraphQL?

malkia wrote at 2020-10-30 18:21:35:

You can enable the reflection API (but must compile your server with it). It exposes a well-known endpoint which, when asked, returns the other endpoints (services) available; asking each service returns its methods, and for each method what it takes and what it returns (as protos), plus what kind of call it is (one-shot, bidi, etc.)

So not the same as GraphQL, as you are not asking about resources/objects, but rather inspecting what can be called and how.
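
On a Go server, enabling this is essentially one line. A minimal sketch using the real google.golang.org/grpc/reflection package (the port is arbitrary):

    package main

    import (
        "log"
        "net"

        "google.golang.org/grpc"
        "google.golang.org/grpc/reflection"
    )

    func main() {
        lis, err := net.Listen("tcp", ":8080")
        if err != nil {
            log.Fatal(err)
        }
        s := grpc.NewServer()
        // Register your services here, then expose the reflection service so
        // tools can list services, methods, and message schemas at runtime.
        reflection.Register(s)
        log.Fatal(s.Serve(lis))
    }

With that in place, a tool like grpcurl can run `grpcurl -plaintext localhost:8080 list` to enumerate the services and `describe` to inspect their methods.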

hagsh wrote at 2020-10-30 18:20:44:

I have not used GraphQL in practice so I can't directly compare them.

What I can say is that in terms of flexibility, protobufs by nature provide us with forward and backward compatibility. We can easily add new fields and eventually deprecate old ones, evolving the API similarly to GraphQL.

Apparently, gRPC also has some introspection capabilities like GraphQL (since again you have a clear protobuf schema) but I have never used them in my clients, and perhaps they are not as baked into the system as in GraphQL.
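
To illustrate the forward/backward compatibility point, a sketch; oldpb and newpb are hypothetical generated packages for two revisions of the same message, where the new revision adds an email field:

    package main

    import (
        "fmt"
        "log"

        "google.golang.org/protobuf/proto"

        oldpb "example.com/api/v1" // hypothetical: User has only a name field
        newpb "example.com/api/v2" // hypothetical: User adds an email field
    )

    func main() {
        // Bytes produced by a client that only knows the old schema...
        data, err := proto.Marshal(&oldpb.User{Name: "ada"})
        if err != nil {
            log.Fatal(err)
        }

        // ...decode cleanly against the new schema; the added email field
        // simply takes its zero value.
        var user newpb.User
        if err := proto.Unmarshal(data, &user); err != nil {
            log.Fatal(err)
        }
        fmt.Println(user.GetName(), user.GetEmail()) // "ada" ""
    }

The reverse direction also works: bytes from the new schema decode against the old one, with the unrecognized field carried along as an unknown field rather than causing an error.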

k__ wrote at 2020-10-30 17:18:59:

Thanks for the explanation!

Why wouldn't it be enough to use REST with a protobuf media type?

q3k wrote at 2020-10-30 17:21:58:

There is some support for that [1]. I prefer to use native binary gRPC because:

1) REST verb/code mapping is too arbitrary for my taste; I prefer explicit verbs in RPC method names and error codes explicitly designed for RPC [2], ones that will not be accidentally thrown by your HTTP proxy and thereby confuse your client (a Go sketch follows the links below)

2) REST stream multiplexing over a minimum amount of TCP connections is difficult to do, I trust the gRPC authors to have done their homework better than the average HTTP library. In addition, you can multiplex multiple gRPC bidirectional streams over a single TCP connection, which is something you can't do over plain HTTP without resorting to websockets.

[1] -

https://cloud.google.com/endpoints/docs/grpc/transcoding

[2] -

https://github.com/grpc/grpc/blob/master/doc/statuscodes.md
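
To illustrate point 1, returning those RPC-native codes from a Go handler is direct. The request/response types below are made-up placeholders, but status and codes are the real grpc-go packages:

    package main

    import (
        "context"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // Hypothetical request/response types, for illustration only.
    type GetUserRequest struct{ Id string }
    type User struct{ Id string }

    // GetUser returns gRPC-native error codes rather than reused HTTP
    // statuses, so no proxy in the middle can accidentally produce them.
    func GetUser(ctx context.Context, req *GetUserRequest) (*User, error) {
        if req.Id == "" {
            return nil, status.Error(codes.InvalidArgument, "id is required")
        }
        // ... lookup would go here ...
        return nil, status.Errorf(codes.NotFound, "user %q not found", req.Id)
    }

The client sees exactly codes.InvalidArgument or codes.NotFound, with well-defined retry semantics per the status code doc linked above.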

k__ wrote at 2020-10-30 17:26:07:

Good points.

Thanks again!

dodobirdlord wrote at 2020-10-30 17:31:04:

REST is a poor model for many scenarios, most obviously when the client and server aren’t dealing with resources or aren’t trying to maintain a shared model of state. A distributed database consensus protocol is a good example of the former, and an application server streaming metrics to a metric aggregation server is a good example of the latter.

closeparen wrote at 2020-10-30 17:32:59:

gRPC's HTTP2 transport is basically this, out of the box. You don't need to manually manage routes, handlers, headers, status codes, etc.

thinkharderdev wrote at 2020-10-30 18:10:51:

In many ways comparing REST and gRPC is apples-to-oranges. You can design a gRPC API to work according to REST principles, and it is actually generally encouraged to do so.

And more to the point, the vast majority of "REST APIs" I've experienced in the wild are just RPC-style APIs that use JSON.

mnd999 wrote at 2020-10-30 18:18:05:

RPC is apparently en vogue again. Everything new is old.

It’s a pretty decent implementation of the pattern though. Efficient binary protocol (unlike SOAP), built-in security, and none of the complexity of object request brokers. Although you might actually want that, and then you’ll likely end up with something complex like Istio.

wongarsu wrote at 2020-10-30 17:22:31:

> I had the impression RPC was seen as a mistake.

Aren't REST and webhooks just RPC protocols too?

dragonwriter wrote at 2020-10-30 17:33:57:

> Aren't REST and webhooks just RPC protocols too?

REST is not, but the thing that isn't REST that lots of people call REST is basically just RPC-over-HTTP-with-JSON.

k__ wrote at 2020-10-30 17:28:16:

Are they?

I thought the difference was that RPC hides behind a function that looks as if it behaves locally but in fact makes a remote call, while REST makes it explicit that things happen remotely.

dragonwriter wrote at 2020-10-30 17:31:28:

Not only is that not a difference, neither one of those things is true.

Any remote interface tends to “hide behind” a local function; that's just how structured programming (of which most more advanced paradigms are refinements) works. And _Remote_ Procedure Call is fairly explicit that things happen remotely.

k__ wrote at 2020-10-30 17:35:27:

Interesting. That's how I learned it, haha.

Thanks for the clarification!

greenshackle2 wrote at 2020-10-30 21:21:58:

REST is centered on resources (nouns), RPC is centered on procedures (verbs). REST is more constrained.

A Remote Procedure is just that, a procedure. Procedures don't have many constraints. They can implicitly change the state of resources on the server. They can do whatever you want.

REST APIs are supposed to be designed around state transfer. You transfer the state of a resource from server to client with a GET. You transfer the state back to the server with a POST/PUT. The operations are supposed to be 'stateless' in the sense that the result of a write is not supposed to depend on the pre-existing state of the resource on the server.

To give a silly example, let's say I have a Counter service. In RPC I could expose an incrementByOne procedure. And then clients could just call:

      incrementByOne(id=1)

In REST I would have a Counter resource. The RESTful way to increment the counter would be:

      GET /api/counter/1
    -> OK {'id': 1, 'value': 12}

    PUT /api/counter/1 {'id': 1, 'value': 13}
    -> OK

It's more cumbersome, but notice that unlike the RPC call, the result of the PUT request doesn't depend on the current state on the server. The counter will always end up at 13. The PUT request is idempotent: I can repeat it n times and end up with the same result. Obviously that's not true of the RPC call. Notice also that the client must implement its own logic for incrementing.

You could design a RESTful RPC, where the only methods are like:

      getCounter(id) -> Counter

    createCounter(Counter) -> id

    putCounter(Counter)

The opposite, RPC over REST, doesn't really work. I guess you could try representing procedures as resources but it would be incredibly awkward. That's why I say REST is more constrained.

With well-designed REST you should end up with very decoupled logic between server and client, since all they can do is transfer state; each has its own wholly separate logic to deal with that state.

With RPC you can end up with some real spaghetti, where the logic of client and server are intertwined. But not everything can be modeled cleanly as resources, sometimes you really do just want to execute some logic on the server.
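
A rough Go sketch of the two styles side by side; the types and in-memory storage are invented for illustration:

    package counter

    import (
        "context"
        "sync"
    )

    // Counter mirrors the resource from the example above.
    type Counter struct {
        ID    int
        Value int
    }

    type CounterService struct {
        mu       sync.Mutex
        counters map[int]int
    }

    // RPC style: the server owns the increment logic. Calling this twice
    // leaves the counter two higher, so it is not idempotent.
    func (s *CounterService) IncrementByOne(ctx context.Context, id int) {
        s.mu.Lock()
        defer s.mu.Unlock()
        s.counters[id]++
    }

    // REST style: the client computed the new state; PUT just stores it.
    // Repeating the same request always ends at the same value.
    func (s *CounterService) PutCounter(ctx context.Context, c Counter) {
        s.mu.Lock()
        defer s.mu.Unlock()
        s.counters[c.ID] = c.Value
    }

The retry behavior falls out of the shapes: a timed-out PutCounter can be blindly retried, while a timed-out IncrementByOne cannot.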

jeffbee wrote at 2020-10-30 17:54:38:

gRPC is not necessarily binary. It is often conflated with protobuf but it is in fact payload format agnostic. You can run it with JSON payloads if you want.
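
In grpc-go, for example, swapping the payload format is a matter of registering a codec. A minimal sketch using the real encoding.RegisterCodec hook; the jsonCodec type itself is hand-rolled:

    package jsoncodec

    import (
        "encoding/json"

        "google.golang.org/grpc/encoding"
    )

    // jsonCodec satisfies encoding.Codec, letting gRPC carry JSON payloads
    // instead of protobuf over the same HTTP/2 transport.
    type jsonCodec struct{}

    func (jsonCodec) Marshal(v interface{}) ([]byte, error) {
        return json.Marshal(v)
    }

    func (jsonCodec) Unmarshal(data []byte, v interface{}) error {
        return json.Unmarshal(data, v)
    }

    func (jsonCodec) Name() string { return "json" }

    func init() {
        encoding.RegisterCodec(jsonCodec{})
    }

A client can then opt in per call with the grpc.CallContentSubtype("json") call option.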

snipewheelcelly wrote at 2020-10-30 17:14:07:

its use case is companies that want to use SOAP but don't want to say they use SOAP

outworlder wrote at 2020-10-30 17:41:43:

How is it related to SOAP in any way?

bubersson wrote at 2020-10-30 17:56:44:

I'm not OP, but the main parallel is a well-defined schema for communication between services using different underlying technologies.

SOAP is, in my experience, really hard to use and get right, compared to protobufs, which bring a well-understood set of primitives and intuitive support in many languages. gRPC is a solid carrier for protobufs. Yes, gRPC has its cons (e.g. around undefined/nil values), but overall it has worked great for our use cases.

thinkharderdev wrote at 2020-10-30 18:07:32:

Yeah, one way I've described gRPC to colleagues (which may help or hurt depending on the perspective) is that it is "SOAP, but without all the lunacy"

bwarren2 wrote at 2020-10-30 20:46:42:

Does anyone have a favorite intro/guide/book on gRPC? I have been wanting to learn for a while.

weitzj wrote at 2020-10-30 20:59:32:

grpc.io is a great place to start learning.

Also the blog posts on grpc.io are interesting, but I find them harder to discover whilst reading the documentation. But here they are:

https://grpc.io/blog/

Grasping the concept of a context/deadlines is quite helpful:

https://grpc.io/blog/deadlines/
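
The core of the deadline idea in Go, as a hedged sketch; pb is a hypothetical generated client package, while context, status, and codes are the real libraries:

    package client

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"

        pb "example.com/inventory/pb" // hypothetical generated client package
    )

    func getStock(conn *grpc.ClientConn, sku string) error {
        // The 100ms budget travels with the RPC; the server (and anything it
        // calls on our behalf) can observe how much is left and give up early.
        ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
        defer cancel()

        _, err := pb.NewInventoryClient(conn).GetStock(ctx, &pb.GetStockRequest{Sku: sku})
        if status.Code(err) == codes.DeadlineExceeded {
            log.Printf("deadline exceeded somewhere along the call chain for %s", sku)
        }
        return err
    }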

You could also find related information in the Google SRE Handbook (Service Level Objectives):

https://landing.google.com/sre/sre-book/chapters/service-lev...

If you are familiar with Go, the article about "Context" might also be helpful:

https://blog.golang.org/context

But in any case, gRPC is language agnostic and has nothing to do with Go.

To get an idea of how to create an API repository with protobuf definitions to be shared by multiple services/clients, one can look at:

https://github.com/googleapis/googleapis

Saser wrote at 2020-10-30 22:03:09:

In addition to these, I think that Google's API Design Guide (

https://cloud.google.com/apis/design

) and their AIPs (

https://aip.dev

) are good references for learning about how their style of APIs, called resource-oriented APIs, can be designed. There is a linter that can check whether an API follows the AIPs (I know, these acronyms are easy to mix up), available at

https://linter.aip.dev

. I am building a side project following the AIPs and have found them to be very helpful.

Disclaimer: I work at Google, although I would have recommended these resources anyway.

bwarren2 wrote at 2020-10-30 21:09:14:

Thank you!!

weitzj wrote at 2020-10-30 17:20:49:

So the only remaining question is:

When does AWS roll out QUIC support in ALBs?

mrkurt wrote at 2020-10-30 18:42:08:

(Disclaimer: I work on Fly.io, so this post is biased)

It will probably be a while. We've been evaluating QUIC and the ecosystem just isn't quite ready. We opted to release UDP support instead, so apps that want QUIC can use it, while we avoid adding much extra plumbing in front of simple HTTP apps.

Given how much AWS is investing in Rust, they'll probably ship first-class support for QUIC when Hyper does (same as us!):

https://github.com/hyperium/hyper/issues/2078

calcifer wrote at 2020-10-30 18:01:14:

If it's anything like HTTP/2, in 4-5 years.

sneak wrote at 2020-10-30 17:51:10:

This thread seems as good a place as any to ask:

Does anyone have experience (good, bad, otherwise) using the gRPC JSON transcoding option for real-world stuff? I'm debating using it (still need REST clients sometimes) but I'm not sure how hacky it is.

mariojv wrote at 2020-10-31 03:17:25:

This is only somewhat related, but I've used Go's protojson lib for pretty-printing protobuf-encoded data:

https://godoc.org/google.golang.org/protobuf/encoding/protoj...

They say not to rely on the output being stable, so I would recommend guaranteeing a stable translation yourself for a REST client. You can achieve this by translating between the JSON and your proto or gRPC service structures yourself.
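
For reference, the pretty-printing itself is only a few lines. A sketch, where msg stands in for any generated proto message:

    package pretty

    import (
        "fmt"
        "log"

        "google.golang.org/protobuf/encoding/protojson"
        "google.golang.org/protobuf/proto"
    )

    // prettyPrint renders any proto message as indented JSON. The docs warn
    // that the exact output may change between protojson versions, hence the
    // advice above to pin down your own stable translation.
    func prettyPrint(msg proto.Message) {
        out, err := protojson.MarshalOptions{Multiline: true, Indent: "  "}.Marshal(msg)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out))
    }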

et1337 wrote at 2020-10-30 17:57:12:

We use it. It's pretty good. It has a lot of places where you can hook in extra functionality. You get most of the HTTP error status codes for free, but we also have a filter that looks at outgoing protobuf messages for a certain field indicating the message is a response to a create request, which allows us to return an HTTP 202 instead of a 200. We were even able to do Amazon-style request signing.

One thing about request signing: if you use protobuf key-value maps, the order is not deterministic on the wire. This broke our signing. Key-value maps are kind of a protobuf hack anyway, so we ended up using an array of structs. When it came time to add the JSON gateway, we found it pretty easy to write custom JSON serialization/deserialization code to convert the structs to a JSON map. This is all in Go, by the way.
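
A sketch of that workaround: a repeated key/value struct (deterministic wire order) that still renders as a plain JSON object for REST clients. The Entry type here is an invented stand-in for the generated struct:

    package kv

    import (
        "encoding/json"
        "fmt"
    )

    // Entry stands in for the generated protobuf struct that replaced the
    // map; as a repeated field, its wire order is exactly the slice order.
    type Entry struct {
        Key   string
        Value string
    }

    type Labels []Entry

    // MarshalJSON renders the ordered slice as a plain JSON object, so REST
    // clients still see {"k": "v"} rather than an array of pairs.
    func (l Labels) MarshalJSON() ([]byte, error) {
        m := make(map[string]string, len(l))
        for _, e := range l {
            m[e.Key] = e.Value
        }
        return json.Marshal(m)
    }

    func Example() {
        b, _ := json.Marshal(Labels{{"env", "prod"}, {"team", "infra"}})
        fmt.Println(string(b)) // {"env":"prod","team":"infra"}
    }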

human_error wrote at 2020-10-31 03:53:50:

We only use Python, but we let Envoy do the transcoding between gRPC and JSON. No issues.

weitzj wrote at 2020-10-30 17:59:18:

The grpc Gateway in Go worked quite well for us.

I have not tried the native Envoy decoding functionality, yet.

Also, you should look at Google Artman on github.com/googleapis, as sometimes it felt that defining the REST mappings in Protobuf was lacking some features.

Using Google Artman, you kind of mix and match Protobuf with YAML definitions of your service.

We never had to use it, though. It just depends on where you want to put your authentication information.

As of today I would probably change my mind and make it explicit in the payload, i.e. in the Protobuf message, and not fiddle with headers any more.

booi wrote at 2020-10-30 17:52:41:

It probably varies by the language and library, but for Java it has been flawless. I wouldn’t expect many issues for any major language.