GraphQL: Thinking Beyond the Technology


tl;dr Frontend developers need to start building their own servers, for the sake of both their end-users and themselves. GraphQL is a great way to do it.

The Shine blog is a safe space, so I feel comfortable sharing with you a deep, dark secret from my past: before I shifted to JavaScript, I was actually a Java developer. This means I have worked as both a backend and frontend developer. It also means I am uniquely qualified to share with you another dark industry secret: the things that backend developers say about frontend developers when they’re not around. Are you ready? Here we go:

  • Can’t those frontend developers just use what we’ve given them?
  • Can’t those frontend developers decide what they want, once and for all?
  • Really, how hard can it be to build a frontend? Don’t those frontend devs just fiddle with CSS and bump pixels around the screen?
  • What do those frontend developers actually do with all of their time?

But of course, let’s be honest: frontend developers also say things about backend developers when they’re not around. For example:

  • Why can’t those backend developers just give us what we actually want?
  • How can I be expected to know everything I’m going to need in advance?
  • Really, how hard can building a backend be? Doesn’t it just get data out of a database and send it to us?
  • What do those backend developers actually do with all their time?

There may or may not be backend developers reading this post, but even if there are, I probably wouldn’t have much to say to them. Because, having thought about this for a long time, I’ve reached the following conclusion: frontend developers need to build their own backends. Furthermore, I think that GraphQL is a great way for them to do it.

In this post I’m going to talk about how I came to this conclusion. I’m going to do it by explaining what I think the problem is, drilling down into the causes, and then outlining why I think GraphQL should be part of the solution. In doing so, you’ll see that this is about more than just technology. It’s about culture as well.

The Problem

When it comes to loading data from the server, our client-side code is doing too much. This leads to a deteriorating experience for our users because, as our apps grow and evolve, the time it takes to get the data they need also tends to increase. It also leads to an increasingly bad experience for us, the frontend developers, as our client-side codebases become larger and more convoluted.

I think there are a number of forces that push our client-side codebases towards this bad state. These forces usually come into play whenever a frontend developer receives a new requirement and has the following realisation: I need a new piece of data from the backend. This is where everything begins.

You see, at this point, the frontend developer usually has two options available to them. They can either:

  • Call another endpoint to get the data, or
  • Get more data from an endpoint they’re already calling

Let’s walk through each of these scenarios to get a sense of how the consequences play out.

Calling another endpoint

This is probably the most popular option for the more introverted frontend developers amongst us, mostly because it means we don’t have to talk to any other humans, especially those dreaded backend developers. We know what we want, we poke around until we find the endpoint that gives it to us, then we add client code to call it.

Problem solved, right? Well, no, because now we have another problem. Data fetching is almost always initiated by a user interacting with your app. The fetch could be a direct result of the interaction (for example, clicking a button) or indirect (for example, clicking a link that navigates to a new route). Either way, now when the user performs some action, our application makes two calls over the network instead of one.

This might seem harmless enough, but what happens when a new requirement comes along later and a developer decides to call yet another endpoint? And then again? Compound this scenario a few times and an app can end up making a bunch of network calls in response to a single user interaction. We call this chattiness, and it's bad for your users' experience.
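To get a feel for why chattiness hurts, here's some back-of-the-envelope arithmetic. The latency figure is a made-up but plausible number for a spotty mobile connection; the point is that each extra sequential endpoint call adds a full round trip before the UI can settle:

```javascript
// Hypothetical illustration: total wait time when a single user
// interaction triggers several sequential network calls.
const LATENCY_MS = 300; // assumed round-trip time on a slow mobile network

function totalWaitMs(numberOfCalls, latencyMs = LATENCY_MS) {
  // Each additional endpoint adds another full round trip.
  return numberOfCalls * latencyMs;
}

console.log(totalWaitMs(1)); // 300
console.log(totalWaitMs(4)); // 1200
```

Four sequential calls means the user waits over a second before anything useful happens, and no amount of server-side optimisation can claw back those round trips.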

When we go down this path, business logic tends to accumulate on the client. This increases the size of the client, and increases the processing power it needs to do its work. It also makes the client-side code more complex, especially if the client is having to deal with a disparate set of endpoints, each of which returns data that requires different handling.

Ideally, you want your app to make at most one call over the network in response to some user interaction. Choosing to call another endpoint immediately moves you away from this ideal, and one step closer to a pit of UX failure.

Getting more data from an endpoint that you’re already calling

So let’s say that you’re actually quite an extroverted developer. You even have a friend who is a backend developer. So rather than just calling another endpoint to get the data you need, you go up to your friend and say: “Hey, I’ve got this new requirement that means I need this new piece of data. I’m already calling this particular endpoint. Could you just add this extra piece of data to the payload that it returns?”

“Sure!”, says your backend developer friend, and they go ahead and do it, and now you’re getting that extra piece of data you need, and you’ve built the new functionality, and you didn’t even have to make an extra call over the network. Well done you!

But if you repeat this process over and over again, it also becomes a problem, because endpoints start to return more and more stuff. Sometimes even different clients can end up calling the same endpoint, each with slightly different requirements. The result is endpoints that return a bunch of stuff, only some of which is of interest to each particular client. Yet all that data has to go over the network, then be parsed on the client side.

I call these fat payloads. You’ve probably seen these before; gigantic wads of JSON (or XML, if you’re really unlucky), that you have to trawl through to find what you’re actually looking for. Fat payloads are probably preferable to chattiness in an app, but it’s a pretty poor tradeoff.
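The cost of a fat payload is easy to sketch with similar arithmetic. The payload sizes and throughput below are made-up numbers, purely for illustration:

```javascript
// Hypothetical illustration: transfer time for a payload where only a
// fraction of the bytes are of interest to this particular client.
function transferMs(payloadBytes, bandwidthBytesPerSec) {
  return (payloadBytes / bandwidthBytesPerSec) * 1000;
}

const FAT_PAYLOAD = 500 * 1024; // 500 KB returned by the shared endpoint
const NEEDED = 20 * 1024;       // 20 KB this client actually uses
const SLOW_3G = 50 * 1024;      // ~50 KB/s effective throughput

console.log(transferMs(FAT_PAYLOAD, SLOW_3G)); // 10000
console.log(transferMs(NEEDED, SLOW_3G));      // 400
```

On a slow connection, the difference between shipping everything and shipping only what the client needs is the difference between ten seconds and under half a second.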

Root Causes

Chattiness is a problem because of network latency. Network latency exists because, whilst the speed of light is fast, it’s not that fast. Even worse, it’s not increasing any time soon. That puts a pretty hard upper limit on how far we can reduce latency. This limit might not be apparent on your super-fast office network, but on a spotty mobile network, or in the developing world, it most certainly will be.

Fat payloads are a problem because of bandwidth, or lack thereof. Fortunately, there’s not such a hard ceiling on available bandwidth. Broadly speaking, the available throughput of our networks is increasing over time. However, day-to-day, it’s something that is very much beyond our control as frontend developers, as it can vary wildly depending on where (and who) our users are.

So at this point it wouldn’t be unreasonable for us to feel a bit stuck; bound by constraints that, at best, we have little control over, and at worst, are simply incontrovertible laws of nature. However, there’s a third thing that I think is holding us back, though nobody really seems to talk about it much. It relates to a culture that we’ve created amongst ourselves as developers that boxes us into a certain way of working.

Culture Clubs

Our apps have to run over an enormous, sometimes slow, often unreliable network, also known as the internet. Over time, we’ve fallen into the trap of using this physical network to partition developers into two distinct groups: frontend developers and backend developers. Furthermore, as our user interfaces grow larger and more complex, the gulf between these groups has widened.

It wasn’t always like this. Once, when the web was mostly about sending HTML from the server to the client, frontend developers kind of were backend developers. This most definitely isn’t the case anymore. Now, writing code that runs in the client can be a full-time job. You can be a full-time client-side JavaScript developer, or a full time iOS developer, or a full time Android developer. For a busy client-side developer, it can be tempting to abdicate responsibility for what the server does to “the backend people”.

But from a performance perspective, the client, network and server can’t be isolated from one another. They’re a package deal. Furthermore, as frontend developers, we’re the ones most accountable to the user, and we’re the ones who, by definition, have to deal with a frontend codebase. Consequently, we’re the ones who have to take responsibility for how our apps perform, not just on the client, but over the internet. And the only way for frontend developers to really take responsibility for that is for us to build our own servers.


“But Ben”, I hear you say, “there’s already a pattern for doing this. It’s been around for years! It’s the BFF pattern, which stands for backend-for-frontend!”

Yes, I’ve heard of that, although, because I have an eight-year-old daughter, when I hear the acronym BFF, this is actually the first thing that comes to my mind:

Best-Friends Forever

But that’s also kind of appropriate too, because with the backend-for-frontend pattern, your backend and frontend really are best friends forever! With the BFF pattern, we’re talking about building a server that satisfies the exact needs of a particular client. Even better, it’s often a server built by the frontend developers themselves.

So don’t get me wrong: I think backend-for-frontends are a fantastic step in the right direction. However, I also think they’re held back by something that holds back just about every other API server that operates over the internet at the moment. Yet it’s something that, up until recently, we’ve largely taken as a given.

We Need To Talk About REST

I’m just going to come right out and say it: as an architectural pattern for having highly interactive clients communicate with servers over a network like the internet, I don’t think that REST is the ideal anymore.

It was once, but it’s not anymore, especially in an increasingly mobile world. Furthermore – and this is an important point to make – whilst in theory it could still be the ideal, I think that in practice, it’s not. And for those who want me to wait longer and give it a chance, I’m sorry, but I’m not sure that I’m willing to wait. There are a couple of reasons for this.

Query Strings

The first is query strings. In REST, the primary mechanism for referencing resources is URLs. And for most intents and purposes, it comes down to the path and query string in the URL. Paths only let us reference one thing, so if we want any sort of additional flexibility, that leaves us with the query string.

Want to filter or sort data? You need to do it with the query string. Want to be able to opt-into requesting multiple, complex nested data structures? Devise a scheme for doing it with the query string. Want clients to be able to trigger a complex calculation on the server-side, without having to make multiple requests? Do it with the query string.
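To make this concrete, here's the kind of ad-hoc query-string scheme a REST endpoint tends to accumulate once clients want filtering, sorting and field selection. The parameter names here are entirely made up, which is rather the point: every API invents its own:

```javascript
// Hypothetical illustration: an ad-hoc filtering/sorting/field-selection
// scheme crammed into a query string. Parameter names are invented.
const params = new URLSearchParams({
  lastName: 'Teese',
  sort: 'firstName',
  fields: 'firstName,orders.date',
});

console.log('/customers?' + params.toString());
// /customers?lastName=Teese&sort=firstName&fields=firstName%2Corders.date
```

There's no standard for any of this, so every client has to learn each endpoint's private mini-language, and the server has to parse and validate it by hand.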

Now, in theory, REST being representational and all, rather than using a query string, you could specify a content type that indicates that not only do you want JSON, but that you want JSON that conforms to a particular schema. The server could then infer from that content-type the exact data that you actually want for that particular request, and return you that data and nothing more.

Have you ever done that? Me neither.

Response Structure

The second big reservation I have with REST is that it has no real opinions about the format of responses. Most of the time these days, we do it with JSON. Yet how that JSON is structured is usually an afterthought. Have you ever worked on a project where, at the end of the day, the only way that you could really know what the structure of a server response was going to be was to actually call the server, preferably in production? I have, many times.

But what about Swagger? In my experience, it’s great in theory, but in practice it’s a lot of work to keep an accurate and detailed Swagger spec up-to-date. Consequently, on most projects I have worked on that supposedly have a Swagger specification, when I need to really understand response structures, I usually still find myself in the same place: actually calling a production server.

I think the reason that all of this happens is that, at the end of the day, Swagger and the like are bolt-ons to REST. They’re not core to it, and they came after-the-fact. Furthermore, irrespective of how good they are, they’re built on a foundation that isn’t appropriate for use by complex clients operating over the internet. This is because REST uses discrete endpoints to identify resources. But our data is a graph, and if we want to easily and efficiently traverse that data over the internet, discrete endpoints are not a good way to do it.

Introducing GraphQL

So you’ve probably guessed where all this is leading: GraphQL.

For those who aren’t already familiar with it, GraphQL addresses head-on the issues of latency and bandwidth that creep up on complex clients. With GraphQL, clients can get all the data they need, in one hit. Furthermore, they only get what they ask for, and nothing more. Finally, GraphQL makes it easy for client developers to evolve their data needs over time. It achieves these properties via two primary mechanisms.

Query Language

The first big feature of GraphQL is the query language. It is well-defined, and it lets you fetch data flexibly and efficiently.

Imagine, for example, that we have an app that needs to report on your customers and the orders that they have placed. Here’s a GraphQL query that fetches the first name and last name of each of your customers:

query {
  customers {
    firstName
    lastName
  }
}

Note that we only specify what customer fields we want, and nothing more.

Queries can also take arguments. For example, here’s a query that gets the first name of each customer that has the last name “Teese”:

query {
  customersWithLastName(lastName: "Teese") {
    firstName
  }
}

In this query we don’t bother asking to get the last name back in the results because we already know what it is.

Things really get interesting when we use queries to fetch related data. For example, here’s a query that gets the first name of each customer, as well as the dates of any orders that each customer has made:

query {
  customers {
    firstName
    orders {
      date
    }
  }
}

And finally, relationships can be traversed in both directions. So, in the following query, we get the first name and last name of each customer who submitted an order on a particular date:

query {
  ordersForDate(date: "2018-11-11") {
    customer {
      firstName
      lastName
    }
  }
}

In short, GraphQL queries are well structured, flexible, and they beat the heck out of using query strings.

Schema Language

The second big feature of GraphQL is the schema language. This is how we know what sort of queries we can run against a server. For the example queries in the previous section, the GraphQL schema would look like this:

type Customer {
   id: ID
   firstName: String
   lastName: String
   orders: [Order]
}

type Order {
   id: ID
   date: String
   customer: Customer
}

type Query {
   customers: [Customer]
   customersWithLastName(lastName: String): [Customer]
   ordersForDate(date: String): [Order]
}

You can see that the schema defines the structure of the data and its relationships. It also defines the root-level queries that you can use as a starting point for navigating your data.

Another important property of GraphQL schemas is that they are typed. The GraphQL type system is reasonably expressive for an over-the-wire protocol, with support for everything from simple scalar types (numbers, strings, booleans), input types and enum types to more advanced features like custom scalar types (for example, dates), interfaces and polymorphism.
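To illustrate a couple of those features, here's a hypothetical extension of the schema above that swaps in a custom Date scalar and adds an enum (these additions are illustrative, not part of the original example):

```graphql
scalar Date

enum OrderStatus {
  PENDING
  SHIPPED
  DELIVERED
}

type Order {
  id: ID
  date: Date
  status: OrderStatus
  customer: Customer
}
```

A client that sends an invalid status, or a server that returns one, gets rejected by the type system rather than silently passing bad data along.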

Importantly, GraphQL server libraries can (and do) enforce that the responses produced by a server match the schema for this server. Furthermore, queries, schemas and the type system are core to the GraphQL specification. If a server does not have a GraphQL schema, it’s not a GraphQL server.

So as you can probably guess, I’m a big fan of GraphQL. But so are lots of other people, and you may have already heard some of the technical benefits that I’ve just outlined. However, I think the benefits extend beyond the technical into the cultural. This realisation dawned on me shortly after I built my first GraphQL server.

It’s not that hard

It’s not actually that hard to build a GraphQL server. In fact, it’s so easy that…even a frontend developer could do it.

You see, the GraphQL reference implementation is built in JavaScript and runs on Node. Apollo Server, probably the most popular library for building GraphQL servers, is also built in JavaScript and runs on Node. And if you don’t feel like writing much code at all, there are even offerings like Amazon AppSync that let you wire a schema directly to AWS datasources.

So whilst you don’t have to write a GraphQL server using JavaScript (there are libraries for other languages available), if you’re already a frontend developer writing JavaScript, it’s a bit of a no-brainer to use JavaScript to write your server as well. Furthermore, I’ve found that once you understand the basic patterns for writing a GraphQL server, it’s reasonably straightforward.
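The core of those patterns is the resolver map: a plain object of functions, one per field, that a library like Apollo Server wires up to your schema. Here's a minimal sketch using the schema from earlier, with hard-coded data so the pattern is visible without any dependencies; in a real server each resolver would call a database, a REST service, or some other data source:

```javascript
// Hypothetical illustration of the resolver pattern used by GraphQL
// server libraries such as Apollo Server. Data is hard-coded for clarity.
const customers = [{ id: '1', firstName: 'Ben', lastName: 'Teese' }];
const orders = [{ id: '100', date: '2018-11-11', customerId: '1' }];

const resolvers = {
  Query: {
    customers: () => customers,
    customersWithLastName: (_, { lastName }) =>
      customers.filter((c) => c.lastName === lastName),
    ordersForDate: (_, { date }) => orders.filter((o) => o.date === date),
  },
  Customer: {
    // Resolves the Customer -> orders relationship, one customer at a time.
    orders: (customer) => orders.filter((o) => o.customerId === customer.id),
  },
  Order: {
    // Traverses the relationship in the other direction.
    customer: (order) => customers.find((c) => c.id === order.customerId),
  },
};

// Calling a resolver directly, the way a GraphQL engine would:
const result = resolvers.Query.customersWithLastName(null, { lastName: 'Teese' });
console.log(result[0].firstName); // Ben
```

The engine walks the query, calling only the resolvers for the fields that were actually requested, which is exactly how clients end up getting what they ask for and nothing more.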

In short, if you’re a frontend developer who accepts the premise that you need to write a server, but feel constrained by the clunkiness of REST, here’s my modest proposal: seriously consider using GraphQL to build your server instead.

An FAQ for nervous frontend developers

Of course, this suggestion requires frontend developers to change their mindsets a little. Now we have to consider the GraphQL server to be an extension of our client code. This requires us to break the habit of equating frontend and backend with client and server, and instead start considering the server to be an extension of the frontend.

I get lots of questions from slightly uneasy frontend developers about this. Here are a couple of the most common ones:

Does this mean I have to own the whole stack?

No. A GraphQL server shouldn’t actually do that much. Its main job is to aggregate all of the sources of data for your frontend into a single schema that is easy and efficient for the client to query. In my experience, a GraphQL server is a great place to paper over all of the inevitable weirdness and hideous quirks of your particular backends. In short, a GraphQL server is a great place to bury your bodies. It’s a much better place to bury bodies than on the client-side.
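For example, here's the sort of body-burying I mean: a small mapping function inside a resolver that normalises an upstream backend's quirks before the data ever reaches the client. The upstream field names here are invented, but anyone who has integrated with a legacy system will recognise the shape:

```javascript
// Hypothetical illustration: papering over an upstream backend's quirks
// inside the GraphQL server, so the client sees a clean Customer shape.
// The upstream field names (CUST_NO, FRST_NM, LST_NM) are made up.
function toCustomer(upstream) {
  return {
    id: String(upstream.CUST_NO),                          // number -> ID string
    firstName: upstream.FRST_NM ? upstream.FRST_NM.trim() : null, // strip padding
    lastName: upstream.LST_NM ? upstream.LST_NM.trim() : null,
  };
}

console.log(toCustomer({ CUST_NO: 42, FRST_NM: ' Ben ', LST_NM: 'Teese' }));
// { id: '42', firstName: 'Ben', lastName: 'Teese' }
```

Every client of the schema now gets the cleaned-up shape for free, instead of each client re-implementing the same trimming and type coercion.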

Furthermore, it’s important to realise that GraphQL servers aren’t limited to getting their data from upstream REST services. They can talk to just about any other source of data as well, including databases and cloud services. Heck, on my current project, our GraphQL server even produces Kafka messages. That’s something you probably couldn’t (or shouldn’t) do on the client-side.

Should the same developer work on both the GraphQL server and the client-side?

Yes, preferably in the same pull request. I’ve found that after an initial learning period, most frontend devs are fine switching between the two, especially if they’re using JavaScript for everything. It’s just not that much of a cognitive jolt for them.

Only split work up between “client-side” and “server-side” developers if you absolutely have to. This is because, in doing so, you’re effectively splitting ownership of the end-user’s experience between these two groups. This causes problems because, to avoid having to wait for somebody else to do something, developers tend to cut corners.

Does this mean I still have to talk to backend developers?

Yes, I’m afraid so. We’re not magically making work go away here, we’re just shifting it from the client into the server, for the sake of our users and ourselves. You’re still going to have to write code that interacts with backend systems, which means you’re still going to have to talk to backend developers.

Some frontend developers get nervous that backend developers will be territorial about this, and not want to cede control of “their” backends. I haven’t experienced this myself. But if there is an issue, I’d suggest you don’t even call your GraphQL server a “backend”. Just call it the “server” or “frontend server” if you really have to.

Let’s wrap this up

The hard constraints of latency and bandwidth, combined with a developer culture that increasingly equates “frontend” and “backend” with “client” and “server”, have led us to a place that is bad for our users and bad for us. The only way out of this hole is for frontend developers to start building their own servers again.

Backend-for-frontend patterns are a great step in the right direction. However, we’re still held back by REST, whose limited expressiveness makes it a poor fit for complex clients talking to servers over the internet. With structured queries, typed schemas and first-class JavaScript support, GraphQL is a great alternative approach. The upside for end-users is faster, more responsive apps. The upside for frontend developers is much more manageable client-side codebases.

GraphQL empowers us, frontend developers, to take back some control of our destinies. We’ll have to learn something new, and I can’t guarantee that backend developers won’t still complain about us. However, I can guarantee that we’ll spend less time complaining about them, because we’ll have more control over the things that we really care about: our users’ experience, and the state of our codebases.

I'm a Senior Consultant at Shine Solutions.
