
30 Jul 2021
Introducing the TGRS stack for web interfaces
TGRS stands for TypeScript, GraphQL, React and serverless. Over the last couple of years we have successfully built a number of enterprise single-page applications (SPAs) using this stack of technologies, as they complement each other well. In this post I’ll talk about our motivations for choosing the technologies in this stack, the specifics of how we use each technology to get the most out of it, which of the technologies are mandatory, and which can be swapped out for alternatives.
Motivations
The TGRS stack came about because of three problems we saw whilst working on existing SPAs.
The first problem was out-of-control JavaScript. In our experience, a large JavaScript codebase is kind of like a very small child. To its parents, it is a thing of beauty, makes perfect sense, and behaves like an angel. To everybody else, it is highly unpredictable, impossible to understand, and generally smells pretty bad. To make matters worse, as a JavaScript project grows, the situation deteriorates instead of improving (in contrast to small children).
The second problem was out-of-control data-loading code. The goal of a SPA is to provide a nice, unified experience to the user, whilst behind the scenes dealing with a motley collection of back-end REST services, each with its own quirks and specific ways of doing things. However, contacting all of these REST services directly from the browser is not a good idea. The internet is not a particularly fast or reliable network. High latencies mean that you should avoid situations where a single user interaction requires multiple network calls over the wire. Low bandwidth means as little code as possible should be sent to the browser at startup, and as little data as possible should be sent with each subsequent request.
The final problem we’ve seen with SPAs is infrastructure overkill. Many times when we have asked our local infrastructure expert how best to host a modest single-page web application, the proposed solution has involved something like Nginx running in a Docker container, running inside ECS, fronted by ELB. This seems excessive for serving up a set of static assets. For front-end developers, the best infrastructure is as little infrastructure as possible. Furthermore, we’d rather own it ourselves than wait weeks for somebody to set all of it up for us, then have to reach out to an infrastructure expert whenever there’s a problem.
Stack Overview
The TGRS stack aims to address some of the problems we’ve encountered with SPAs. TypeScript helps keep a JavaScript codebase manageable as it grows. GraphQL gives you a strongly-typed protocol for communicating between the client and a single server. React provides a fantastic set of primitives for building a user interface. Finally, serverless technologies make it easy to set up the bare-minimum infrastructure necessary to serve up your SPA.
Before we continue, let us be clear that when we say “serverless”, we mean the general concept of serverless technologies, not the Serverless framework. Also, for the remainder of this post we’ll use AWS services when describing the serverless components of the TGRS stack. This is because AWS is the cloud platform that we’ve generally built these web applications with. However, if you’re on another platform, you can substitute in the appropriate alternatives.
The following diagram summarises how the components of the TGRS stack fit together:

You can see that first, the user fetches a React application (which we’ll just refer to as the “client”) from an AWS CloudFront distribution and runs it in their browser. When the client needs to fetch or update data, it uses GraphQL to communicate with a server. The GraphQL server is running inside an AWS Lambda function, fronted by a single AWS API Gateway endpoint. The server acts as an aggregating layer to a set of upstream services. These upstream services generally have REST interfaces, although they could also be federated GraphQL servers.
This architecture has three key properties. Firstly, the user interface is considered to comprise both the client and the server. Secondly, it uses TypeScript everywhere. Finally, serverless technologies are used to host both the client and the server. We’ll now dive into each part of the stack to see how they all fit together.
TypeScript
TypeScript is the core of the TGRS architecture. This is because it helps JavaScript codebases stay manageable as they become bigger. Returning to my earlier analogy concerning small children, TypeScript is like a nanny that brings some discipline and order to your child’s life as they grow up.
That said, to get the most out of TypeScript it’s important to enable a couple of key flags when using it: noImplicitAny and strictNullChecks.
noImplicitAny forces developers to be explicit about when they want to bypass the type system, instead of everything being untyped by default. The only circumstance under which I could imagine you wouldn’t use it is if you’re porting an existing JavaScript project to TypeScript. Otherwise, if you’re not using noImplicitAny, there’s not much point in using TypeScript at all.
strictNullChecks forces developers to explicitly deal with the possibility of null or undefined values at compile time, rather than waiting until runtime. It’s difficult to overstate how much of a positive impact this can have on your code. Entire classes of defect disappear. That said, it’s really important to switch this on from the start of your project, as retrofitting it later can be difficult.
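Both flags live under compilerOptions in tsconfig.json (they’re also implied by the umbrella strict setting). As a rough sketch, using purely hypothetical names, this is the kind of code the two flags reject at compile time:
```typescript
// With noImplicitAny enabled, an untyped parameter is a compile error
// rather than silently becoming `any`:
function formatName(user) {
  // error: Parameter 'user' implicitly has an 'any' type.
  return user.firstName.toUpperCase();
}

// With strictNullChecks enabled, a possibly-undefined value must be
// handled before it is dereferenced:
const users = new Map<string, { firstName: string }>();
const current = users.get("alice"); // type is { firstName: string } | undefined

console.log(current.firstName);  // error: Object is possibly 'undefined'.
console.log(current?.firstName); // fine: the undefined case is handled explicitly
```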
Whilst these flags might appear to be minor command-line options, they steer the developer towards writing code that is fundamentally more reliable than it might have been without them. Initially, this can be frustrating for unaccustomed developers, as it requires them to think more in advance about what they’re doing. However, this investment pays off very quickly in the form of fewer runtime defects and code that is easier for other developers to reason about.
GraphQL
GraphQL makes it easy for clients to efficiently and flexibly get the data that they need from a single server. It also provides an excellent foundation for pushing as much logic as possible from the client-side of a user interface to the server-side, a pattern also known as Backend-for-Frontend (BFF).
Whilst it’s possible to implement a BFF with REST, formal interface definitions like OpenAPI (formerly Swagger) have to be bolted on separately, and are only really practical to use if you’re working in a language with mature tooling for them, like Java. In contrast, it’s impossible for a GraphQL server not to have a GraphQL schema. To learn more, check out my post GraphQL: Thinking Beyond the Technology.
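To make that concrete, here’s a purely hypothetical fragment of a BFF schema. It exposes data in exactly the shape a single screen needs, rather than in the shape of the upstream REST services it would be aggregated from:
```typescript
import gql from "graphql-tag";

// A hypothetical BFF schema: one query returns everything a single
// "order details" screen needs, shaped for the view rather than for the
// upstream services behind it.
export const typeDefs = gql`
  type OrderDetails {
    id: ID!
    customerName: String!
    lineItemCount: Int!
    totalIncludingTax: Float!
  }

  type Query {
    orderDetails(orderId: ID!): OrderDetails
  }
`;
```
The resolver behind orderDetails would call whichever upstream services it needs and reshape their responses, so the client only ever sees this view-friendly structure.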
The TGRS architecture recommends that you write your own GraphQL server rather than using hosted services like AppSync. This is because hosted services tend to let the implementation drive the schema, rather than the other way around. In our experience building enterprise apps with GraphQL, implementing a schema that is most useful to the client inevitably requires writing custom resolvers in the server. But with AppSync, for example, you have to write custom resolvers using Velocity templates, which are hard to develop, debug and test. Naturally, people want to avoid hard things, so they avoid schema designs that will require custom resolvers. The end result is a sub-optimal GraphQL schema that, for example, requires the client to make multiple calls to the server in response to a single user interaction. This defeats the point of using GraphQL in the first place.
The good news is that it’s easy to write your own GraphQL server in TypeScript using a framework like Apollo Server. This also makes it easy for the same developer to work on both the client and server-side of a new feature, an important practice when using the BFF pattern.
Apollo Server comes with shims for running the server in different environments. This means you can, for example, run it both in an AWS Lambda function, which is great for production releases, and in an Express server, which is better for day-to-day development and testing. For more information on doing this, see Building a Portable Apollo Server.
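Here’s a minimal sketch of what that portability can look like. It assumes a hypothetical shared ./schema module that exports your typeDefs and resolvers (for example, the schema sketched above plus its resolvers); the same configuration is then exported once as a lambda handler and once wrapped in an Express app:
```typescript
// lambda.ts – the entry point deployed to AWS Lambda
import { ApolloServer } from "apollo-server-lambda";
import { typeDefs, resolvers } from "./schema"; // hypothetical shared module

const server = new ApolloServer({ typeDefs, resolvers });
export const handler = server.createHandler();
```
```typescript
// local.ts – the entry point for day-to-day development and testing
import { ApolloServer } from "apollo-server-express";
import express from "express";
import { typeDefs, resolvers } from "./schema"; // hypothetical shared module

async function start() {
  const app = express();
  const server = new ApolloServer({ typeDefs, resolvers });
  await server.start(); // required from Apollo Server 3 onwards
  server.applyMiddleware({ app, path: "/graphql" });
  app.listen(4000, () =>
    console.log("GraphQL server at http://localhost:4000/graphql")
  );
}

start();
```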
GraphQL’s typed nature also meshes well with TypeScript. Code generators like Apollo CLI and GraphQL Code Generator can consume GraphQL schemas and automatically produce TypeScript types for both GraphQL clients and servers. This helps avoid many of the runtime defects that would result from inadvertent mismatches between the client and server.
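As a rough illustration (the query is hypothetical, and the exact output depends on which generator and plugins you configure), a client query like the one below can have TypeScript types generated for it automatically:
```typescript
import { gql } from "@apollo/client";

// A hypothetical client-side query...
export const GET_ORDER_DETAILS = gql`
  query GetOrderDetails($orderId: ID!) {
    orderDetails(orderId: $orderId) {
      id
      customerName
      totalIncludingTax
    }
  }
`;

// ...for which a code generator can emit types roughly like these, keeping
// the client's expectations in lockstep with the server's schema:
export interface GetOrderDetailsVariables {
  orderId: string;
}

export interface GetOrderDetails {
  orderDetails: {
    id: string;
    customerName: string;
    totalIncludingTax: number;
  } | null;
}
```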
That said, it’s worth noting that all of the new concepts and technologies around GraphQL mean that it can take time for newcomers to learn the ropes. Consequently, whilst we highly recommend GraphQL in the TGRS stack, if you need to get something up-and-running quickly, it’s not a deal-breaker if you temporarily go directly from the client to an upstream REST service. However, as soon as your app starts to grow and evolve, we suggest introducing GraphQL as your over-the-wire protocol.
React (with hooks)
TGRS recommends React because of its simple but powerful component model. It also has a functional-programming mindset that focusses on minimising state and side-effects, two of the biggest causes of bugs in user interfaces. Furthermore, because both React and GraphQL have come out of Facebook, the two technologies share common philosophies and tend to work together well. Finally, whilst React’s TypeScript typings aren’t perfect, they’re good enough for most use-cases.
React hooks mean code for managing state and effects is very concise and composable. For the vast majority of cases, hooks are a simpler option than using class-based React components. Consequently, we strongly recommend you use them. Our TGRS projects contain many hundreds of hook-based functional components, but only one or two class-based components (usually Error Boundaries).
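For example, cross-cutting behaviour can be pulled out into a custom hook and reused by any component that calls it. The hook below is a hypothetical sketch:
```typescript
import { useEffect, useState } from "react";

// A hypothetical custom hook: debounce any changing value. Because hooks
// compose, any function component can reuse this by simply calling it.
export function useDebouncedValue<T>(value: T, delayMs: number): T {
  const [debounced, setDebounced] = useState(value);

  useEffect(() => {
    const timer = setTimeout(() => setDebounced(value), delayMs);
    return () => clearTimeout(timer); // clean up if the value changes again
  }, [value, delayMs]);

  return debounced;
}

// Usage inside a function component:
//   const debouncedSearchTerm = useDebouncedValue(searchTerm, 300);
```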
It’s also worth noting that if you use hooks for managing UI state and a GraphQL client library like Apollo Client for querying and mutating server-side state, then the need for state-management frameworks like Redux drops off. Redux stores generally have two sections: entities, which is a normalised store of data that is used by the application, and ui, which contains UI state and whose structure roughly mirrors that of your component hierarchy. Apollo Client has its own normalised cache, effectively removing the need for the entities section of a Redux store. React’s useState hook makes it trivial to store UI state in the component hierarchy itself rather than in a separate structure, eliminating the need for a ui section in a Redux store. Storing this state in the component hierarchy also has the benefit of avoiding memory leaks caused by stale UI state hanging around in a Redux store.
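To make that concrete, here’s a rough sketch (the query, types and component are all hypothetical): Apollo Client’s cache looks after the server-side data, useState looks after the purely local UI state, and there’s no Redux store anywhere:
```typescript
import { gql, useQuery } from "@apollo/client";
import React, { useState } from "react";

// Hypothetical query; in a real project its types would come from codegen.
const GET_ORDERS = gql`
  query GetOrders {
    orders {
      id
      customerName
    }
  }
`;

interface Order {
  id: string;
  customerName: string;
}

export function OrderList() {
  // Server-side state: fetched, cached and kept consistent by Apollo Client.
  const { data, loading, error } = useQuery<{ orders: Order[] }>(GET_ORDERS);

  // UI-only state: lives right here in the component, not in a global store.
  const [selectedId, setSelectedId] = useState<string | null>(null);

  if (loading) return <p>Loading…</p>;
  if (error) return <p>Something went wrong.</p>;

  return (
    <ul>
      {data?.orders.map((order) => (
        <li key={order.id} onClick={() => setSelectedId(order.id)}>
          {order.customerName}
          {order.id === selectedId && " (selected)"}
        </li>
      ))}
    </ul>
  );
}
```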
Whilst React is TGRS’s preferred web framework, its un-opinionated nature also means that early in your project, you have to make important, informed decisions about how you are going to use it. TGRS has a clear opinion about how to do state management with React, but you will still, for example, have to make decisions about how you structure your project’s files, what build tooling you use, which router you use, and how you approach error handling. Furthermore, you’ll have to enforce these standards as your team grows. Consequently, we wouldn’t object too strongly if you chose to use an alternative like Angular, Vue.js or Ember.js if it suited your team better.
Serverless
TGRS recommends you use serverless cloud technologies to host your client and server. This is because serverless technologies are relatively straightforward for non-experts to set up, have pay-per-use pricing models, and require minimal ongoing maintenance. That said, if you are working in an environment that prohibits or restricts serverless technologies, it’s not a problem if you have to use more traditional infrastructure (although it is more work).
AWS CloudFront
We recommend distributing your client using a CDN like AWS CloudFront. Often this suggestion is met with the response: “But we don’t need the performance of a CDN!”. However, performance is not the point here. It’s usually easier and cheaper to distribute a single-page web application’s static assets with a CDN than it is to set up and manage your own web servers, containers and load balancers.
The only thing to keep in mind is security. Because your distribution will be on a public network, it will have a name that can be accessed by anybody in the world. This means that if you only want users on a private network to be able to access the application, then you’ll have to restrict access via an AWS WAF IP whitelist. Similarly, if you only want particular users to be able to use the application, you’ll have to introduce some sort of authentication and authorisation mechanism to the client.
AWS Lambda (w/ API Gateway)
We’ve had great success hosting our GraphQL server inside an AWS Lambda function, bypassing the need for application servers, containers, load balancers, and all of their associated configuration. Furthermore, because a GraphQL server only needs to expose one HTTP endpoint, the AWS API Gateway configuration is very straightforward.
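The sample project described below uses the AWS SAM CLI to run the lambda locally, but to give a feel for how little infrastructure is involved, here’s a rough sketch using AWS CDK v2. This is just one option we’re assuming for illustration (any infrastructure-as-code tool will do), and the runtime, handler name and asset path are hypothetical:
```typescript
import { Stack, StackProps } from "aws-cdk-lib";
import * as apigateway from "aws-cdk-lib/aws-apigateway";
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Construct } from "constructs";

export class GraphQLServerStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // The GraphQL server, bundled into a hypothetical dist/server directory,
    // running inside a single lambda function.
    const graphqlFunction = new lambda.Function(this, "GraphQLFunction", {
      runtime: lambda.Runtime.NODEJS_14_X,
      handler: "lambda.handler", // the handler exported by the lambda entry point
      code: lambda.Code.fromAsset("dist/server"),
    });

    // A single API Gateway endpoint in front of the function is all a
    // GraphQL server needs.
    new apigateway.LambdaRestApi(this, "GraphQLApi", {
      handler: graphqlFunction,
    });
  }
}
```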
As mentioned above, Apollo Server can be easily run in a lambda function. The only thing it can’t do in a lambda function is support GraphQL subscriptions. This is because subscriptions are inherently persistent, whereas lambdas are transient. If you find that you really need subscriptions, you’ll have to run the GraphQL server in a more persistent environment, such as Express. However, so far we have found using polling queries against lambda functions to be sufficient for meeting our near-real-time data synchronisation requirements.
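A polling query is just a one-line option on Apollo Client’s useQuery hook; the query and hook below are hypothetical:
```typescript
import { gql, useQuery } from "@apollo/client";

// Hypothetical query for data that needs near-real-time freshness.
const GET_ORDER_STATUS = gql`
  query GetOrderStatus($orderId: ID!) {
    orderStatus(orderId: $orderId)
  }
`;

export function useOrderStatus(orderId: string) {
  // Re-fetch every 10 seconds instead of relying on a subscription.
  return useQuery(GET_ORDER_STATUS, {
    variables: { orderId },
    pollInterval: 10_000,
  });
}
```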
Note that, to be accessed by clients distributed via AWS CloudFront, your lambda function must also be accessible via a public name. This means that, like your CloudFront distribution, if you want to restrict access to only those on a private network, you’ll have to do it via IP whitelists. Similarly, if you want to restrict access to certain users, you must send access tokens from the client to the server with every request, and have them validated before the server processes that request.
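With Apollo Client, the usual way to attach the token to every request is with a context link. The sketch below assumes a hypothetical getAccessToken() function supplied by whatever authentication library you’re using, and a hypothetical endpoint URL:
```typescript
import { ApolloClient, createHttpLink, InMemoryCache } from "@apollo/client";
import { setContext } from "@apollo/client/link/context";

// Hypothetical: however your authentication library exposes the current token.
declare function getAccessToken(): Promise<string>;

// Hypothetical GraphQL endpoint.
const httpLink = createHttpLink({ uri: "https://example.com/graphql" });

// Attach the access token to every request; the server validates it before
// executing the operation.
const authLink = setContext(async (_operation, { headers }) => ({
  headers: {
    ...headers,
    authorization: `Bearer ${await getAccessToken()}`,
  },
}));

export const client = new ApolloClient({
  link: authLink.concat(httpLink),
  cache: new InMemoryCache(),
});
```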
Finally, it’s worth noting that, whilst lambdas can be extremely cheap to run, they are best suited to applications that are only subject to low-to-medium loads. This makes them ideal for in-house enterprise applications, which often have a relatively low number of concurrent users. If you are building a consumer-facing application with a potentially high number of concurrent users, you should allow extra time to test that your lambda function will meet your performance requirements without becoming prohibitively expensive. If it does become too expensive, you may need to introduce caching infrastructure (for example, a CDN). Alternatively, if the lambda just can’t scale up fast enough (irrespective of the expense), you should consider features like Provisioned Concurrency.
The Sample Project
To demonstrate how the various technologies in the TGRS stack can fit together, we have created a sample project. In addition to Apollo Client and Apollo Server, it also uses:
- Create React App to package up the client and run it locally
- The AWS SAM CLI to emulate running the GraphQL server in a lambda function locally
- Cypress to do a simple browser-based integration test
- Jest for unit testing
- React Testing Library for component testing
- Apollo Server Express to run the GraphQL server locally during development and testing
- Yarn 1 Workspaces to manage the code for the client, server and integration test, as well as anything that is shared between them
- The Apollo CLI and GraphQL Code Generator to generate TypeScript types for both the client and server
Note that these additional technology choices are not considered to be mandatory parts of the TGRS architecture. However, we have used them all successfully on production TGRS projects.
Conclusion
We’ve found that the combination of TypeScript, GraphQL, React and serverless technologies provides a great balance between code-quality and day-to-day agility when building a single-page web application. Once the foundation is set up, it’s easy to add new functionality, whilst still keeping the codebase manageable. Furthermore, for apps receiving only low-to-medium loads (which is often the case with enterprise applications), running everything on serverless infrastructure keeps costs very low. In short, we think that these technologies will let you be productive both right now, and as your app grows in the future.