
Controlling Serverless Web Application Traffic on AWS
06 May 2022
On a recent project we were asked by a client if it would be possible to host a React app using serverless technologies, but also ensure that traffic never left their VPC and corporate network.
In this post I’m going to talk about how we achieved this outcome, and why it proved to be more of a challenge than we first expected.
But Why?
The client wanted serverless because they didn’t want to have to manage servers. They certainly didn’t want to host something like Apache httpd on EC2 instances. Nor did they want to use an ECS or EKS cluster with a load balancer in the VPC. This is not unreasonable. Why manage infrastructure for hosting web servers and serving up static assets if there are other alternatives?
Normally, in such situations we propose to our clients that they go serverless by serving their assets via AWS CloudFront, and hosting the backend in API Gateway and Lambda (for more information on how we use these services as part of a broader suite of technologies for building web applications, see Ben Teese’s blog post on the TGRS stack).
But unfortunately, CloudFront wasn’t going to work for this particular client, even if we protected the CloudFront distribution with an AWS WAF whitelist. This is because using CloudFront meant that traffic would have to leave their corporate network, which violated their strict security controls and compliance requirements.
So what could we do instead?
Background
Before we go any further, I should first give you a little bit more information about the networking setup. The client had a well-segmented corporate network, comprising an on-premise network connected to a central shared-services AWS account using Transit Gateway. The VPC in the AWS account containing the application was then peered with this shared-services account, with the appropriate Route 53 DNS forwarding and routing tables set up within the services account. Traffic from internal users generally flowed through this arrangement like this:
[Diagram: internal user traffic flowing from the on-premise network, through the Transit Gateway in the shared-services account, to the application VPC]
Note that there is no unfiltered access to the public internet, and absolutely no access back from the internet into the network.
Getting Started
To block access to our serverless backend from the public internet, we just had to make our API Gateway endpoint private. So with CloudFront off the table for the frontend, we wondered what would happen if we tried serving the static frontend files from S3 via a private API Gateway endpoint as well.
To do this, we configured API Gateway with the {proxy+} keyword to pass through all subpaths to S3. It was fiddly, but worked in the end.
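To give a flavour of what’s involved, here’s a rough CloudFormation sketch of that setup. It’s illustrative only: the bucket, the IAM role granting API Gateway read access to it, and the usual deployment and stage resources are all assumed, every name is a placeholder, and the VPC endpoint it refers to is covered a little later in this post.

```yaml
Resources:
  # A private REST API: only reachable from inside the network
  PrivateApi:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: frontend
      EndpointConfiguration:
        Types: [PRIVATE]
      # Private APIs also need a resource policy permitting invocation
      # via a VPC endpoint (defined further down in this post)
      Policy:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal: '*'
            Action: execute-api:Invoke
            Resource: execute-api:/*
            Condition:
              StringEquals:
                aws:SourceVpce: !Ref ApiVpcEndpoint

  # Greedy {proxy+} path that passes every subpath through to S3
  ProxyResource:
    Type: AWS::ApiGateway::Resource
    Properties:
      RestApiId: !Ref PrivateApi
      ParentId: !GetAtt PrivateApi.RootResourceId
      PathPart: '{proxy+}'

  ProxyMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      RestApiId: !Ref PrivateApi
      ResourceId: !Ref ProxyResource
      HttpMethod: GET
      AuthorizationType: NONE
      RequestParameters:
        method.request.path.proxy: true
      Integration:
        Type: AWS
        IntegrationHttpMethod: GET
        # Map the {proxy} path variable straight onto the S3 object key
        Uri: !Sub arn:aws:apigateway:${AWS::Region}:s3:path/${AssetsBucket}/{proxy}
        Credentials: !GetAtt ApiGatewayS3ReadRole.Arn
        RequestParameters:
          integration.request.path.proxy: method.request.path.proxy
        IntegrationResponses:
          - StatusCode: 200
            ResponseParameters:
              # Pass Content-Type back so the browser renders assets correctly
              method.response.header.Content-Type: integration.response.header.Content-Type
      MethodResponses:
        - StatusCode: 200
          ResponseParameters:
            method.response.header.Content-Type: true
```

Part of the fiddliness is that binary assets like images and fonts also need to be listed in the RestApi’s BinaryMediaTypes property before API Gateway will pass them through intact. So, problem solved?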
Not quite. The one drawback of API Gateway is that endpoints are assigned random, arbitrary URLs. Whilst this isn’t a problem for backends, it’s not so good for frontends: it seems a little unreasonable to ask end-users to type a randomly-generated URL into their browser!
At first, fixing this problem seemed simple, as API Gateway has a custom domain feature. Unfortunately digging a little deeper into the documentation revealed that it’s not supported for private APIs:
[Screenshot: API Gateway documentation noting that custom domain names are not supported for private APIs]
So how else could we get a custom domain name? Our thought was to use an application load balancer or proxy within the VPC that could handle calls from the user to API Gateway. Unfortunately you can’t connect either a classic or application load balancer directly to API Gateway (probably for good reason). So we would need something to sit between the load balancer and API Gateway.
Enter VPC Endpoints
A VPC Endpoint is pretty much exactly what it sounds like: an endpoint bound to a VPC that can be used to access AWS services. It’s possible to set up a VPC Endpoint for a private API Gateway endpoint, so we did just that. We then pointed our load balancer to a target group containing the VPC Endpoint IP addresses. The end result looked a bit like this:
[Diagram: users connecting through the internal load balancer to the VPC Endpoint, and on to the private API Gateway endpoint]
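In CloudFormation terms, the new pieces look roughly like this. Again it’s a sketch with placeholder names, and note the hard-coded target IPs: CloudFormation doesn’t expose a VPC Endpoint’s addresses directly, a problem we’ll come back to under Limitations.

```yaml
  # Interface endpoint for API Gateway inside the application VPC
  ApiVpcEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      ServiceName: !Sub com.amazonaws.${AWS::Region}.execute-api
      VpcEndpointType: Interface
      VpcId: !Ref AppVpc
      SubnetIds: [!Ref PrivateSubnetA, !Ref PrivateSubnetB]
      SecurityGroupIds: [!Ref EndpointSecurityGroup]
      PrivateDnsEnabled: true

  # Target group whose targets are the endpoint's private IP addresses
  ApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      VpcId: !Ref AppVpc
      TargetType: ip
      Protocol: TLS
      Port: 443
      Targets:
        - Id: 10.0.1.23   # one entry per endpoint network interface;
        - Id: 10.0.2.47   # placeholder addresses, see Limitations below

  # TLS listener on an internal network load balancer
  ApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref InternalLoadBalancer
      Protocol: TLS
      Port: 443
      Certificates:
        - CertificateArn: !Ref AppCertificate
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref ApiTargetGroup
```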
The only catch was that it still didn’t work! After a little bit of poking around we realised that, whilst our requests were indeed being forwarded through the load balancer to API Gateway, API Gateway couldn’t tell which API the requests were meant for. Consequently it was just ignoring them.
To fix this we added a Route 53 CNAME record that pointed a user-friendly custom domain name at the load balancer:
[Screenshot: Route 53 CNAME record pointing the custom domain name at the load balancer]
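The record itself is a one-liner in CloudFormation; something like this, assuming a private hosted zone and with the domain name as a placeholder:

```yaml
  AppDnsRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref InternalHostedZone
      Name: app.corp.example.com               # the user-friendly name
      Type: CNAME
      TTL: '300'
      ResourceRecords:
        - !GetAtt InternalLoadBalancer.DNSName # the load balancer
```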
It worked! Traffic from users on the internal network would go through the transit gateway to the load balancer, then on to API Gateway via the VPC Endpoint. At that point the only thing left to do was add an API Gateway method that mapped / to /index.html in our S3 bucket, emulating what a rewrite rule would do in a regular web server.
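That rewrite is just one more method on the API’s root resource, this time with a fixed S3 key instead of the {proxy} variable. A sketch, under the same assumptions as before:

```yaml
  RootMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      RestApiId: !Ref PrivateApi
      ResourceId: !GetAtt PrivateApi.RootResourceId
      HttpMethod: GET
      AuthorizationType: NONE
      Integration:
        Type: AWS
        IntegrationHttpMethod: GET
        # Requests for / always fetch index.html from the bucket
        Uri: !Sub arn:aws:apigateway:${AWS::Region}:s3:path/${AssetsBucket}/index.html
        Credentials: !GetAtt ApiGatewayS3ReadRole.Arn
        IntegrationResponses:
          - StatusCode: 200
            ResponseParameters:
              method.response.header.Content-Type: integration.response.header.Content-Type
      MethodResponses:
        - StatusCode: 200
          ResponseParameters:
            method.response.header.Content-Type: true
```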
Limitations
There are a couple of limitations with this approach.
Firstly, API Gateway limits the size of files that can be returned through it to 10MB. However, for a web application this isn’t too much of an issue. Part of the job of any modern front-end build tool is to break a large application into chunks that can be passed over the network and interpreted by the browser without making the user wait for too long. These chunks are usually far smaller than 10MB.
Secondly, it’s worth keeping in mind that you pay for API Gateway by the number of requests it receives. However, you’re billed in batches of one million requests, so unless you have thousands of internal users, the cost of serving up your front-end via API Gateway is still likely to be dwarfed by the cost of serving up your backend.
Thirdly, if you’re automating the setup of this stack using CloudFormation, you may have to create a custom resource to grab the IP addresses of the API Gateway VPC Endpoint so that your load balancer can be configured with them.
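Here’s a minimal sketch of such a custom resource, using an inline Lambda function. The role name is a placeholder (it needs ec2:DescribeNetworkInterfaces), and the cfnresponse helper module is provided automatically for inline code:

```yaml
  EndpointIpLookup:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.9
      Handler: index.handler
      Timeout: 30
      Role: !GetAtt EndpointIpLookupRole.Arn
      Code:
        ZipFile: |
          import boto3, cfnresponse

          def handler(event, context):
              try:
                  data = {}
                  if event['RequestType'] != 'Delete':
                      enis = event['ResourceProperties']['NetworkInterfaceIds']
                      nics = boto3.client('ec2').describe_network_interfaces(
                          NetworkInterfaceIds=enis)['NetworkInterfaces']
                      # Expose each address as Ip0, Ip1, ... for Fn::GetAtt
                      data = {'Ip%d' % i: n['PrivateIpAddress']
                              for i, n in enumerate(nics)}
                  cfnresponse.send(event, context, cfnresponse.SUCCESS, data)
              except Exception as e:
                  cfnresponse.send(event, context, cfnresponse.FAILED,
                                   {'Error': str(e)})

  EndpointIps:
    Type: Custom::EndpointIps
    Properties:
      ServiceToken: !GetAtt EndpointIpLookup.Arn
      NetworkInterfaceIds: !GetAtt ApiVpcEndpoint.NetworkInterfaceIds
```

The target group’s entries can then be !GetAtt EndpointIps.Ip0 and so on, rather than hard-coded addresses.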
Finally, in theory the IP addresses of VPC Endpoints can potentially change over time, meaning that your Network Load Balancer configuration might suddenly break. In practice I haven’t seen this happen yet, and I’ve got VPC Endpoints that have been operating for over twelve months. However, depending on how critical your application is, you may wish to add an AWS Lambda function on a cron schedule to ensure the latest VPC Endpoint IP addresses are always part of the load balancer’s target group.
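A sketch of what that refresher might look like, again with placeholder role names, an arbitrary five-minute schedule, and the usual AWS::Lambda::Permission granting EventBridge invoke access omitted:

```yaml
  RefreshTargets:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.9
      Handler: index.handler
      Timeout: 30
      Role: !GetAtt RefreshTargetsRole.Arn  # needs ec2 describe and elbv2 (de)register
      Environment:
        Variables:
          ENDPOINT_ID: !Ref ApiVpcEndpoint
          TARGET_GROUP_ARN: !Ref ApiTargetGroup
      Code:
        ZipFile: |
          import os, boto3

          def handler(event, context):
              ec2, elb = boto3.client('ec2'), boto3.client('elbv2')
              arn = os.environ['TARGET_GROUP_ARN']
              # The endpoint's current private IP addresses
              enis = ec2.describe_vpc_endpoints(
                  VpcEndpointIds=[os.environ['ENDPOINT_ID']]
              )['VpcEndpoints'][0]['NetworkInterfaceIds']
              nics = ec2.describe_network_interfaces(NetworkInterfaceIds=enis)
              fresh = {n['PrivateIpAddress'] for n in nics['NetworkInterfaces']}
              # The addresses currently registered with the target group
              health = elb.describe_target_health(TargetGroupArn=arn)
              current = {t['Target']['Id'] for t in health['TargetHealthDescriptions']}
              # Reconcile the two sets
              if fresh - current:
                  elb.register_targets(TargetGroupArn=arn,
                      Targets=[{'Id': ip, 'Port': 443} for ip in fresh - current])
              if current - fresh:
                  elb.deregister_targets(TargetGroupArn=arn,
                      Targets=[{'Id': ip, 'Port': 443} for ip in current - fresh])

  RefreshSchedule:
    Type: AWS::Events::Rule
    Properties:
      ScheduleExpression: rate(5 minutes)
      Targets:
        - Arn: !GetAtt RefreshTargets.Arn
          Id: refresh-endpoint-targets
```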
Conclusion
Is this a recommended pattern? Probably not. It’s certainly a lot more complicated than the typical CloudFront approach that we’d use. However, if you have strict compliance needs, it’s an approach that still gets you many of the benefits of a serverless architecture, whilst keeping all your traffic within a particular network. If you’re interested in trying it yourself, here’s a CloudFormation template that should help point you in the right direction.