AWS for Games Blog

Guest post: How Space Ape Games delivers secure WebApps using AWS

We invited Space Ape Games Lead DevOps Engineer Louis McCormack to write a guest blog. Learn how Space Ape Games secure the frontend and backend of a private React application using Amazon CloudFront, AWS Amplify, AWS Lambda@Edge, and Amazon API Gateway. The frontend part is heavily inspired by this post.

About the author

Louis McCormack is Lead DevOps Engineer at Space Ape Games, a UK game studio acquired by Supercell in 2017 who share the same values around agile, empowered teams. To date Space Ape Games have launched four games: Samurai Siege, Rival Kingdoms, Transformers: Earth Wars, and Fastlane: Road to Revenge.

_____________________________________________

Intranets, remember those? Those isolated hubs of information from where you could print a leave form, or find out that Ron from Accounts had retired. You probably still have one. Indeed we all do, if under a different guise: every organisation will need to fence off a portion of cyberspace to communicate ideas and information, or provide tooling to its employees.

In the cloud age, authentication is the new firewall. Of course we can place resources inside our VPCs, but then we need to maintain a client VPN. What if we could host those resources away from our VPC and instead put more faith in our authentication strategy? In recent years this has been made far easier for us, with the introduction of products like Amazon Cognito and AWS Amplify.

At Space Ape Games, our internal tools are mainly delivered as Single Page Applications (SPAs). These mimic how a mobile app might behave: a static asset is delivered up-front, in the shape of some JavaScript (a React application in our case), which then calls into a backend API. Increasingly our backend APIs are serverless applications.

Securing backend APIs is relatively simple. We own the code, and we can enforce authentication appropriately. However, securely delivering the frontend code is somewhat more challenging…

The problem

Our approach to delivering frontend applications has until recently been to have a small Amazon Elastic Compute Cloud (EC2) instance (or container) serving JavaScript from within our VPC. This works fine, but leads us to manage more infrastructure than we’d ideally like. We have several games, each comprising multiple environments across several AWS accounts. That amounts to a decent chunk of mostly-idle compute, turning money into carbon emissions, as well as the need for a fairly complex client VPN setup.

We wanted to see if we could serve these files using Amazon Simple Storage Service (S3), protected with industry-grade authentication. Furthermore, we wanted to use the same authentication strategy to commune with our backend APIs.

This is the solution we pieced together:

I’m afraid this is going to take a fair bit of explaining: we’ll start by running through the theory and end up with a working example. You might want to grab a cup of tea…

The accompanying code for this article can be found in Space Ape Games’ GitHub repo, here.

First there was Amplify

AWS Amplify is a JavaScript library that vastly simplifies the integration of frontend code with a catalogue of AWS services. Relevantly, with a sprinkling of magic, it allows us to seamlessly hook our React applications up to an Amazon Cognito User Pool.

What is an Amazon Cognito User Pool? Good question. User Pools can be thought of as directories of registered users. They have a huge number of customisable features and can handle user registration, authentication and account recovery.

AWS Amplify really does make it ludicrously easy to authenticate against a User Pool. For example, this React application will refuse to render until the user has proven who they are:

import React from 'react';
import { withAuthenticator } from 'aws-amplify-react';
import Amplify from 'aws-amplify';

Amplify.configure({
    Auth: {
        region: 'us-east-1',
        userPoolId: 'YourUserPoolId',
        userPoolWebClientId: 'YourWebClientId',
    }
});

function App() {
    return (
        <div>
            <header className="condescending-message">
                <p>You have Authenticated. Well done You!</p>
            </header>
        </div>
    );
}

export default withAuthenticator(App);

The magic lies in the warm embrace of the withAuthenticator higher-order component. If you attempted to visit a site serving the above code you would be told quite bluntly to log in:

The entire user sign-up flow is handled for us (including email/phone verification if the User Pool is so configured) and is customisable, so you can make it fit with your corporate message.

This is pretty awesome, but there is a problem.

This approach is fine for applications that are intended to be public, but we are trying to serve a private application. We have this barrier, but the user has still downloaded our entire application. They are free to inspect the JavaScript and infer details about our business logic and the API endpoints they could attempt to attack, if they were so inclined.

That said, we unavoidably need at least some portion of our code to be downloadable before authentication in order to provide a login screen.

Enter code splitting

Code splitting is a technique designed to postpone the loading of certain components until they are required. It is useful in reducing the initial load time of a web page.

What is convenient for us about this technique is that it results in separate ‘chunks’ of JavaScript, each representing different React components. We are able to specify an alternative path for some of these chunks, and can apply different authentication profiles to each path.

In React, there is a package — react-loadable — that makes this extremely easy to achieve. For instance, our React code might now look something like this:

import React from 'react';
...
import Loadable from "react-loadable";

...

const LoadableProtected = Loadable({
    loader: () => import(/* webpackChunkName: "protected/a" */ "./components/Protected"),
    loading: () => null // react-loadable requires a loading component
});

function App() {
  return (
    <div className="App">
      <BrowserRouter>
        <Switch>
          <Route path="/login" exact render={() => <Login/>}/>
          <Route path="/protected" exact component={withAuthenticator(LoadableProtected)}/>
        </Switch>
      </BrowserRouter>
    </div>
  );
}

export default App;

(This presupposes the existence of 2 other components, <Login/> and <Protected/>, both of which can be found in the accompanying code.)

Some points:

  • <Protected/> is code-split using the Loadable function. It will only be loaded when the /protected path is requested.
  • The webpackChunkName comment specifies that the code for the component should reside at the path /protected/a.
  • The whole component is wrapped in withAuthenticator to ensure that it is indeed authenticated.

Once compiled, we will have two separate files:

  1. The Protected code (under /static/js/protected) and
  2. The Login code, which essentially just redirects unauthenticated requests to /protected, where they will be caught by withAuthenticator and fed through the login process.

At this point we can upload our files to Amazon S3, slap on some Amazon CloudFront, and point our browsers at it.

But we’d still be able to freely access all of the code, even /protected. We need a way to require authentication.

Enter Lambda@Edge

AWS Lambda@Edge allows us to attach small Lambda functions to Amazon CloudFront distributions, which will be invoked at different junctures of the HTTP request cycle. In our case, we need to have the function invoked when a user requests an object (viewer-request) and it needs to implement the following logic:

  • If the request is not for a resource under /protected, allow it through.
  • Otherwise, ensure that it is a properly authenticated request.

First we need to identify what constitutes a properly authenticated request, and that requires some digging into the Amplify library.

Remember the login flow from above, the magical one that appears when you invoke the withAuthenticator? Well, once a user has signed up and logged in, a set of User Pool tokens — which take the form of JSON Web Tokens (JWTs) — are returned to the browser. We can instruct Amplify to store them in a customised cookieStorage, like so:

Amplify.configure({
    Auth: {
        region: 'us-east-1',
        userPoolId: 'YourUserPoolId',
        userPoolWebClientId: 'YourWebClientId',
        cookieStorage: {
            domain: 'YourDomain.com',
            path: '/',
            expires: 30, // cookie expiration in days
            secure: true
        },
    }
});

This will result in those tokens being sent as a cookie in every request to YourDomain.com (which in reality would be a CNAME to the CloudFront distribution).

A properly authenticated request then, is one that has an accessToken cookie, the value of which is a JWT that has been issued by the correct User Pool. This is what our Lambda@Edge function will need to ascertain.

The Lambda@Edge function we will use to do this can be found here.
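To give a flavour of what it does, here is a condensed sketch of the verification step (this is not the repo function itself; it assumes the jsonwebtoken and jwk-to-pem npm packages, and the file and constant names are purely illustrative):

// A condensed sketch of verifying a Cognito-issued JWT inside Lambda@Edge.
// Assumes the jsonwebtoken and jwk-to-pem packages are bundled with the function.
const jwt = require('jsonwebtoken');
const jwkToPem = require('jwk-to-pem');

// The JWKS for a User Pool lives at:
//   https://cognito-idp.<region>.amazonaws.com/<userPoolId>/.well-known/jwks.json
// Lambda@Edge functions cannot use environment variables, so the keys are
// typically baked into the deployment package at build time.
const jwks = require('./jwks.json');
const ISSUER = 'https://cognito-idp.us-east-1.amazonaws.com/YourUserPoolId'; // placeholder

function verifyToken(token) {
    // Decode (without verifying) just to find out which key signed the token...
    const decoded = jwt.decode(token, { complete: true });
    if (!decoded) throw new Error('Not a valid JWT');

    const jwk = jwks.keys.find((key) => key.kid === decoded.header.kid);
    if (!jwk) throw new Error('Token signed by an unknown key');

    // ...then convert that JWK to a PEM and verify signature, issuer and token use.
    const pem = jwkToPem(jwk);
    const claims = jwt.verify(token, pem, { algorithms: ['RS256'], issuer: ISSUER });
    if (claims.token_use !== 'access') throw new Error('Expected an access token');

    return claims;
}

If any of those checks fail, the function can respond with a 401 rather than forwarding the request to the origin — which is the behaviour we test with curl later on.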

What about the backend?

If you’re still following along, congratulations! (do you want a job?)

Quick recap:

  • Our React application is split into protected and non-protected resources and served through CloudFront.
  • A Lambda@Edge function will allow access to the protected resources only if the correct cookie is sent with the request.

So, if a user has managed to download the protected portion of our application, we can be assured that they are authenticated and are (hopefully, phishing attacks notwithstanding) part of our organisation.

We now need to permit that user to actually use our application by granting them access to our backend APIs. Recall, we wanted to use the same set of JWTs to access our backend APIs.

As it turns out, this is the easy part. The AWS Amplify library makes it a cinch to access the User Pool tokens through the Auth.currentSession() function. All we need to do is extract the correct token (in this case the idToken) and brandish it in requests to our backend APIs:

import { Auth } from 'aws-amplify';
import axios from 'axios';

// BACKEND_URI is a placeholder for the API Gateway endpoint
export const callBackend = async () => {
    let session = await Auth.currentSession();
    return axios.get(`${BACKEND_URI}/helloWorld`, {
        headers: {
            'Accept': 'application/json',
            'Authorization': session.idToken.jwtToken
        }
    });
};

What’s more, AWS Amplify transparently handles the refreshing of the tokens.
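For illustration, the <Protected/> component might use this helper along the following lines (the hook and state names here are invented for the sketch, not taken from the accompanying code):

import React, { useEffect, useState } from 'react';
import { callBackend } from './api'; // wherever the callBackend helper above lives

function Protected() {
    const [greeting, setGreeting] = useState('Loading...');

    useEffect(() => {
        // Amplify attaches a fresh idToken inside callBackend, so this keeps
        // working for as long as the user's session can be refreshed.
        callBackend()
            .then((response) => setGreeting(JSON.stringify(response.data)))
            .catch(() => setGreeting('The backend said no.'));
    }, []);

    return <p>{greeting}</p>;
}

export default Protected;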

So what do you do with the token on the backend? You could verify it in-code. This is entirely possible, but not straightforward (it involves finding the matching JSON Web Key, converting it to a PEM, then verifying the JWT against it — incidentally exactly what our Lambda@Edge function does).

A better idea, if you can, is to use an Amazon API Gateway Cognito Authorizer. The same can be achieved with just a few lines of CloudFormation, and your code need not worry about JWKs-and-JWTs-and-PEMs.
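Once the authorizer is in place, API Gateway rejects requests with a missing or invalid token before they ever reach your code, and the verified claims are handed to the backend in the request context. A minimal sketch of what a backend Lambda might then look like (for a REST API with a Cognito User Pool authorizer; the greeting itself is invented for illustration):

// By the time this handler runs, API Gateway has already verified the JWT.
exports.handler = async (event) => {
    // For a REST API with a Cognito User Pool authorizer, the token's verified
    // claims are available under event.requestContext.authorizer.claims.
    const claims = event.requestContext.authorizer.claims;
    const who = claims.email || claims['cognito:username'];

    return {
        statusCode: 200,
        headers: {
            'Content-Type': 'application/json',
            'Access-Control-Allow-Origin': '*' // tighten this in a real deployment
        },
        body: JSON.stringify({ message: `Hello, ${who}!` })
    };
};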

Putting it all together

Whew! That’s the theory dispensed with; let’s see it in practice.

Clearly there are a lot of moving parts to this solution. So, instead of embarking on a point-and-click screenshot epic, we’ve provided a complete solution here.

The bulk of the work is encapsulated in a SAM template, and the README contains instructions on how to deploy it, create a User Pool, build the frontend etc.

If you follow all of the instructions, you’ll end up with the following:

  • A Cognito User Pool and Client Application
  • An S3 bucket, containing a compiled React application (the frontend)
  • A CloudFront distribution fronting the S3 bucket
  • A Lambda@Edge function associated with the CloudFront distribution
  • A ‘vanilla’ Lambda function, acting as a backend API
  • An API Gateway fronting the backend API, with an Authorizer applied

Here are some salient points:

  • The Authorizer applied to our API Gateway endpoint is configured like this:
Auth:
  DefaultAuthorizer: CognitoAuth
  # Don't authenticate CORS pre-flight requests:
  AddDefaultAuthorizerToCorsPreflight: false
  Authorizers:
    CognitoAuth:
      UserPoolArn: !Ref CognitoUserPoolArn

Pretty simple! That will ensure that all requests have a correct JWT in their Authorization header; otherwise they won’t get handed to the backend.

  • The React code in the Amazon S3 bucket has been split, as detailed above. The protected portion of it is located under static/js/protected.
  • The code for the Lambda@Edge function (almost entirely taken from here and here) starts with this clause:
const cfrequest = event.Records[0].cf.request;
if (!cfrequest.uri.startsWith("/static/js/protected")) {
    // Request is not for protected content. Pass through
    console.log(`request for non-protected content: ${cfrequest.uri}`);
    callback(null, cfrequest);
    return true;
}

If the request makes it past this clause, it is for protected content, and the JWT cookie is verified.
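For completeness, plucking that cookie out of the CloudFront request might look roughly like this (a sketch, not the repo code; it relies on Amplify’s cookieStorage naming its tokens with an .accessToken suffix):

// CloudFront presents headers as lower-cased keys mapping to arrays of {key, value}.
function extractAccessToken(cfrequest) {
    const cookieHeaders = cfrequest.headers.cookie || [];
    for (const header of cookieHeaders) {
        for (const cookie of header.value.split(';')) {
            const [name, ...rest] = cookie.trim().split('=');
            // Amplify stores its tokens under names like
            // CognitoIdentityServiceProvider.<clientId>.<user>.accessToken
            if (name.endsWith('.accessToken')) {
                return rest.join('=');
            }
        }
    }
    return null; // no token found: respond with a 401 instead of forwarding
}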

Does it actually work?

Let’s find out.

We should not be able to access anything under /static/js/protected.

$ curl -i https://12345.cloudfront.net/static/js/protected/a.0f706854.chunk.js
HTTP/1.1 401 Unauthorized

…but we should be able to access the main chunk (i.e. the Login component):

$ curl -i https://12345.cloudfront.net/static/js/main.2e7f9543.chunk.js
HTTP/1.1 200 OK

So far so good.

Let’s fire up a browser and visit https://12345.cloudfront.net. We should be immediately implored to log in (remember, this is the public Login component):

Hit Sign In and we should see a Login form. This is withAuthenticator, doing its thing:

Of course, we don’t have an account yet. Hit Create Account, and a form like this appears:

Once the form has been submitted, a verification code will be sent to the provided email address. After we have entered the verification code, we are free to log in.

Doing so will cause a redirect to the Protected Zone. The JWT-cookie is sent with the request, and the Lambda@Edge function allows us to download the protected code.

Finally a call is made to the backend API and a specially tailored salutation is sent back! Happy days.

End

Just, end.

Thanks (genuinely) for reading!

 

*The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.*