Set up, install, and host an open source container-based game server hosting and matchmaking solution on Amazon EKS
This Guidance shows game developers how to automate the setup of global game servers. It provides step-by-step instructions for configuring a Kubernetes cluster that orchestrates the Agones and Open Match open source frameworks on Amazon Elastic Kubernetes Service (Amazon EKS). By using infrastructure as code, this approach simplifies deployment on Amazon EKS and lets developers run the same system locally on Kubernetes. The architecture also demonstrates how to establish a multi-cluster configuration that optimizes game session placement based on global latency, which reduces lag, improves response times, and enhances the overall gameplay experience.
Please note: [Disclaimer]
Architecture Diagram
[Architecture diagram description]
Step 1
Connect to the AWS Global Accelerator endpoint to request a match allocation.
Step 2
Route the request through a Network Load Balancer to the Open Match Game Frontend container, which creates a matchmaking ticket for the player (a ticket-creation sketch follows these steps).
Step 3
The Director container receives and processes the match requests with player data, while Amazon CloudWatch provides observability into this processing.
Step 4
The Match Making function groups the players' tickets according to its criteria and returns a proposed match to the Director container.
Step 5
Request a match allocation from the Agones Allocator container. The request can target a different Region, as determined by the Match Making function (a sketch of this allocation flow follows these steps).
Step 6
If Region 1 has the lowest latency to the players, allocate a game server in the same Region through the Agones Allocator container.
Step 7
Alternatively, route the request to the Agones Allocator in Region 2 through a virtual private cloud (VPC) peering connection and a Classic Load Balancer, and allocate a server in that Region.
Step 8
Return the internal IP address and port of the game server to the Director container.
Step 9
Translate the internal IP address and port to the equivalent Global Accelerator address and custom routing port for the Director container. Then send the translated address and port to the Frontend container (a port-translation sketch follows these steps).
Step 10
Send the game server connection details, containing a Global Accelerator address and custom routing port, back to the players.
Step 11
Connect to the Global Accelerator endpoint for the designated cluster on a port that routes to the allocated game container for the match.
Step 12
Route the connection to the allocated game container.
Step 13
Encrypt Agones and Open Match certificates with AWS Key Management Service (AWS KMS).
Note:
- Steps 1-10 use gRPC with TLS/SSL.
- Steps 11-12 use the game-specific protocol.
- All the containers are pulled from Amazon Elastic Container Registry (Amazon ECR).
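To make steps 1 and 2 concrete, the following is a minimal sketch of how the Game Frontend container might create a matchmaking ticket after receiving a player's request. It assumes the Open Match Go client (open-match.dev/open-match/pkg/pb), Open Match's default in-cluster frontend address and port, and illustrative latency fields; the Guidance's own frontend code may differ.

```go
// Hedged sketch: a game frontend creating an Open Match ticket (steps 1-2).
package main

import (
	"context"
	"crypto/tls"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"

	pb "open-match.dev/open-match/pkg/pb"
)

func main() {
	// Default in-cluster address of the Open Match Frontend service; adjust
	// to match your installation.
	const omFrontend = "open-match-frontend.open-match.svc.cluster.local:50504"

	// Steps 1-10 use gRPC with TLS; supply your cluster's CA bundle here.
	creds := credentials.NewTLS(&tls.Config{})
	conn, err := grpc.Dial(omFrontend, grpc.WithTransportCredentials(creds))
	if err != nil {
		log.Fatalf("dial Open Match frontend: %v", err)
	}
	defer conn.Close()

	fe := pb.NewFrontendServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// The ticket carries player data, such as measured latency per Region,
	// that the matchmaking function can use to pick a Region (steps 4-5).
	// The field names below are illustrative.
	ticket, err := fe.CreateTicket(ctx, &pb.CreateTicketRequest{
		Ticket: &pb.Ticket{
			SearchFields: &pb.SearchFields{
				DoubleArgs: map[string]float64{
					"latency-region-1": 24,
					"latency-region-2": 110,
				},
			},
		},
	})
	if err != nil {
		log.Fatalf("create ticket: %v", err)
	}
	log.Printf("created ticket %s", ticket.GetId())
}
```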
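Steps 3 and 5 through 8 are driven by the Director. The sketch below shows one way a director might fetch proposed matches from the Open Match Backend and request an allocation from an Agones allocator. It assumes the Open Match and Agones Go clients, illustrative service endpoints and ports, and a hypothetical gameservers namespace; loading the allocator's mTLS client certificates (the certificates referenced in step 13) is omitted for brevity.

```go
// Hedged sketch: a director fetching matches and allocating game servers.
package main

import (
	"context"
	"crypto/tls"
	"io"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"

	allocpb "agones.dev/agones/pkg/allocation/go"
	ompb "open-match.dev/open-match/pkg/pb"
)

// Illustrative endpoints: the in-cluster Open Match Backend service and the
// Agones allocator for the target Region (step 7 reaches Region 2 over the
// VPC peering connection and its Classic Load Balancer).
const (
	omBackend = "open-match-backend.open-match.svc.cluster.local:50505"
	allocator = "agones-allocator.region-1.example.internal:443"
)

func main() {
	// The allocator normally requires mTLS client certificates; configuring
	// them in the tls.Config is omitted here.
	creds := credentials.NewTLS(&tls.Config{})

	omConn, err := grpc.Dial(omBackend, grpc.WithTransportCredentials(creds))
	if err != nil {
		log.Fatalf("dial Open Match backend: %v", err)
	}
	defer omConn.Close()
	backend := ompb.NewBackendServiceClient(omConn)

	allocConn, err := grpc.Dial(allocator, grpc.WithTransportCredentials(creds))
	if err != nil {
		log.Fatalf("dial Agones allocator: %v", err)
	}
	defer allocConn.Close()
	agones := allocpb.NewAllocationServiceClient(allocConn)

	ctx := context.Background()

	// Steps 3-4: ask Open Match to run the matchmaking function against a
	// profile and stream back proposed matches. The function endpoint,
	// profile, and pool are illustrative.
	stream, err := backend.FetchMatches(ctx, &ompb.FetchMatchesRequest{
		Config: &ompb.FunctionConfig{
			Host: "matchfunction.open-match.svc.cluster.local",
			Port: 50502,
			Type: ompb.FunctionConfig_GRPC,
		},
		Profile: &ompb.MatchProfile{
			Name:  "latency-profile",
			Pools: []*ompb.Pool{{Name: "everyone"}},
		},
	})
	if err != nil {
		log.Fatalf("fetch matches: %v", err)
	}

	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatalf("receive match: %v", err)
		}
		match := resp.GetMatch()

		// Steps 5-8: request a game server from the Agones allocator in the
		// Region chosen by the matchmaking function; the response carries the
		// server's internal address and port.
		alloc, err := agones.Allocate(ctx, &allocpb.AllocationRequest{
			Namespace: "gameservers", // hypothetical namespace
		})
		if err != nil {
			log.Printf("allocation failed for match %s: %v", match.GetMatchId(), err)
			continue
		}
		ports := alloc.GetPorts()
		if len(ports) == 0 {
			log.Printf("match %s: game server %s has no ports", match.GetMatchId(), alloc.GetGameServerName())
			continue
		}
		log.Printf("match %s -> %s:%d", match.GetMatchId(), alloc.GetAddress(), ports[0].GetPort())
	}
}
```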
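For step 9, the internal address and port returned by the allocator can be translated to the accelerator's custom routing address with the Global Accelerator ListCustomRoutingPortMappingsByDestination API. This sketch assumes the AWS SDK for Go v2 and hypothetical subnet and node IP values; the Guidance's director may perform this lookup differently.

```go
// Hedged sketch: translating an internal game server address to its Global
// Accelerator custom routing address and port (steps 9-10).
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/globalaccelerator"
)

func main() {
	// The Global Accelerator API is served from us-west-2.
	cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithRegion("us-west-2"))
	if err != nil {
		log.Fatalf("load AWS config: %v", err)
	}
	ga := globalaccelerator.NewFromConfig(cfg)

	// Hypothetical values: the subnet registered as a custom routing endpoint
	// and the internal IP of the worker node hosting the game server.
	out, err := ga.ListCustomRoutingPortMappingsByDestination(context.TODO(),
		&globalaccelerator.ListCustomRoutingPortMappingsByDestinationInput{
			EndpointId:         aws.String("subnet-0123456789abcdef0"),
			DestinationAddress: aws.String("10.0.10.25"),
		})
	if err != nil {
		log.Fatalf("list port mappings: %v", err)
	}

	// Each mapping pairs an internal destination socket with the accelerator
	// socket the game client should connect to; in practice the director
	// filters for the port the allocator returned.
	for _, m := range out.DestinationPortMappings {
		for _, sa := range m.AcceleratorSocketAddresses {
			log.Printf("internal %s:%d -> accelerator %s:%d",
				aws.ToString(m.DestinationSocketAddress.IpAddress),
				aws.ToInt32(m.DestinationSocketAddress.Port),
				aws.ToString(sa.IpAddress),
				aws.ToInt32(sa.Port))
		}
	}
}
```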
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
This Guidance uses the Amazon Elastic Kubernetes Service (Amazon EKS) control plane to provision clusters with Amazon Elastic Compute Cloud (Amazon EC2) managed node groups, and the nodes are built from a launch template. Provisioning clusters in a consistent, repeatable way through the Amazon EKS API reduces both human error and lead time.
Also, containers running on the Amazon EKS cluster can export logs and metrics to Amazon CloudWatch, giving you the observability needed to proactively identify issues on the cluster.
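Cluster logs and metrics are typically shipped with CloudWatch Container Insights or Fluent Bit, and components such as the Director can also publish their own custom metrics. Below is a minimal sketch assuming the AWS SDK for Go v2 and a hypothetical metric name, namespace, and cluster dimension.

```go
// Hedged sketch: publishing a custom matchmaking metric to CloudWatch.
package main

import (
	"context"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/cloudwatch"
	"github.com/aws/aws-sdk-go-v2/service/cloudwatch/types"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatalf("load AWS config: %v", err)
	}
	cw := cloudwatch.NewFromConfig(cfg)

	// Hypothetical custom metric: matches allocated by the director, tagged
	// with the cluster name so dashboards and alarms can filter per cluster.
	_, err = cw.PutMetricData(context.TODO(), &cloudwatch.PutMetricDataInput{
		Namespace: aws.String("GameServers/Matchmaking"),
		MetricData: []types.MetricDatum{{
			MetricName: aws.String("MatchesAllocated"),
			Timestamp:  aws.Time(time.Now()),
			Value:      aws.Float64(1),
			Unit:       types.StandardUnitCount,
			Dimensions: []types.Dimension{{
				Name:  aws.String("ClusterName"),
				Value: aws.String("agones-gameservers-1"), // hypothetical cluster name
			}},
		}},
	})
	if err != nil {
		log.Fatalf("put metric data: %v", err)
	}
	log.Println("published MatchesAllocated metric")
}
```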
Security
An AWS KMS key is created implicitly by the Terraform scripts when an Amazon EKS cluster is launched. That key is used to encrypt Kubernetes secrets stored in Amazon EKS, including Docker registry credentials used to pull images, TLS keys, and other certificates used by Agones and Open Match. This implements envelope encryption, a security best practice for a defense-in-depth security strategy.
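For illustration, the same envelope encryption can also be attached to an existing cluster through the EKS AssociateEncryptionConfig API. The sketch below assumes the AWS SDK for Go v2 and hypothetical cluster and key identifiers; the Guidance itself configures this through Terraform at cluster creation time.

```go
// Hedged sketch: enabling envelope encryption of Kubernetes secrets with KMS.
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/eks"
	"github.com/aws/aws-sdk-go-v2/service/eks/types"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatalf("load AWS config: %v", err)
	}
	client := eks.NewFromConfig(cfg)

	// Hypothetical cluster name and KMS key ARN: Kubernetes secrets (such as
	// the Agones and Open Match TLS material) are then encrypted with this
	// customer managed key.
	_, err = client.AssociateEncryptionConfig(context.TODO(), &eks.AssociateEncryptionConfigInput{
		ClusterName: aws.String("agones-gameservers-1"),
		EncryptionConfig: []types.EncryptionConfig{{
			Provider:  &types.Provider{KeyArn: aws.String("arn:aws:kms:us-east-1:111122223333:key/example-key-id")},
			Resources: []string{"secrets"},
		}},
	})
	if err != nil {
		log.Fatalf("associate encryption config: %v", err)
	}
	log.Println("envelope encryption of Kubernetes secrets requested")
}
```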
Amazon GuardDuty is another service designed to protect your accounts and can be configured with this architecture. It surfaces security risks by analyzing audit logs and runtime activity on the Amazon EKS cluster, and its runtime monitoring for Amazon EKS can detect potential threats in the game server and matchmaking components.
Global Accelerator also supports a secure infrastructure. It gives you fine-grained control over source ports, protocols, and client affinity as traffic flows from game clients on the internet to the game servers running on Amazon EC2 worker nodes launched by Amazon EKS, and it lets you restrict access from untrusted source IPs.
Because Global Accelerator is used, you also benefit from AWS Shield Standard protection to mitigate distributed denial-of-service (DDoS) attacks. With Shield Standard, only valid traffic from allowed client IPs flows through the accelerator and reaches the endpoint groups for the game containers.
Reliability
Network Load Balancer and Classic Load Balancer provide ingress (routing traffic from outside the cluster) to deployments running inside the Amazon EKS clusters. Ingress traffic is routed to a Kubernetes service that terminates it on a Kubernetes pod. Network Load Balancer and Classic Load Balancer give each Amazon EKS cluster a fixed ingress endpoint in its Region, to which Global Accelerator routes internet traffic. Containers on the Amazon EKS clusters can be upgraded in a rolling fashion while Network Load Balancer and Classic Load Balancer continue to route traffic to the services without disruption.
Network Load Balancer provides ingress for the frontend service on the primary Amazon EKS cluster. With Network Load Balancer, you can scale your frontend by running multiple pods for the deployment and expose the frontend service as targets through the load balancer. If one frontend pod crashes or is unavailable, then Network Load Balancer will route traffic to the next available pod.
Classic Load Balancer provides ingress for the allocator service running on the secondary Amazon EKS cluster. With Classic Load Balancer, you can scale the allocator service by running multiple pods for the deployment and exposing them as targets through the load balancer. If one allocator pod crashes or is unavailable, then the Classic Load Balancer will route traffic to the next available pod.
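In Kubernetes terms, the frontend is exposed through a Service of type LoadBalancer annotated for a Network Load Balancer. The client-go sketch below shows one way this might look; the Service name, namespace, selector, port, and kubeconfig path are illustrative, and the Guidance's own manifests may differ.

```go
// Hedged sketch: exposing the frontend deployment through an NLB-backed Service.
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (path is illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("create clientset: %v", err)
	}

	// A LoadBalancer Service annotated for a Network Load Balancer; the AWS
	// load balancer integration provisions the NLB that fronts the frontend
	// pods and keeps routing to healthy pods during rolling updates.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name: "game-frontend",
			Annotations: map[string]string{
				"service.beta.kubernetes.io/aws-load-balancer-type": "nlb",
			},
		},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "game-frontend"},
			Ports: []corev1.ServicePort{{
				Name:       "grpc",
				Port:       50504,
				TargetPort: intstr.FromInt(50504),
				Protocol:   corev1.ProtocolTCP,
			}},
		},
	}

	created, err := clientset.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		log.Fatalf("create service: %v", err)
	}
	log.Printf("created Service %s", created.Name)
}
```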
Performance Efficiency
Because Amazon ECR is integrated directly with Amazon EKS, it simplifies the delivery of the Agones and Open Match containers deployed on the Amazon EKS cluster. Amazon ECR eliminates the need to operate and scale the infrastructure required to store and pull container images for the game containers and the Agones and Open Match components, and it scales with your deployment regardless of how many pods the Amazon EKS clusters run. Pushing and pulling images in the same Region where the Amazon EKS clusters live provides the best performance in terms of latency and data transfer costs.
Cost Optimization
The Amazon EKS worker nodes running game servers can be configured to run on Amazon EC2 Spot Instances, while the worker nodes running the Agones and Open Match components can use Savings Plans. Nodes that run the game servers and the Kubernetes kube-system components can also use AWS Graviton processor-based instances.
You can save up to 90% on the compute costs for game server hosting when using Spot Instances to run game servers. You can optimize the compute costs for Agones and Open Match containers by up to 72% with Graviton instances covered by Savings Plans.
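As one illustration of combining these options, a Spot, Graviton-based managed node group could be requested through the EKS API as sketched below, assuming the AWS SDK for Go v2; the cluster name, node role, subnets, and instance types are hypothetical, and the Guidance itself provisions node groups through Terraform.

```go
// Hedged sketch: creating a Spot, Graviton (ARM64) managed node group.
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/eks"
	"github.com/aws/aws-sdk-go-v2/service/eks/types"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatalf("load AWS config: %v", err)
	}
	client := eks.NewFromConfig(cfg)

	// Hypothetical cluster, role, and subnets: Spot capacity for the game
	// server nodes and an ARM64 AMI type for Graviton instances.
	_, err = client.CreateNodegroup(context.TODO(), &eks.CreateNodegroupInput{
		ClusterName:   aws.String("agones-gameservers-1"),
		NodegroupName: aws.String("gameservers-spot-arm64"),
		NodeRole:      aws.String("arn:aws:iam::111122223333:role/eksNodeRole"),
		Subnets:       []string{"subnet-0123456789abcdef0"},
		CapacityType:  types.CapacityTypes("SPOT"),  // Spot Instances for game servers
		AmiType:       types.AMITypes("AL2_ARM_64"), // Graviton-based nodes
		InstanceTypes: []string{"m6g.large", "m6g.xlarge"},
		ScalingConfig: &types.NodegroupScalingConfig{
			MinSize:     aws.Int32(1),
			DesiredSize: aws.Int32(2),
			MaxSize:     aws.Int32(10),
		},
	})
	if err != nil {
		log.Fatalf("create node group: %v", err)
	}
	log.Println("requested Spot/Graviton managed node group")
}
```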
Sustainability
The AWS customer carbon footprint tool allows you to track and measure the carbon emissions associated with your Amazon EKS clusters and reduce the impact of worker nodes to support sustainable workloads.
With this data, you can manage and track carbon emissions to prevent waste and work toward sustainable computing, for example by optimizing Amazon EKS worker node usage at the Region level and reacting quickly when carbon emissions goals are not being met.
Implementation Resources
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Related Content
[Title]
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.