AWS Database Blog

IPFS on AWS, Part 2: Deploy a production IPFS cluster on Amazon EKS

This series of posts provides a comprehensive introduction to running IPFS (InterPlanetary File System) on AWS:

  • In Part 1, we introduce the IPFS concepts and test IPFS features on an Amazon Elastic Compute Cloud (Amazon EC2) instance
  • In Part 2, we propose a reference architecture and build an IPFS cluster on Amazon Elastic Kubernetes Service (Amazon EKS)
  • In Part 3, we deploy an NFT smart contract using Amazon Managed Blockchain and illustrate the use of IPFS to store NFT-related data in a decentralized way

In this second part of our series, we deploy a production IPFS cluster on Amazon EKS. The IPFS cluster is deployed step by step so you can get an understanding of all the required components. These steps can be further automated as part of an infrastructure as code (IaC) strategy. We tried to limit the dependencies between the different sections, so you can pause and resume implementing the solution after any section. To keep the commands as light as possible, the snippets are not all idempotent. All the steps are to be performed by a single user with sufficient permissions, but a more granular separation of duties can be applied using AWS Identity and Access Management (IAM).

Solution overview

The following diagram illustrates the architecture we will deploy.

[Architecture diagram: IPFS cluster on Amazon EKS]

The following are some key elements to consider:

  • IPFS and the IPFS cluster (as designated in the diagram) are two different components running on different pods.
  • The IPFS server exposes different services with different requirements. First, the swarm port (4001) needs to be reachable by other nodes on the IPFS network. The IPFS gateway could be accessed directly from the internet, but we chose to protect it behind an Application Load Balancer with Amazon Cognito authentication. Finally, the API should only be accessible to the IPFS cluster.
  • Two Elastic IPs are assigned to the Network Load Balancer in front of the IPFS pods. This allows the IPFS servers to advertise these IP addresses to the other IPFS peers on the IPFS network (therefore contributing to the overall health of the IPFS network).
  • The IPFS cluster component is load balanced and only accessible from the jumpbox.
  • The architecture spans two Availability Zones, but it can easily be extended to three.
  • Amazon EKS persistent storage is provided by Amazon EFS One Zone file systems through the Amazon EFS CSI driver.
  • This architecture does not include a content delivery network (CDN) component, but instead promotes the development of IPFS-compliant applications to globally distribute files through the IPFS network. If you have dependencies on classic web application clients connecting from multiple geographies, you may want to use Amazon CloudFront, and authenticate users with a Cognito@Edge solution such as the one documented in the following GitHub repo.

We walk you through the following high-level steps:

  1. Create a VPC using Amazon Virtual Private Cloud (Amazon VPC).
  2. Create a jumpbox.
  3. Set up Amazon EKS.
  4. Generate a certificate.
  5. Prepare an Amazon Cognito user pool.
  6. Create two Elastic IPs.
  7. Prepare manifests.
  8. Create an Amazon EFS storage class.
  9. Deploy IPFS servers.
  10. Deploy the IPFS cluster.
  11. Test the cluster.

Create a VPC

We start by creating a VPC with two public subnets and two private subnets.

  1. On the Amazon VPC console, choose Create VPC.
  2. For Resources to create, select VPC and more.
  3. For Name tag auto-generation, select Auto-generate and enter ipfs.
  4. Enter 10.0.0.0/22 for IPv4 CIDR block.
  5. For IPv6 CIDR block, select No IPv6 CIDR block.
  6. For Tenancy, choose Default.
  7. Specify the number of Availability Zones (for this post, two) and subnets (two public and two private).
  8. Under Customize subnets CIDR blocks, enter 10.0.0.0/24, 10.0.1.0/24, 10.0.2.0/24, and 10.0.3.0/24 for the CIDR blocks for your subnets.
  9. For NAT gateways, select 1 per AZ.
  10. For VPC endpoints, select None.
  11. For DNS options, select Enable DNS hostnames and Enable DNS resolution.
  12. Choose Create VPC.
  13. After the VPC is created, go to the Subnets page on the Amazon VPC console.
  14. Select each public subnet individually and on the Actions menu, choose Edit subnet settings.
  15. Select Enable auto-assign public IPv4 address to allow automatic IP allocation, and choose Save.
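
If you prefer to verify the result from the command line (for example, from CloudShell), the following sketch lists the subnets of the new VPC. It assumes the Name tags that the VPC wizard auto-generates from the ipfs prefix (ipfs-vpc, ipfs-subnet-public1-..., and so on):

# Look up the VPC by its auto-generated Name tag
VPC_ID=$(aws ec2 describe-vpcs --filters Name=tag:Name,Values=ipfs-vpc --query 'Vpcs[0].VpcId' --output text)

# List the subnets with their CIDR blocks and the auto-assign public IPv4 setting
aws ec2 describe-subnets --filters Name=vpc-id,Values=$VPC_ID \
  --query 'Subnets[].{Name:Tags[?Key==`Name`]|[0].Value,CIDR:CidrBlock,AutoAssignPublicIp:MapPublicIpOnLaunch}' \
  --output table

# The console step above can also be performed from the CLI:
# aws ec2 modify-subnet-attribute --subnet-id <public-subnet-id> --map-public-ip-on-launch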

Create a jumpbox

We will use the EC2 Instance Connect method to connect to the jumpbox, so we first need to look up the IP address range assigned to this service in your Region. Choose the CloudShell icon in the console navigation bar and enter the following command:

curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r ".prefixes[] | select(.region==\"$AWS_REGION\") | select(.service==\"EC2_INSTANCE_CONNECT\") | .ip_prefix"

Take note of the returned IP address range (18.202.216.48/29 in the case of the Ireland region, for example).

  1. On the Amazon EC2 console, choose Instances in the navigation pane.
  2. Choose Launch an instance and create an instance with the following parameters:
    • For Name, enter jumpbox.
    • For Amazon Machine Image (AMI), choose Ubuntu Server 22.04 LTS (HVM), SSD Volume Type.
    • For Architecture, choose 64-bit (x86).
    • For Instance type, choose t2.micro.
    • For Key pair (login), choose Proceed without key pair.
    • For Network settings, choose Edit:
      • Set the VPC to the VPC you previously created.
      • Set Subnet to the first public subnet.
      • Enable Auto-assign public IP.
      • For Source type (Security group rule 1), choose Custom and enter the IP address range previously looked up.
    • Keep all other default parameters, and choose Launch instance.
  3. After you create the instance, on the Instances page, select the instance and choose Connect.
  4. Connect with the EC2 Instance Connect method.
    You will see a prompt similar to the following in a new tab:

    ubuntu@<ipfs_instance_private_address>:~$
  5. If you do not already have access keys for your user, create access keys (refer to Get your access keys).
  6. Run the following commands to install jq, unzip, kubectl, and the AWS CLI v2. Refer to Setting up the AWS CLI if you’re not familiar with the AWS CLI configuration process.
    sudo apt-get update && \
    sudo apt-get install -y jq unzip && \
    sudo apt-get install -y ca-certificates curl && \
    curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg && \
    echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
    sudo apt-get update && \
    sudo apt-get install -y kubectl && \
    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
    unzip awscliv2.zip && \
    sudo ./aws/install && \
    aws configure
    

All command-line instructions in the rest of this post should be run from an EC2 Instance Connect session.
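
Before continuing, you can optionally confirm that the tools are installed and that the AWS CLI credentials work:

aws sts get-caller-identity
kubectl version --client
jq --version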

Set up Amazon EKS

To set up Amazon EKS, complete the following steps.

  1. Configure the AWS Identity and Access Management (IAM) roles required by Amazon EKS: a cluster service role named eksClusterRole and a node IAM role named AmazonEKSNodeRole (refer to Amazon EKS cluster IAM role and Amazon EKS node IAM role).
  2. On the Amazon EC2 console, choose Security groups in the navigation pane and create a security group with the following parameters:
    • For Name, enter ipfs-eks-control-plane-sg.
    • For Description, enter IPFS EKS control plane security group.
    • For VPC, choose the ipfs-vpc previously created.
    • For Inbound rules, add a rule allowing all traffic from the jumpbox’s security group.
    • Keep all other default parameters.
  3. On the Amazon EKS console, create a new cluster with the following parameters:
    • For Name, enter ipfs-cluster.
    • For Kubernetes version, choose 1.27.
    • For Cluster service role, choose eksClusterRole.
    • For VPC, choose ipfs-vpc created previously.
    • For Subnets, choose the two private subnets.
    • For Security group, choose ipfs-eks-control-plane-sg.
    • For Cluster endpoint access, choose Private.
    • Keep all other default parameters.
  4. When the cluster is ready, choose Add node group under Compute, and create a managed node group with the following parameters:
    • For Name, enter ipfs-nodes.
    • For Node IAM role, choose AmazonEKSNodeRole.
    • For AMI type, choose Amazon Linux 2 (AL2_x86_64).
    • For Instance type, choose t3.medium.
    • For Desired size, choose 2.
    • For Minimum size, choose 2.
    • For Maximum size, choose 4.
    • For Subnets, choose the two private subnets.
  5. After the managed node group is created, configure kubectl to access the cluster:
    REGION=$(aws configure get region) && \
    aws eks update-kubeconfig --region $REGION --name ipfs-cluster && \
    kubectl get nodes
  6. Install helm (for more information, refer to Using Helm with Amazon EKS):
    curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh && \
    chmod 700 get_helm.sh && \
    ./get_helm.sh
  7. Install eksctl (refer to Installing or updating eksctl):
    curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp && \
    sudo mv /tmp/eksctl /usr/local/bin
  8. Install the AWS Load Balancer Controller (refer to Installing the AWS LoadBalancer Controller add-on):
    ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text) && \
    REGION=$(aws configure get region) && \
    curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.4/docs/install/iam_policy.json && \
    aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam_policy.json && \
    eksctl utils associate-iam-oidc-provider --region=$REGION --cluster=ipfs-cluster --approve && \
    eksctl create iamserviceaccount \
    --cluster=ipfs-cluster \
    --namespace=kube-system \
    --name=aws-load-balancer-controller \
    --role-name "AmazonEKSLoadBalancerControllerRole" \
    --attach-policy-arn=arn:aws:iam::$ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy \
    --override-existing-serviceaccounts \
    --approve && \
    helm repo add eks https://aws.github.io/eks-charts && \
    helm repo update && \
    helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
    -n kube-system \
    --set clusterName=ipfs-cluster \
    --set serviceAccount.create=false \
    --set serviceAccount.name=aws-load-balancer-controller

    Check that the aws-load-balancer-controller deployment is ready:

    kubectl get deployment -n kube-system aws-load-balancer-controller
  9. Tag the two public subnets so they can be automatically discovered by the AWS Load Balancer Controller (see Subnet Auto Discovery for more details):
    • For Key, enter kubernetes.io/role/elb.
    • For Value, enter 1.
  10. Similarly, tag the two private subnets:
    • For Key, enter kubernetes.io/role/internal-elb.
    • For Value, enter 1.
  11. Install the Amazon EFS driver (refer to Amazon EFS CSI driver):
    ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text) && \
    REGION=$(aws configure get region) && \
    curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/docs/iam-policy-example.json && \
    aws iam create-policy \
    --policy-name AmazonEKS_EFS_CSI_Driver_Policy \
    --policy-document file://iam-policy-example.json && \
    eksctl create iamserviceaccount \
    --cluster ipfs-cluster \
    --namespace kube-system \
    --name efs-csi-controller-sa \
    --attach-policy-arn arn:aws:iam::$ACCOUNT_ID:policy/AmazonEKS_EFS_CSI_Driver_Policy \
    --approve \
    --region $REGION && \
    helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/ && \
    helm repo update && \
    helm upgrade -i aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
    --namespace kube-system \
    --set image.repository=602401143452.dkr.ecr.$REGION.amazonaws.com/eks/aws-efs-csi-driver \
    --set controller.serviceAccount.create=false \
    --set controller.serviceAccount.name=efs-csi-controller-sa

    Check that the Amazon EFS CSI driver has started:

    kubectl get pod -n kube-system -l "app.kubernetes.io/name=aws-efs-csi-driver,app.kubernetes.io/instance=aws-efs-csi-driver"
  12. Create EFS filesystems and mount targets:
    REGION=$(aws configure get region) && \
    AZ_1=${REGION}a && \
    AZ_2=${REGION}b && \
    aws efs create-file-system --region $REGION --availability-zone-name $AZ_1 --performance-mode generalPurpose --tags 'Key=Name,Value=efs1' && \
    aws efs create-file-system --region $REGION --availability-zone-name $AZ_2 --performance-mode generalPurpose --tags 'Key=Name,Value=efs2' && \
    FSID_1=$(aws efs describe-file-systems --query 'FileSystems[?Tags[?Key == `Name`&& Value == `efs1`]] | [0].FileSystemId' --output text) && \
    FSID_2=$(aws efs describe-file-systems --query 'FileSystems[?Tags[?Key == `Name`&& Value == `efs2`]] | [0].FileSystemId' --output text) && \
    subnetid1=$(aws ec2 describe-subnets \
    --filters "Name=tag:Name, Values=ipfs-subnet-private1-$AZ_1" \
    --query 'Subnets[0].SubnetId' \
    --output text) && \
    subnetid2=$(aws ec2 describe-subnets \
    --filters "Name=tag:Name, Values=ipfs-subnet-private2-$AZ_2" \
    --query 'Subnets[0].SubnetId' \
    --output text) && \
    security_group_id=`aws eks describe-cluster --name ipfs-cluster | jq -r ".cluster.resourcesVpcConfig.clusterSecurityGroupId"` && \
    aws efs create-mount-target \
    --file-system-id $FSID_1 \
    --subnet-id $subnetid1 \
    --security-groups $security_group_id && \
    aws efs create-mount-target \
    --file-system-id $FSID_2 \
    --subnet-id $subnetid2 \
    --security-groups $security_group_id

    If you receive an error indicating that the EFS One Zone storage class isn’t available in a specific Availability Zone, you can use EFS Standard, use another Region, or update the procedure to use specific Availability Zones.
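
    Mount target creation takes a moment. As an optional check before moving on, the following sketch (reusing the FSID_1 and FSID_2 variables set above) shows the state of both mount targets, which should be available:

    for fs in $FSID_1 $FSID_2; do
      aws efs describe-mount-targets --file-system-id $fs \
        --query 'MountTargets[].{MountTargetId:MountTargetId,AZ:AvailabilityZoneName,State:LifeCycleState}' \
        --output table
    done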

Generate a certificate

For instructions on creating a certificate for your domain, refer to Requesting a public certificate. If you don’t have a domain or want to create a new domain managed by AWS, you can also register a domain name using Amazon Route 53. For more details, refer to Registering and managing domains using Amazon Route 53.

When done, record the domain name that you used and the ARN of the certificate:

DOMAIN_NAME=<your certificate domain name e.g. ipfs-gateway.example.com>
CERTIFICATE_ARN=<arn:aws:acm:<region>:xxxxxxxxxxxx:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx>
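
Optionally, confirm that the certificate has been issued before it is attached to the Application Load Balancer later on:

# Status should be ISSUED
aws acm describe-certificate --certificate-arn $CERTIFICATE_ARN \
  --query 'Certificate.{Domain:DomainName,Status:Status}' --output table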

Prepare an Amazon Cognito user pool

To protect access to the IPFS gateway, we will implement an Application Load Balancer that authenticates users against an Amazon Cognito user pool. To create the user pool, along with its domain, app client, and a test user, use the following code:

# Create User Pool
aws cognito-idp create-user-pool --pool-name ipfs-user-pool
USER_POOL_ID=$(aws cognito-idp list-user-pools --max-results 10 --query 'UserPools[?Name == `ipfs-user-pool`].Id' --output text)

# Create Domain
ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
COGNITO_DOMAIN=ipfs-gateway-$ACCOUNT_ID
aws cognito-idp create-user-pool-domain --user-pool-id $USER_POOL_ID --domain $COGNITO_DOMAIN

# Create User Pool Client
aws cognito-idp create-user-pool-client --user-pool-id $USER_POOL_ID --client-name ipfs-user-pool-app-client --callback-urls=https://$DOMAIN_NAME/oauth2/idpresponse --generate-secret --supported-identity-providers COGNITO --explicit-auth-flows ALLOW_USER_PASSWORD_AUTH ALLOW_REFRESH_TOKEN_AUTH --allowed-o-auth-flows-user-pool-client --allowed-o-auth-flows code --allowed-o-auth-scopes email openid profile

# Create Test User
aws cognito-idp admin-create-user --user-pool-id $USER_POOL_ID --username testuser --temporary-password testuserP4ssw0rd#
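
The temporary password forces a password change at first sign-in. If you prefer to keep the test credentials unchanged for this walkthrough, you can optionally make the password permanent:

aws cognito-idp admin-set-user-password --user-pool-id $USER_POOL_ID \
  --username testuser --password 'testuserP4ssw0rd#' --permanent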

Create two Elastic IPs

Create your Elastic IPs with the following code:

aws ec2 allocate-address --tag-specifications 'ResourceType=elastic-ip, Tags=[{Key=Name,Value=ipfs-eip-1}]'
aws ec2 allocate-address --tag-specifications 'ResourceType=elastic-ip, Tags=[{Key=Name,Value=ipfs-eip-2}]'
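
You can confirm the allocations and note the public IP addresses and allocation IDs, which will be substituted into the Kubernetes manifests in the next section:

aws ec2 describe-addresses --filters Name=tag:Name,Values=ipfs-eip-1,ipfs-eip-2 \
  --query 'Addresses[].{Name:Tags[?Key==`Name`]|[0].Value,PublicIp:PublicIp,AllocationId:AllocationId}' \
  --output table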

Prepare manifests

We have prepared Kubernetes manifest templates corresponding to the presented architecture. Update them with the identifiers of the resources that you have created:

  1. If you haven’t already, record the ARN of the certificate:
    CERTIFICATE_ARN=<arn:aws:acm:<region>:xxxxxxxxxxxx:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx>
  2. Configure all environment variables (you may want to double-check that they are all properly set before going further):
    ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text) && \
    REGION=$(aws configure get region) && \
    AZ_1=${REGION}a && \
    AZ_2=${REGION}b && \
    EIP_1=$(aws ec2 describe-addresses --filters Name=tag:Name,Values=ipfs-eip-1 --query Addresses[0].PublicIp --output text) && \
    EIP_2=$(aws ec2 describe-addresses --filters Name=tag:Name,Values=ipfs-eip-2 --query Addresses[0].PublicIp --output text) && \
    EIP_ALLOC_1=$(aws ec2 describe-addresses --filters Name=tag:Name,Values=ipfs-eip-1 --query Addresses[0].AllocationId --output text) && \
    EIP_ALLOC_2=$(aws ec2 describe-addresses --filters Name=tag:Name,Values=ipfs-eip-2 --query Addresses[0].AllocationId --output text) && \
    USER_POOL_ID=$(aws cognito-idp list-user-pools --max-results 10 --query 'UserPools[?Name == `ipfs-user-pool`].Id' --output text) && \
    USER_POOL_ARN=arn:aws:cognito-idp:$REGION:$ACCOUNT_ID:userpool/$USER_POOL_ID && \
    USER_POOL_CLIENT_ID=$(aws cognito-idp list-user-pool-clients --user-pool-id $USER_POOL_ID --query 'UserPoolClients[0].ClientId' --output text) && \
    COGNITO_DOMAIN=ipfs-gateway-$ACCOUNT_ID && \
    FSID_1=$(aws efs describe-file-systems --query 'FileSystems[?Tags[?Key == `Name`&& Value == `efs1`]] | [0].FileSystemId' --output text) && \
    FSID_2=$(aws efs describe-file-systems --query 'FileSystems[?Tags[?Key == `Name`&& Value == `efs2`]] | [0].FileSystemId' --output text)
  3. Prepare the manifests:
    # efs-storageclass.yaml manifest
    wget https://aws-blogs-artifacts-public.s3.amazonaws.com/artifacts/DBBLOG-3086/manifest/efs-storageclass_template.yaml && \
    cat efs-storageclass_template.yaml | \
    sed -e s/FSID_1/$FSID_1/ | \
    sed -e s/FSID_2/$FSID_2/ \
    > efs-storageclass.yaml
    
    # Prepare ipfs.yaml manifest
    wget https://aws-blogs-artifacts-public.s3.amazonaws.com/artifacts/DBBLOG-3086/manifest/ipfs_template.yaml && \
    cat ipfs_template.yaml | \
    sed -e s#AZ_1#$AZ_1# | \
    sed -e s#AZ_2#$AZ_2# | \
    sed -e s#EIP_1#$EIP_1# | \
    sed -e s#EIP_2#$EIP_2# | \
    sed -e s#EIP_ALLOC_1#$EIP_ALLOC_1# | \
    sed -e s#EIP_ALLOC_2#$EIP_ALLOC_2# | \
    sed -e s#USER_POOL_ARN#$USER_POOL_ARN# | \
    sed -e s#USER_POOL_CLIENT_ID#$USER_POOL_CLIENT_ID# | \
    sed -e s#COGNITO_DOMAIN#$COGNITO_DOMAIN# | \
    sed -e s#CERTIFICATE_ARN#$CERTIFICATE_ARN# \
    > ipfs.yaml
    
    # Prepare ipfs-cluster.yaml manifest
    wget https://aws-blogs-artifacts-public.s3.amazonaws.com/artifacts/DBBLOG-3086/manifest/ipfs-cluster_template.yaml && \
    cat ipfs-cluster_template.yaml | \
    sed -e s/AZ_1/$AZ_1/ | \
    sed -e s/AZ_2/$AZ_2/ \
    > ipfs-cluster.yaml
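
Before applying the manifests, a quick check that the substitutions ran against every template can save some debugging (a sketch; note that an empty environment variable leaves a blank value rather than a leftover placeholder, so also review the generated files):

grep -nE 'AZ_1|AZ_2|EIP_1|EIP_2|EIP_ALLOC|USER_POOL|COGNITO_DOMAIN|CERTIFICATE_ARN|FSID_1|FSID_2' \
  efs-storageclass.yaml ipfs.yaml ipfs-cluster.yaml \
  || echo "No leftover placeholders found"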

Create an EFS storage class

Create your EFS storage class with the following code:

kubectl create -f efs-storageclass.yaml
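
You can verify that the storage class resources defined in efs-storageclass.yaml are now registered:

kubectl get storageclass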

Deploy IPFS servers

Complete the following steps:

  1. Deploy two independent IPFS servers in two different Availability Zones:
    wget https://aws-blogs-artifacts-public.s3.amazonaws.com/artifacts/DBBLOG-3086/configmap/001-update-config.sh && \
    mkdir -p configmap/container-init.d/ && \
    mv 001-update-config.sh configmap/container-init.d/ && \
    kubectl create configmap ipfs-config-script --from-file=configmap/container-init.d/ && \
    kubectl create -f https://aws-blogs-artifacts-public.s3.amazonaws.com/artifacts/DBBLOG-3086/manifest/ipfs-pvc.yaml && \
    kubectl create -f ipfs.yaml
  2. Wait for the Application Load Balancer to be provisioned, and record its DNS address:
    ALB_DNS=$(kubectl get ingress/ipfs-gateway-ingress -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  3. Create a CNAME for your DOMAIN_NAME pointing to this DNS address.
  4. Validate that you can now connect to the IPFS gateway using a URL similar to https://<DOMAIN_NAME>/ipfs/Qme7ss3ARVgxv6rXqVPiikMJ8u2NLgmgszg13pYrDKEoiu
  5. Connect with user name testuser and password testuserP4ssw0rd#.
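
If the gateway does not respond, a few kubectl checks can help narrow down the issue (a sketch; apart from ipfs-gateway-ingress, resource names depend on the downloaded manifests):

kubectl get pods                          # the IPFS pods should be Running
kubectl get ingress ipfs-gateway-ingress  # ADDRESS should show the ALB DNS name
kubectl get service                       # services created by ipfs.yaml, including the Network Load Balancer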

Deploy the IPFS cluster

To deploy the IPFS cluster, complete the following steps:

  1. Download the ipfs-cluster-service binary that we will use to generate the ipfs-cluster configuration:
    # Download ipfs-cluster-service
    wget https://dist.ipfs.tech/ipfs-cluster-service/v1.0.6/ipfs-cluster-service_v1.0.6_linux-amd64.tar.gz && \
    tar xzf ipfs-cluster-service_v1.0.6_linux-amd64.tar.gz && \
    rm ipfs-cluster-service_v1.0.6_linux-amd64.tar.gz
  2. Prepare the configuration of each cluster peer and save them as configmaps:
    # Generate config
    ./ipfs-cluster-service/ipfs-cluster-service init --consensus crdt && \
    mv /home/ubuntu/.ipfs-cluster ./config-peer-1 && \
    ./ipfs-cluster-service/ipfs-cluster-service init --consensus crdt && \
    mv /home/ubuntu/.ipfs-cluster ./config-peer-2 && \
    cp config-peer-1/service.json config-peer-2/ && \
    PEER1_ID=$(cat config-peer-1/identity.json | jq -r '.id') && \
    PEER2_ID=$(cat config-peer-2/identity.json | jq -r '.id') && \
    echo /dns4/ipfs-cluster-peer-1/tcp/9096/p2p/$PEER1_ID >> config-peer-2/peerstore && \
    echo /dns4/ipfs-cluster-peer-2/tcp/9096/p2p/$PEER2_ID >> config-peer-1/peerstore && \
    
    # Update both services.json files
    cat config-peer-1/service.json | \
    jq '.cluster.peername = "peer-1"' | \
    jq '.api.restapi.http_listen_multiaddress = "/ip4/0.0.0.0/tcp/9094"' | \
    jq '.ipfs_connector.ipfshttp.node_multiaddress = "/dns4/ipfs-rpc-api-peer-1/tcp/5001"' \
    > config-peer-1/service.json.new && mv config-peer-1/service.json.new config-peer-1/service.json && \
    cat config-peer-2/service.json | \
    jq '.cluster.peername = "peer-2"' | \
    jq '.api.restapi.http_listen_multiaddress = "/ip4/0.0.0.0/tcp/9094"' | \
    jq '.ipfs_connector.ipfshttp.node_multiaddress = "/dns4/ipfs-rpc-api-peer-2/tcp/5001"' \
    > config-peer-2/service.json.new && mv config-peer-2/service.json.new config-peer-2/service.json && \
    
    # Create configmaps
    kubectl create configmap ipfs-cluster-config-peer-1 --from-file=config-peer-1/ && \
    kubectl create configmap ipfs-cluster-config-peer-2 --from-file=config-peer-2/
  3. Deploy the ipfs-cluster:
    kubectl create -f https://aws-blogs-artifacts-public.s3.amazonaws.com/artifacts/DBBLOG-3086/manifest/ipfs-cluster-pvc.yaml && \
    kubectl create -f ipfs-cluster.yaml
  4. Check that you can see ** IPFS Cluster is READY ** in the logs of the ipfs-cluster-deployment pods.
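
The pod names are generated from the deployment names in ipfs-cluster.yaml, so the following is only a sketch; adjust the names to what kubectl returns:

kubectl get pods | grep ipfs-cluster
# Replace <pod-name> with one of the pod names returned above
kubectl logs <pod-name> | grep "IPFS Cluster is READY"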

Test the cluster

Now we can test the cluster.

  1. Download the ipfs-cluster-ctl binary:
    wget https://dist.ipfs.tech/ipfs-cluster-ctl/v1.0.6/ipfs-cluster-ctl_v1.0.6_linux-amd64.tar.gz && \
    tar xzf ipfs-cluster-ctl_v1.0.6_linux-amd64.tar.gz && \
    rm ipfs-cluster-ctl_v1.0.6_linux-amd64.tar.gz
  2. Validate you can connect to the cluster:
    IPFS_CLUSTER_NLB=$(kubectl get service/ipfs-cluster-api -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') && \
    ./ipfs-cluster-ctl/ipfs-cluster-ctl --host /dns4/$IPFS_CLUSTER_NLB/tcp/9094 id

    The Network Load Balancer forwards the request to one of the IPFS cluster peers. If you repeat the last command multiple times, you will connect to both pods and can verify that they see each other.

  3. Let’s pin a CID:
    curl -o aws.png https://a0.awsstatic.com/libra-css/images/logos/aws_logo_smile_179x109.png && \
    ./ipfs-cluster-ctl/ipfs-cluster-ctl --host /dns4/$IPFS_CLUSTER_NLB/tcp/9094 add aws.png && \
    ./ipfs-cluster-ctl/ipfs-cluster-ctl --host /dns4/$IPFS_CLUSTER_NLB/tcp/9094 pin ls
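
The add command prints the CID of the uploaded file, and pin ls should list it. As an optional follow-up (a sketch), you can review the pin status on each peer and retrieve the file through the authenticated gateway:

# Show the pin status reported by every cluster peer
./ipfs-cluster-ctl/ipfs-cluster-ctl --host /dns4/$IPFS_CLUSTER_NLB/tcp/9094 status

# After signing in with the test user in a browser, the file is also reachable at:
#   https://<DOMAIN_NAME>/ipfs/<CID printed by the add command>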

Congratulations! We now have a working IPFS cluster.

Lock down the jumpbox

To prevent further use of the AWS CLI from the jumpbox, delete the local credentials and configuration:

rm -f ~/.aws/credentials && \
rm -f ~/.aws/config
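
Because the access keys themselves remain valid, you can optionally also deactivate them in IAM (a sketch; run it from CloudShell or the console rather than the jumpbox, and substitute your own user name and access key ID):

aws iam update-access-key --user-name <your-iam-user> --access-key-id <your-access-key-id> --status Inactive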

Conclusion

In this post, we showed how to deploy a production IPFS cluster on Amazon EKS. In Part 3 of this series, we show how we can use the IPFS setup that we have created to store NFT-related data.

To explore how to make the proposed architecture serverless, you can also refer to the post Deploying IPFS Cluster using AWS Fargate and Amazon EFS One Zone.


About the Author

Guillaume Goutaudier is a Senior Partner Solutions Architect at AWS. He helps companies build strategic technical partnerships with AWS. He is also passionate about blockchain technologies, and a member of the Technical Field Community for blockchain.