AWS Open Source Blog

Integrating Amazon EFS with Podman running on Red Hat Enterprise Linux

This post was written by Mayur Shetty and Vani Eswarappa.

Podman is a daemonless, open source, Linux-native tool designed for finding, running, building, sharing, and deploying applications using Open Containers Initiative (OCI) containers and container images on a Red Hat Enterprise Linux (RHEL) system. Like other container engines, such as Docker, Podman depends on an OCI-compliant container runtime to interact with the operating system and create running containers. Podman manages the container ecosystem, which includes pods, containers, container images, and container volumes, using the libpod library.

Containers controlled by Podman can be run either by root or by a non-privileged user. This makes Podman an alternative to Docker when you need increased security, user ID (UID) separation using namespaces, and integration with systemd. If you are running a container-based application with Podman, you may have requirements to scale your compute and storage layers.

In this post, we explain how to scale a Podman-based container application at the compute and storage layers using Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic File System (Amazon EFS).

Amazon EFS provides a scalable, fully managed, elastic NFS file system that lets you share file data without provisioning or managing storage infrastructure. It can be used with Amazon Web Services (AWS) cloud services and on-premises resources, and it is built to scale on demand to petabytes without disrupting applications. With Amazon EFS, you can grow and shrink your file systems automatically as you add and remove files, reducing the need to provision and manage capacity to accommodate growth.

Let’s walk through instructions for deploying a sample web application—a photo gallery application—using Podman on a RHEL Amazon EC2 instance, where the images displayed by the website are stored on an Amazon EFS file system mounted on Amazon EC2 instances across multiple Availability Zones, providing scalability and high availability (HA) to the application.

Prerequisites

For this tutorial, you’ll need the following prerequisites:

  • Amazon Virtual Private Cloud (Amazon VPC) with public subnet created.
  • RHEL EC2 instance launched within your Amazon VPC.
  • Podman already installed on the Amazon EC2 instance.
  • Amazon EFS file system created within the Amazon VPC, with the Amazon EC2 instance's security group allowed to reach the EFS mount targets over NFS (see the example command after this list).
  • For the photo gallery, we will use the Linuxserver.io photoshow container image.
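If the EFS mount targets use their own security group, allow inbound NFS traffic from the instance's security group. The following is a minimal sketch using the AWS CLI; both security group IDs are placeholders for your own values:

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123efsmounttarget \
    --protocol tcp \
    --port 2049 \
    --source-group sg-0123ec2instance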

Solution overview

We will walk through the following steps.

  1. The Podman container will run in a RHEL EC2 instance and use the local filesystem on the Amazon EC2 instance to store the images (no HA).
  2. We will do what we did in the previous step, but this time we will store the images on an Amazon EFS file system (storage-level HA).
  3. We will make the solution HA by adding a second Amazon EC2 instance on another Availability Zone and adding an Application Load Balancer in front of it (compute-level HA added).
  4. We will take care of scaling the solution by adding an Auto Scaling Group (scaling added).

Step 1: Run Podman container on RHEL EC2 instance with local file system on Amazon EC2 instance (no HA for compute or storage)

To start, you must connect to the Amazon EC2 instance through the SSH client using an SSH key pair.
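For example, assuming a key pair file named my-key.pem and the instance's public IP address or DNS name, the connection looks like this:

ssh -i my-key.pem ec2-user@<EC2_PUBLIC_IP>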

Next, download the photoshow container image. Select the registry from the list of options that works for you:

[ec2-user@ip-172-31-58-150 ~]$ sudo podman pull linuxserver/photoshow

[ec2-user@ip-172-31-58-150 ~]$ sudo podman images

REPOSITORY                      TAG     IMAGE ID      CREATED     SIZE

ghcr.io/linuxserver/photoshow   latest  eb0ad054517e  5 days ago  222 MB

Here we are making sure that there are no containers running on the Amazon EC2 instance:

[ec2-user@ip-172-31-58-150 ~]$ sudo podman ps -a

CONTAINER ID  IMAGE   COMMAND  CREATED  STATUS  PORTS   NAMES

Next, we’ll create a directory on the host machine with subdirectories for the application's configuration, pictures, and thumbnails.

[ec2-user@ip-172-31-58-150 ~]$ pwd

/home/ec2-user

[ec2-user@ip-172-31-58-150 ~]$ mkdir photo

[ec2-user@ip-172-31-58-150 ~]$ mkdir -p photo/config

[ec2-user@ip-172-31-58-150 ~]$ mkdir -p photo/pictures

[ec2-user@ip-172-31-58-150 ~]$ mkdir -p photo/thumb

[ec2-user@ip-172-31-58-150 ~]$ ls -l photo/

total 8

drwxrwxr-x. 7 ec2-user ec2-user   64 May 20 00:16 config

drwxrwxr-x. 2 ec2-user ec2-user 4096 May 20 03:12 pictures

drwxrwxr-x. 4 ec2-user ec2-user   32 May 20 00:16 thumb

Now we run the container, using the -v option of podman run to specify each source directory on the host and where we want it mounted inside the container.

[ec2-user@ip-172-31-58-150 ~]$ sudo podman run -d   \
  --name=photoshow   \
  -e PUID=1000   -e PGID=1000   -e TZ=Europe/London   -p 8080:80   \
  -v /home/ec2-user/photo/config:/config:Z   \
  -v /home/ec2-user/photo/pictures:/Pictures:Z   \
  -v /home/ec2-user/photo/thumb:/Thumbs:Z   \
  --restart unless-stopped   linuxserver/photoshow

The :Z suffix on each volume tells Podman to relabel the mounted directories with an SELinux context that allows the container to access them.
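If you want to verify the relabeling, check the SELinux context of one of the host directories (an optional check; the exact label can vary by system):

[ec2-user@ip-172-31-58-150 ~]$ ls -dZ /home/ec2-user/photo/pictures

After a successful relabel, the directory should carry a container-specific type such as container_file_t. Now confirm that the container is running: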

[ec2-user@ip-172-31-58-150 ~]$ sudo podman ps -a

CONTAINER ID  IMAGE              COMMAND  CREATED    STATUS        PORTS             NAMES

0d011645d26d  linuxserver/photoshow       9 minutes ago  Up 9 minutes ago  0.0.0.0:8080->80/tcp  photoshow

Next, we download a few images into the /home/ec2-user/photo/pictures directory on the host EC2 instance using wget:

[ec2-user@ip-172-31-58-150 ~]$ cd photo/pictures/

[ec2-user@ip-172-31-58-150 ~]$ wget http://<IMAGE_URL>

Listing the Podman volumes shows that no named volumes were created, because the container uses bind mounts:

[ec2-user@ip-172-31-58-150 images]$ sudo podman volume ls

Now, let’s check the container with the inspect command to see what was mounted. Under the "Mounts" element, you should see /Thumbs mounted to the EC2 host’s /home/ec2-user/photo/thumb folder, /Pictures mounted to the /home/ec2-user/photo/pictures folder, and /config mounted to the /home/ec2-user/photo/config folder.

[ec2-user@ip-172-31-58-150 images]$ sudo podman inspect photoshow

…….

…….

…….

      "Mounts": [

        {

            "Type": "bind",

            "Name": "",

            "Source": "/home/ec2-user/photo/thumb",

            "Destination": "/Thumbs",

            "Driver": "",

            "Mode": "",

            "Options": [

                "rbind"

            ],

            "RW": true,

            "Propagation": "rprivate"

        },

        {

            "Type": "bind",

            "Name": "",

            "Source": "/home/ec2-user/photo/config",

            "Destination": "/config",

            "Driver": "",

            "Mode": "",

            "Options": [

                "rbind"

            ],

            "RW": true,

            "Propagation": "rprivate"

        },

        {

            "Type": "bind",

            "Name": "",

            "Source": "/home/ec2-user/photo/pictures",

            "Destination": "/Pictures",

            "Driver": "",

            "Mode": "",

            "Options": [

                "rbind"

            ],

            "RW": true,

            "Propagation": "rprivate"

        }

    ],

…….

…….
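Scanning the full JSON output works, but a Go template can pull out just the mount mappings. Here is a compact sketch of the same check, with field names taken from the inspect output above:

[ec2-user@ip-172-31-58-150 images]$ sudo podman inspect --format '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' photoshow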

At this point, we can log into the container and check the directories that we created:

[ec2-user@ip-172-31-58-150 ~]$ sudo podman exec -it photoshow /bin/bash

root@0d011645d26d:/# ls

We can also check the images that we downloaded into the /Pictures folder:

root@0d011645d26d:/# ls -l /Pictures /config /Thumbs

/Pictures:

total 2456

-rw-rw-r-- 1 abc users  70015 Apr  9 00:56 Argentina_WC.png
-rw-rw-r-- 1 abc users 173472 May 20 03:53 Brasil_WC.png
-rw-rw-r-- 1 abc users 879401 May 20 03:53 FIFA_World_Cup.jpg
-rw-rw-r-- 1 abc users  81582 Jul 17  2018 France_WC.png
-rw-rw-r-- 1 abc users 124180 May 20 03:53 Germany_WC.png
-rw-rw-r-- 1 abc users  84614 Jul 17  2018 Italia_WC.png
-rw-rw-r-- 1 abc users 126259 Sep 13  2019 Korea-Japan_WC.png
-rw-rw-r-- 1 abc users 157670 Jul 17  2018 Mexico_WC.png
-rw-rw-r-- 1 abc users 125000 May 20 03:53 Qatar_WC.png
-rw-rw-r-- 1 abc users 188832 May 20 03:53 Russia_WC.png
-rw-rw-r-- 1 abc users 248316 May 20 03:53 SouthAfrica_WC.png
-rw-rw-r-- 1 abc users 104383 May 19 10:36 Spain_WC.png
-rw-rw-r-- 1 abc users  98021 Jul 18  2018 USA_WC.png
-rw-rw-r-- 1 abc users  26622 Jul 18  2018 WGermany_WC.png

/Thumbs:

total 4

drwxr-x--- 2 abc users   67 May 20 03:54 Conf

drwxr-x--- 2 abc users 4096 May 20 04:13 Thumbs

/config:

total 0

drwxr-xr-x 2 abc users 38 May 20 01:16 keys

drwxr-xr-x 4 abc users 54 May 20 02:00 log

drwxrwxr-x 3 abc users 42 May 20 01:16 nginx

drwxr-xr-x 2 abc users 44 May 20 01:16 php

drwxrwxr-x 3 abc users 41 May 20 01:16 www

root@0d011645d26d:/#
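Back on the host, a quick check with curl (assuming it is installed on the instance) confirms that the application answers on port 8080:

[ec2-user@ip-172-31-58-150 ~]$ curl -I http://localhost:8080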

Next, we can go to http://<EC2 Public IP>:8080 to check the photo gallery, as shown in the following image:

images showing up in an online gallery (logos for world cup and fifa events)

This result is cool, but is the data highly available? In other words: If the Amazon EC2 instance goes down, can we still access our images? In Step 2, we’ll look at how to address this.

Step 2: Run Podman container on RHEL EC2 instance with Amazon EFS file system (HA for storage)

In the previous scenario, our application was using the local filesystem. To make the data highly available, now we’ll use Amazon EFS to store our images. You will see how to set up Amazon EFS and use it with our application container running in Podman.

Create an Amazon EFS filesystem, called demo in this example.

screenshot of the "create file system" button with the new demo filesystem in EFS

Next, update the /etc/fstab with the Amazon EFS entry as shown in the following code example, and then mount the Amazon EFS filesystem on the Amazon EC2 host.
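Before editing /etc/fstab, make sure the NFS client utilities are installed on the RHEL host and that the mount point exists. A minimal sketch:

[ec2-user@ip-172-31-58-150 ~]$ sudo yum install -y nfs-utils
[ec2-user@ip-172-31-58-150 ~]$ sudo mkdir -p /mnt/efs_drive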

[ec2-user@ip-172-31-58-150 pictures]$ cat /etc/fstab

#

# /etc/fstab

# Created by anaconda on Sat Oct 31 05:00:52 2020

#

# Accessible filesystems, by reference, are maintained under '/dev/disk/'.

# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.

#

# After editing this file, run 'systemctl daemon-reload' to update systemd

# units generated from this file.

#

UUID=949779ce-46aa-434e-8eb0-852514a5d69e /                   xfs defaults    0 0

fs-33656734.efs.us-west-2.amazonaws.com:/   /mnt/efs_drive nfs  defaults,vers=4.1  0 0

Mount the /mnt/efs_drive:

[ec2-user@ip-172-31-58-150 pictures]$ sudo mount /mnt/efs_drive

Verify the mount point with this command:

[ec2-user@ip-172-31-58-150 pictures]$ df -h

Filesystem                             Size  Used Avail Use% Mounted on

devtmpfs                               3.8G 0  3.8G   0% /dev

tmpfs                                  3.8G  168K  3.8G   1% /dev/shm

tmpfs                                  3.8G   17M  3.8G   1% /run

tmpfs                                  3.8G 0  3.8G   0% /sys/fs/cgroup

/dev/xvda2                              10G  9.6G  426M  96% /

tmpfs                                  777M   68K  777M   1% /run/user/1000

fs-33656734.efs.us-west-2.amazonaws.com:/  8.0E     0  8.0E   0% /mnt/efs_drive
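Before starting the container, recreate the directory layout from Step 1 on the EFS file system and copy the images over. A sketch, assuming the pictures from Step 1 are still under /home/ec2-user/photo:

[ec2-user@ip-172-31-58-150 ~]$ sudo mkdir -p /mnt/efs_drive/photo/{config,pictures,thumb}
[ec2-user@ip-172-31-58-150 ~]$ sudo cp /home/ec2-user/photo/pictures/* /mnt/efs_drive/photo/pictures/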

Run the container using Podman:

[ec2-user@ip-172-31-58-150 ~]$ sudo podman run -d   \
  --name=photoshow   \
  -e PUID=1000   -e PGID=1000   -e TZ=Europe/London   -p 8080:80   \
  --mount type=bind,source=/mnt/efs_drive/photo/config,destination=/config   \
  --mount type=bind,source=/mnt/efs_drive/photo/pictures/,destination=/Pictures   \
  --mount type=bind,source=/mnt/efs_drive/photo/thumb/,destination=/Thumbs   \
  --restart unless-stopped   ghcr.io/linuxserver/photoshow

95c78443d893334c4d5538dc03761f828d5e7a59427c87ae364ab1e7f6d30e15

[ec2-user@ip-172-31-58-150 ~]$

[ec2-user@ip-172-31-58-150 ~]$ sudo podman ps -a

CONTAINER ID  IMAGE                      COMMAND  CREATED     STATUS         PORTS             NAMES

95c78443d893  ghcr.io/linuxserver/photoshow       18 seconds ago  Up 17 seconds ago  0.0.0.0:8080->80/tcp  photoshow

[ec2-user@ip-172-31-58-150 ~]$

At this point, you can inspect the container:

[ec2-user@ip-172-31-58-150 ~]$ sudo podman inspect photoshow

……………

……………

……………

    "Mounts": [

        {

            "Type": "bind",

            "Name": "",

            "Source": "/mnt/efs_drive/photo/config",

            "Destination": "/config",

            "Driver": "",

            "Mode": "",

            "Options": [

                "rbind"

            ],

            "RW": true,

            "Propagation": "rprivate"

        },

        {

            "Type": "bind",

            "Name": "",

            "Source": "/mnt/efs_drive/photo/pictures",

            "Destination": "/Pictures",

            "Driver": "",

            "Mode": "",

            "Options": [

                "rbind"

            ],

            "RW": true,

            "Propagation": "rprivate"

        },

        {

            "Type": "bind",

            "Name": "",

            "Source": "/mnt/efs_drive/photo/thumb",

            "Destination": "/Thumbs",

            "Driver": "",

            "Mode": "",

            "Options": [

                "rbind"

            ],

            "RW": true,

            "Propagation": "rprivate"

        }

    ],

……………

……………

……………

This setup looks fine, but what if the Amazon EC2 instance goes down? We have data in an Amazon EFS filesystem, but how are the clients going to access it? This situation will be addressed in the next step.

Step 3: Add EC2 instance in second Availability Zone and an Application Load Balancer to distribute traffic (HA for compute)

For this step, we want to add high availability to both our compute and storage. Our storage is already highly available because of Amazon EFS, but now we’ll make the Amazon EC2 instance highly available, too.

To do this, we first create an Amazon Machine Image (AMI) of our running Amazon EC2 instance (as shown in the following figure) and bring up a new Amazon EC2 instance in a different Availability Zone. Both of our instances will now access the same data that is stored on Amazon EFS.

screenshot of new AMI Name photoshow
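The AMI can also be created from the command line. A sketch with a placeholder instance ID:

aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name photoshow \
    --description "Photoshow on RHEL with Podman and Amazon EFS"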

Next, we’ll add an Application Load Balancer to distribute the client requests to the two Amazon EC2 instances in the two Availability Zones.

dashboard for create load balancer

screenshot of dashboard with photoshow with button to create load balancer and listener ID

The Application Load Balancer forwards the requests to the target group that includes the two EC2 instances hosting the application containers.

screenshot of target group 1

screenshot showing two registered target groups: demo and demo-ha
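The equivalent setup with the AWS CLI looks roughly like the following; the VPC, subnet, security group, and instance IDs are placeholders, and the ARNs come from the create commands:

aws elbv2 create-target-group \
    --name photoshow-tg --protocol HTTP --port 8080 \
    --vpc-id vpc-0123456789abcdef0 --target-type instance

aws elbv2 register-targets \
    --target-group-arn <TARGET_GROUP_ARN> \
    --targets Id=i-instance1 Id=i-instance2

aws elbv2 create-load-balancer \
    --name photoshow-lb \
    --subnets subnet-az1 subnet-az2 \
    --security-groups sg-0123456789abcdef0

aws elbv2 create-listener \
    --load-balancer-arn <LOAD_BALANCER_ARN> \
    --protocol HTTP --port 8080 \
    --default-actions Type=forward,TargetGroupArn=<TARGET_GROUP_ARN>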

Next, enter the DNS name of the load balancer with port 8080 in the web browser (photoshow-lb-207083175.us-west-2.elb.amazonaws.com:8080) to connect to the application.

screenshot showing logos that were seen in Figure 1

So far so good, but in the next step, we’ll look at what happens when our requests increase and we need additional resources to handle the client requests.

Step 4: Scale the solution using the Auto Scaling group (auto scaling added)

This is where automatic scaling comes into the picture. We added an Auto Scaling group called photoshow-asg with desired capacity of 1, minimum capacity of 1, and maximum capacity of 3 to handle any increase in the user requests.
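A rough CLI equivalent, assuming a launch template named photoshow-lt built from the photoshow AMI and the target group created earlier:

aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name photoshow-asg \
    --launch-template LaunchTemplateName=photoshow-lt \
    --min-size 1 --max-size 3 --desired-capacity 1 \
    --vpc-zone-identifier "subnet-az1,subnet-az2" \
    --target-group-arns <TARGET_GROUP_ARN>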

We tested to confirm that the photo gallery could still be accessed from the URL and tested the scaling of the Amazon EC2 instances based on the load:

screenshot of the photo gallery still accessible through the load balancer URL as the Amazon EC2 instances scale with the load

This approach works, but we don’t want to hand out the DNS name of a load balancer to family and friends to check out photos. This is where Amazon Route 53 helps. We have a domain registered with Route 53, and we’re going to use it to access the photo gallery.

To set this up, go to Route 53, choose Hosted zones, select <your registered domain>, and create a CNAME record pointing to the load balancer’s DNS name:

screenshot with registered domain name blurred out
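The same record can be created with the AWS CLI; the hosted zone ID and record name below are placeholders for your own values:

aws route53 change-resource-record-sets \
    --hosted-zone-id Z0123456789EXAMPLE \
    --change-batch '{
      "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": "photos.example.com",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{"Value": "photoshow-lb-207083175.us-west-2.elb.amazonaws.com"}]
        }
      }]
    }'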

illustration of architecture described in this post: RHEL running Podman connected to Amazon EFS

Conclusion

In this post, we described a highly available and scalable solution using Podman and Amazon EFS on Red Hat Enterprise Linux 8. This is a supported configuration because both RHEL 7 and RHEL 8 are supported with Amazon EFS: if there is an issue with RHEL or Podman, Red Hat supports it, and if there is an issue with Amazon EFS, AWS supports the customer.

If you also want to simplify agile development with embedded continuous integration and continuous deployment (CI/CD), add a container catalog and image streams, or integrate your existing pipeline, then look into Red Hat OpenShift on AWS. You can choose between self-hosted Red Hat OpenShift Container Platform, the managed offering of Red Hat OpenShift Dedicated, Red Hat OpenShift Service on AWS (ROSA), or a mixture of these services that suits your organization’s needs to manage your Kubernetes clusters with one solution.

Mayur Shetty

Mayur Shetty is a Principal Solution Architect with Red Hat’s Global Partners and Alliances (GPA) organization, working with AWS as a partner. He has been with Red Hat for more than five years, where he was also part of the OpenStack Tiger Team. Prior to Red Hat, he worked as a Senior Solution Architect driving solutions with OpenStack Swift, Ceph, and other object storage software.

Vani Eswarappa

Vani Eswarappa is a Principal Architect at AWS with experience in containers, AI/ML, and enterprise architecture. As a technical leader, Vani works with AWS customers and partners on their cloud journey to meet business needs. In her spare time, she enjoys spending time with her family outdoors and exploring new locations.