AWS Database Blog

Securing Amazon RDS and Aurora PostgreSQL database access with IAM authentication

AWS provides two managed PostgreSQL options: Amazon RDS for PostgreSQL and Amazon Aurora PostgreSQL. Both support IAM authentication for managing access to your database. You can associate database users with IAM users and roles to manage user access to all databases from a single location, which avoids issues caused by permissions being out of sync on different RDS/Aurora instances.

This post walks through two common scenarios in which you can use IAM authentication: accessing the RDS/Aurora database from an application hosted in Amazon EC2 in the same AWS account, and accessing from a different account. The post also discusses best practices related to these scenarios.

This post uses an Aurora PostgreSQL environment, but the solution also works with RDS for PostgreSQL. You can enable IAM authentication as part of cluster provisioning, or modify the cluster using the CLI or console after it is created. IAM database authentication is available in PostgreSQL versions 9.6.9 and 10.4 or higher.
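For example, if you are provisioning a new cluster from the CLI, enabling IAM authentication at creation time might look like the following sketch; the identifier, credentials, and engine version are placeholders, so adjust them for your environment:

aws rds create-db-cluster \
--db-cluster-identifier <cluster-identifier> \
--engine aurora-postgresql \
--engine-version 10.7 \
--master-username <master username> \
--master-user-password <master password> \
--enable-iam-database-authentication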

Prerequisites

Before getting started, complete the following prerequisites:

  • You need two AWS accounts (referred to as the DB account and the management account in this post).
  • Launch an Aurora PostgreSQL DB cluster running PostgreSQL 10.7 in the DB account, or use an existing Aurora cluster.
  • Launch two EC2 instances, one in each AWS account.
  • Install a psql client on each EC2 instance and set up connectivity to your Aurora cluster (see the example following this list).
  • Configure the AWS CLI with your credentials. You need administrator IAM privileges in both AWS accounts; you can use the AWS managed policy AdministratorAccess.
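For example, on an Amazon Linux 2 instance, a minimal client setup might look like the following; the package name varies by distribution and PostgreSQL version, so treat this as an illustrative sketch:

# Install the PostgreSQL client tools (Amazon Linux 2)
sudo amazon-linux-extras install postgresql10
# Configure AWS CLI credentials for the account you are working in
aws configure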

This post assumes that you are familiar with working with RDS/Aurora PostgreSQL and EC2 environments.

Setting up the environment

To set up your environment, complete the following steps:

  1. Log in to the EC2 instance in the DB account. Use the psql client to connect to the Aurora PostgreSQL cluster and create the example schema pgdemo. See the following commands:
    create database demo;
    \c demo
    create schema pgdemo;
    set search_path='pgdemo';
    
    CREATE TABLE products (
        product_no integer PRIMARY KEY,
        name text,
        price numeric
    );
    
    CREATE TABLE orders (
        order_id integer PRIMARY KEY,
        product_no integer references products,
        quantity integer,
        order_date timestamp(0) without time zone
    );
    
    
    insert into products values( 1, 'Apple', 1000);
    insert into products values( 2, 'Samsung', 800);
    insert into products values( 3, 'Lenovo', 500);
    insert into products values( 4, 'Oneplus', 800);
    insert into products values( 5, 'Xiaomi', 300);
    
    insert into orders
     SELECT s, floor(random() * 5 + 1)::int ,
     floor(random() * 10 + 1)::int ,
      (timestamp '2019-09-20 10:00:00' + 
      random() * (timestamp '2019-09-22 20:00:00' - timestamp '2019-09-20 10:00:00'))::timestamp(0) without time zone
     from generate_series(1,20) as s ;
    commit; 
  2. Set up database roles for access control.
    PostgreSQL lets you grant permissions directly to database users. However, as a best practice, follow role-based access control with the least privilege model by creating roles with a specific set of permissions based on your application requirements. For more information about configuring users, roles, and groups in PostgreSQL databases, see Managing PostgreSQL users and roles. This post creates the following roles:
    oltp_ro_role – Has read-only permissions, such as select privilege on the tables
    oltp_rw_role – Has select, delete, insert, and update privileges on the tables
    To create both roles, enter the following commands:

    -- Create oltp_ro_role
    
    create role oltp_ro_role;
    grant usage on schema pgdemo to oltp_ro_role;
    grant select on all tables in schema pgdemo to oltp_ro_role;
    
    -- Create oltp_rw_role 
    create role oltp_rw_role;
    grant usage on schema pgdemo to oltp_rw_role;
    grant select,update,delete,insert on all tables in schema pgdemo to oltp_rw_role;
    
  3. Enable IAM authentication using the following CLI command, if it is not already enabled:
    aws rds modify-db-cluster \
    --db-cluster-identifier <cluster-identifier> \
    --apply-immediately \
    --enable-iam-database-authentication
    

    Replace <cluster-identifier> with your actual cluster identifier.

  4. Enter the following CLI command to verify that RDS IAM authentication is enabled:
    aws rds describe-db-clusters --db-cluster-identifier <cluster-identifier>  --query 'DBClusters[].[IAMDatabaseAuthenticationEnabled]' --output table

    The command should return true.

  5. Create database users and grant them the rds_iam role to enable IAM authentication (the corresponding IAM policy is created in the next step). Enter the following commands:
    -- Create Read-only (RO) user and grant read-only role
    create user oltp_ro_user with login;
    grant rds_iam to oltp_ro_user;
    grant oltp_ro_role to oltp_ro_user;
    -- Create Read-write(RW) user and grant read-write role
    create user oltp_rw_user with login;
    grant rds_iam TO oltp_rw_user;
    grant oltp_rw_role to oltp_rw_user;
  6. After creating the database users, create an IAM policy to grant connect access to the database users created in Step 5. Create the file oltp_rw_user.json with the following code:
    {
       "Version": "2012-10-17",
       "Statement": [
          {
             "Effect": "Allow",
             "Action": [
                 "rds-db:connect"
             ],
             "Resource": [
    "arn:aws:rds-db:<AWS Region>:<AWS Account ID>:dbuser:<DbClusterResourceId>/oltp_rw_user"
             ]
          }
       ]
    }

    Resource parameters are in the following format:
    arn:aws:rds-db:region:account-id:dbuser:DbClusterResourceId/db-user-name
    Replace region, account-id, DbClusterResourceId, and db-user-name with the appropriate values. You can find the cluster resource identifier using the console or the following CLI command:

    aws rds describe-db-clusters --db-cluster-identifier <cluster-identifier> --query "DBClusters[0].[DbClusterResourceId]"	
  7. When the policy document is ready, create the IAM policy using the following code:
    aws iam create-policy --policy-name pgdemo-oltp-rw-user-policy --policy-document file://oltp_rw_user.json  
  8. Using the previous code as a template, create the policy document and IAM policy for oltp_ro_user, as shown in the following example.
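    The following is a minimal sketch of the read-only policy; the file name oltp_ro_user.json and the policy name pgdemo-oltp-ro-user-policy are illustrative, so adjust them to match your naming conventions:

    {
       "Version": "2012-10-17",
       "Statement": [
          {
             "Effect": "Allow",
             "Action": [
                 "rds-db:connect"
             ],
             "Resource": [
                 "arn:aws:rds-db:<AWS Region>:<AWS Account ID>:dbuser:<DbClusterResourceId>/oltp_ro_user"
             ]
          }
       ]
    }

    aws iam create-policy --policy-name pgdemo-oltp-ro-user-policy --policy-document file://oltp_ro_user.json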

You have set up your environment and are ready to test the IAM authentication feature.

Accessing the database from an application hosted in EC2 in the same account

A typical production environment includes multiple applications connecting to a single Aurora database cluster for use cases such as OLTP, batch jobs, and reporting. These applications possess distinct security, operational, and performance requirements. Therefore, they are generally hosted on their own EC2 instances. In this example, you can manage the database access via IAM roles. Using this concept, you can also lock down access to a particular database environment. For example, an application hosted on the EC2 instance (App1) can connect only to a specific database instance, prod-db1.

Using IAM roles for EC2 also eliminates the need to use long-term AWS access keys that you have to manage manually or programmatically. For more information, see IAM Roles for Amazon EC2.

To achieve this functionality, this post uses EC2 instance profiles. For more information, see Using Instance Profiles.

The following diagram depicts the high-level architecture of this solution, in which an OLTP application needs read/write access to the demo database in the Aurora cluster. The OLTP application is hosted on an EC2 Auto Scaling group with an instance role mapped to the IAM role pgdemo-oltp-rw-role. The reporting application runs on another EC2 Auto Scaling group with the instance role pgdemo-oltp-ro-role.

To illustrate this architecture, assume that a client application such as psql is installed on the EC2 instance and requires read/write access to the database. To grant this application access, complete the following steps in the DB account:

  1. Create the following trust policy and save it in a text file named ec2-role-trust-policy.json:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "Service": "ec2.amazonaws.com"},
          "Action": "sts:AssumeRole"
        }
      ]
    } 
  2. Create an IAM role pgdemo-oltp-rw-role and attach the policy pgdemo-oltp-rw-user-policy to the role. See the following commands:
    #Create IAM role
    aws iam create-role --role-name pgdemo-oltp-rw-role --assume-role-policy-document file://ec2-role-trust-policy.json
    #Attach the iam policy to AWS EC2 instance role
    aws iam attach-role-policy --role-name pgdemo-oltp-rw-role --policy-arn arn:aws:iam::<AWS Account id>:policy/pgdemo-oltp-rw-user-policy
  3. Attach this IAM role to the EC2 instance via an instance profile, using the following commands:
    # Create Instance profile
    aws iam create-instance-profile --instance-profile-name pgdemo-db-access-profile
    # Attach the IAM role to this instance profile
    aws iam add-role-to-instance-profile --instance-profile-name pgdemo-db-access-profile --role-name pgdemo-oltp-rw-role
    # Attach the Instance profile to the EC2 instance
    aws ec2 associate-iam-instance-profile --instance-id <EC2 Instance id> --iam-instance-profile Name="pgdemo-db-access-profile"
  4. Log in to the EC2 instance and generate a temporary authentication token using the aws rds generate-db-auth-token command. See the following code:
    export RDSHOST="<Aurora Cluster Endpoint>"
    export PGPASSWORD="$(aws rds generate-db-auth-token --hostname $RDSHOST --port 5432 --region <AWS Region> --username oltp_rw_user)"
    

    Note: You can view the generated token using the echo $PGPASSWORD command. For more information, see IAMDBAuth.Connecting.AWSCLI.PostgreSQL.

  5. Use the psql utility to connect to the Aurora cluster using the generated token.
    psql -h $RDSHOST -p 5432 -d demo -U oltp_rw_user
    -- Verify the database user
    select current_user;
     current_user
    --------------
     oltp_rw_user
    -- Let’s insert a row in products table
    demo=> insert into pgdemo.products values(6,'Nokia',600);
    INSERT 0 1

With IAM authentication, an application such as psql can log in to the database and perform DML actions based on the database level role mapping. You can extend the same procedure to provide read-only access to the reporting applications.
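For example, assuming an analogous read-only IAM policy and instance role are attached to the reporting instances (the read-only names used earlier in this post are assumptions you would adapt), the reporting application could connect as follows:

export RDSHOST="<Aurora Cluster Endpoint>"
# Generate an authentication token for the read-only database user
export PGPASSWORD="$(aws rds generate-db-auth-token --hostname $RDSHOST --port 5432 --region <AWS Region> --username oltp_ro_user)"
# Connect as the read-only user; SELECT statements succeed, writes are rejected by the role's permissions
psql -h $RDSHOST -p 5432 -d demo -U oltp_ro_user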

Accessing the database from an application hosted in EC2 in a different account

This next example assumes a multi-account scenario in which DBAs and IT operators centrally manage the RDS/Aurora databases in the DB account, and the security team hosts its tools in another account (the management account). The security team needs access to all databases for audit and monitoring purposes. You can use IAM authentication instead of provisioning a database user/role in every database in the DB account.

DBAs can deploy the least privilege model to provide access from the DB account via the IAM cross-account role feature.

The following diagram depicts the high-level architecture of this solution. It consists of a role named pg-mgmt-role in the management account that is associated with the EC2 instance via an instance profile. Every database in the DB account that must be monitored has a database user pgmonitor configured with the necessary privileges. You also set up a pg-monitor-role in the DB account to connect to the Aurora clusters using IAM database authentication.

To build this solution, complete the following steps:

Steps to be performed in Management AWS Account

  1. Create the following trust policy and save it in a text file named ec2-role-trust-policy.json:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "Service": "ec2.amazonaws.com"},
          "Action": "sts:AssumeRole"
        }
      ]
    } 
  2. Create an IAM role pg-mgmt-role with the EC2 trust policy, which is used by the tools and applications to connect to various databases hosted in the DB account. See the following code:
    aws iam create-role --role-name pg-mgmt-role --assume-role-policy-document file://ec2-role-trust-policy.json
  3. Create a policy document pg-mgmt-policy.json, which grants permission to assume the role pg-monitor-role in the DB account being monitored. See the following code:
    {
      "Version": "2012-10-17",
      "Statement": {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::<DB AWS Account ID>:role/pg-monitor-role"
      }
    }
  4. Create the policy and attach it to pg-mgmt-role in the management account. See the following code:
    # create IAM policy 
    aws iam create-policy --policy-name pg-mgmt-policy --policy-document file://pg-mgmt-policy.json  
    # Attach the iam policy to the pg-mgmt-role
    aws iam attach-role-policy --role-name pg-mgmt-role --policy-arn arn:aws:iam::<Management AWS Account id>:policy/pg-mgmt-policy
  5. Attach the pg-mgmt-role IAM role to the EC2 instance via an EC2 instance profile. See the following commands:
    aws iam create-instance-profile --instance-profile-name pgmonitor-access-profile
    aws iam add-role-to-instance-profile --instance-profile-name pgmonitor-access-profile --role-name pg-mgmt-role
    aws ec2 associate-iam-instance-profile --instance-id <EC2 instance id> --iam-instance-profile Name="pgmonitor-access-profile"

Steps to be performed in DB AWS Account

  1. Create a database user named pgmonitor in each Aurora cluster that must be monitored, with the appropriate permissions for IAM authentication and monitoring. See the following commands:
    create user pgmonitor with login;
    grant rds_iam to pgmonitor;
    
    -- The pg_monitor role is available starting with PostgreSQL 10
    grant pg_monitor to pgmonitor;
  2. Create the policy document mgmt-account-trust-policy.json with the following code:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::<Management AWS Account ID>:role/pg-mgmt-role"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
  3. Create the IAM role pg-monitor-role with the trust policy document mgmt-account-trust-policy.json. See the following command:
    aws iam create-role --role-name pg-monitor-role --assume-role-policy-document file://mgmt-account-trust-policy.json
  4. Next, create an IAM policy to provide access to the Aurora clusters. Create the policy document pg-db-monitor-policy.json with the following code:
    {
       "Version": "2012-10-17",
       "Statement": [
          {
             "Effect": "Allow",
             "Action": [
                 "rds-db:connect"
             ],
             "Resource": [
                 "arn:aws:rds-db: <AWS Region>:<DB AWS Account ID>:dbuser:*/pgmonitor"
             ]
          }
       ]
    }

    Note that in this case, the DBAs are providing access to all the Aurora clusters in a particular AWS account and Region. To restrict access to a particular database cluster, specify the DbClusterResourceId in the Resource ARN: "arn:aws:rds-db:<AWS Region>:<AWS Account ID>:dbuser:<DbClusterResourceId>/pgmonitor"

    Create the IAM policy with the following command:

    aws iam create-policy --policy-name pg-db-monitor-policy --policy-document file://pg-db-monitor-policy.json
  5. Attach this policy to the IAM role pg-monitor-role. See the following command:
    aws iam attach-role-policy --role-name pg-monitor-role --policy-arn arn:aws:iam::<DB AWS Account ID>:policy/pg-db-monitor-policy
  6. After creating the roles, make sure that the EC2 instances in the management account can connect to the Aurora cluster by allowing their IP addresses in the destination security group, as shown in the following example. For more information, see Controlling Access with Security Groups. Alternatively, if you have configured VPC peering, make sure to configure the route tables and security groups appropriately. For more information, see A DB Instance in a VPC Accessed by an EC2 Instance in a Different VPC.
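    The exact rule depends on your network setup; the following is a minimal sketch that allows inbound PostgreSQL traffic from a single management EC2 instance (the security group ID and IP address are placeholders):

    # Allow inbound PostgreSQL (port 5432) traffic from the management EC2 instance
    aws ec2 authorize-security-group-ingress \
    --group-id <Aurora security group id> \
    --protocol tcp --port 5432 \
    --cidr <management EC2 instance IP>/32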

You have now set up all the roles and network access. However, you must configure the management application to connect to the databases. For this post, assume that this application is running on an EC2 instance. During role setup, you gave pg-mgmt-role permission to assume pg-monitor-role in the DB account and set up a trust relationship. If you are using an SDK, you usually make an STS call to get the target role's credentials and use them to perform your work. EC2 makes it easier to assume roles using instance profiles, as described in the first example.
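For reference, the manual approach with the CLI would look something like the following sketch (the session name is arbitrary); the instance profile and the CLI profile configured in the next steps perform this call for you:

# Assume the cross-account role in the DB account and return temporary credentials
aws sts assume-role \
--role-arn arn:aws:iam::<DB AWS Account ID>:role/pg-monitor-role \
--role-session-name pgmonitor-session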

Steps to be performed in Management AWS Account

  1. Log in to an EC2 instance in the management account, and create a new profile for the role in the .aws/config file. The following example creates a profile named instancecrossaccount that switches to the role pg-monitor-role in the DB account.
    $cat .aws/config
    [profile instancecrossaccount]
    role_arn = arn:aws:iam::<DB account>:role/pg-monitor-role
    credential_source = Ec2InstanceMetadata
  2. The configuration is now complete. Next, connect to the Aurora cluster running in the DB account from the EC2 instance:
    export RDSHOST="<Aurora Cluster Endpoint>"
    export PGPASSWORD="$(aws rds generate-db-auth-token --hostname $RDSHOST --port 5432 --region <AWS Region> --username pgmonitor --profile instancecrossaccount)"
    
    $psql -h $RDSHOST -p 5432 -d demo -U pgmonitor
    

    When this profile is invoked, the AWS CLI uses the EC2 instance profile credentials (retrieved from the instance metadata) to request temporary credentials for pg-monitor-role.

    You can now query pg_stat_activity every 10 seconds to identify the current activity in the database by running the following query:

    SELECT 
        pid
        ,datname
        ,usename
        ,application_name
        ,client_hostname
        ,client_port
        ,backend_start
        ,query_start
        ,query
        ,state
    FROM pg_stat_activity
    WHERE state = 'active' \watch 10

    The examples in this post used an EC2 client. You can also use IAM authentication from serverless applications such as AWS Lambda. For more information, see IAM role-based authentication to Amazon Aurora from serverless applications.

Operational best practices

We strongly recommend implementing IAM policies and roles according to best practices. For more information, see IAM Best Practices.

You can monitor the database login connections via IAM by enabling log_connections (set to 1) in the DB instance parameter group; an example CLI command follows the log output below. The following code is an output example from the logs:

2019-09-29 10:08:45 UTC:ec2-x.x.us-east-2.compute.amazonaws.com(35608):pgmonitor@pgbench:[24601]:LOG: connection authorized: user=pgmonitor database=pgbench SSL enabled (protocol=TLSv1.2, cipher=ECDHE-RSA-AES256-GCM-SHA384, bits=256, compression=off)

Failed login attempts report an error similar to the following:

2019-09-29 10:13:24 UTC:ec2-x.x.us-east-2.compute.amazonaws.com(35620):pgmonitor@pgbench:[29443]:LOG: pam_authenticate failed: Permission denied
2019-09-29 10:13:24 UTC:ec2-x.x.us-east-2.compute.amazonaws.com(35620):pgmonitor@pgbench:[29443]:FATAL: PAM authentication failed for user "pgmonitor"
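To enable this logging from the CLI, you can modify the custom parameter group attached to your DB instances; the following is a minimal sketch in which the parameter group name is a placeholder:

# Set log_connections=1 in the DB parameter group attached to your instances
aws rds modify-db-parameter-group \
--db-parameter-group-name <DB parameter group name> \
--parameters "ParameterName=log_connections,ParameterValue=1,ApplyMethod=immediate"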

For cross-account access monitoring, you can also identify the source of database connections with AWS CloudTrail logs. This is possible because the management account uses the AWS STS AssumeRole API. Make sure CloudTrail logs are enabled in the DB account. For more information, see Querying AWS CloudTrail Logs.
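For example, a quick way to confirm that a trail in the DB account is logging is to check its status; the trail name is a placeholder:

# List the trails in the account, then verify that the trail of interest is logging
aws cloudtrail describe-trails
aws cloudtrail get-trail-status --name <trail name>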

You can also query the logs via Amazon Athena. For example, you can identify details using the following query:

SELECT a.useridentity.accountid AS "source_account",
       a.eventtime, a.eventsource, a.eventname, a.awsregion, a.sourceipaddress, names.arn
FROM "default"."cloudtrail_logs" a
CROSS JOIN UNNEST(resources) AS t(names)
WHERE names.arn = 'arn:aws:iam::<DBaccountID>:role/pg-monitor-role'
ORDER BY a.eventtime
LIMIT 10;

Replace <DBaccountID> with the account being monitored.

The following screenshot shows that you received two requests from the accounts ending in 4934 and 1026. The output also includes sourceipaddress, which helps identify the source.

DBAs and IT operators should make sure that the IAM authentication access policy and database users are synchronized appropriately for each cluster. For example, if you remove or modify a database user, make sure to update the corresponding IAM policies. Also, periodically review the access permissions via IAM access advisor APIs. For example, if there is no activity from a particular IAM role or user, you can consider revoking rds-db:connect access. For more information, see Automate analyzing your permissions using IAM access advisor APIs.
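As a minimal sketch of that review, you can generate and retrieve service last-accessed data for the monitoring role with the access advisor APIs (the role ARN is a placeholder):

# Start an asynchronous job that analyzes when the role last accessed each service
aws iam generate-service-last-accessed-details --arn arn:aws:iam::<DB AWS Account ID>:role/pg-monitor-role
# Retrieve the results using the JobId returned by the previous command
aws iam get-service-last-accessed-details --job-id <JobId>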

Summary

This post discussed possible scenarios in which you can use IAM authentication to manage database access and best practices related to monitoring. You can use IAM authentication to centrally manage access to database resources, instead of managing access individually on each DB cluster. As always, AWS welcomes feedback, so please leave comments or questions.

About the Authors

Gowri Balasubramanian is a Principal Database Solutions Architect at Amazon Web Services. He works with AWS customers to provide guidance and technical assistance on both relational and NoSQL database services, helping them improve the value of their solutions when using AWS.

Amit Bansal is a Senior Consultant with the Professional Services team at Amazon Web Services. He focuses on database migrations to AWS and works with customers to design and implement Amazon RDS, Aurora, and Amazon Redshift architectures.