AWS Database Blog
Generate AWR reports for Amazon RDS for Oracle read replicas
Oracle database administrators rely on tools such as Oracle’s Automatic Workload Repository (AWR) report to identify and resolve issues that are occurring in a database. Amazon Relational Database Service (Amazon RDS) for Oracle allows you to create read replicas to offload read-only workloads. Because the instances are read-only, you can’t generate AWR reports on them directly to investigate issues such as slow response. As a result, DBAs are left with little insight when users report slow response at certain periods of the day.
In this post, we demonstrate a way to generate AWR reports on a read replica and identify the user or SQL that is impacting performance on the read replica.
Solution overview
This solution is based on Oracle’s Remote Management Framework (RMF). The RMF topology provides us with a centralized architecture wherein we can add all the participating database nodes (RDS for Oracle primary and read replica in our case). The nodes in the topology are designated as destinations that store the data and sources from which data is collected. In this post, the RDS for Oracle primary will be the destination and the read replica will be the source. Connections between the destination and the source will be via a public database link owned by the SYS$UMF user.
In the topology, one database node, called the destination, is responsible for storing and managing the performance data (AWR data) collected from the other nodes, called sources. A candidate destination is a source configured so that it can replace the original destination if the original destination becomes unavailable or is downgraded. A topology can have only one destination, and one or more candidate destinations. AWR snapshots run as a service on this topology, allowing us to collect AWR data from the source (read replica) on the destination (primary) database.
The architecture is as follows.
Within the AWS Cloud, we have an AWS Region. To isolate assets in the cloud, we deploy them in an Amazon Virtual Private Cloud (Amazon VPC). To have high availability and the ability to offload read operations, we deploy the primary instance in one Availability Zone and the read replica in another Availability Zone in the same Region. The read replica functionality is based on Oracle Active Data Guard.
We then create an RMF topology consisting of the primary and read replicas.
To test this solution, we create an RDS for Oracle primary database and a read replica. We also configure the SYS$UMF user (refer to the Oracle documentation on the DBMS_UMF package), create database links between the primary and read replica database, and create the RMF configuration. With these resources, we generate some read load on the read replica and then generate an AWR report. The steps to create the environment are documented in the following sections.
Prerequisites
The following prerequisites and considerations apply to this solution:
- Access to the AWS Management Console.
- Familiarity with Amazon RDS for Oracle.
- Familiarity with Oracle databases and Automatic Workload Repository (AWR).
- A bastion (or other) host with SQL Developer or any other similar GUI tool. If no GUI tool is available, the commands can be run from a command line client such as SQL*Plus.
- Awareness that the solution requires a public database link, which can be a security concern in some environments.
- This solution was tested with Amazon RDS for Oracle Enterprise Edition 19.0.
- This solution utilizes AWR reports, which are part of the Oracle Diagnostics Pack, a separately licensed product.
Create a primary database with a read replica
In this first step, we create an RDS for Oracle primary database with a read replica and confirm connectivity.
- On the Amazon RDS console, create a new custom option group with Amazon Simple Storage Service (Amazon S3) integration.
- Create a primary database via the Amazon RDS console, using the newly created option group.
- After the primary database has been created, create a read replica in a different Availability Zone.
- Confirm connectivity to both the primary and read replica. Connect to both databases from the bastion using either a GUI tool like SQL Developer or the command line. See Connecting to your RDS for Oracle DB instance.
- Get the host names of both databases with the following SQL and note them down:
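A minimal query for this, run on each instance, might look like the following:

```sql
-- Run on both the primary and the read replica; note down the host names
SELECT host_name, instance_name FROM v$instance;
```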
We use this later to confirm connectivity over the database links.
Configure the SYS$UMF user
Now we configure the SYS$UMF user, which will be used by the database links later. The SYS$UMF user is the default database user that has all the privileges to access the system-level RMF views and tables. All the AWR related operations in RMF can be performed only by the SYS$UMF user. The SYS$UMF user is locked by default and it must be unlocked before deploying the RMF topology.
- On the primary, check the status of the SYS$UMF user with the following SQL:
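A query along these lines shows the account status:

```sql
-- SYS$UMF is locked by default, so expect EXPIRED & LOCKED initially
SELECT username, account_status FROM dba_users WHERE username = 'SYS$UMF';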
- Unlock the user (if required):
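The unlock statement might look like the following; the password shown is a placeholder:

```sql
-- Demonstration password only; choose a secure password for your environment
ALTER USER "SYS$UMF" IDENTIFIED BY "Example_Passw0rd" ACCOUNT UNLOCK;
```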
The password used in this code is for demonstration purposes only. You should choose a secure password.
This user is typically locked. If it is open, skip the step to unlock the user.
- Confirm that the user is unlocked with the following SQL:
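Re-running the status check should now show the account as open:

```sql
-- ACCOUNT_STATUS should now be OPEN
SELECT username, account_status FROM dba_users WHERE username = 'SYS$UMF';
```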
- At this point, the SYS$UMF user should be open and unlocked, and you should have the password for the user.
Create database links
Next, we create a database link between the primary and read replica databases using the SYS$UMF user.
- On the primary, create a database link to the read replica with the following SQL. Change the SYS$UMF user’s password, the database host, port, and SID to match your environment:
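The statement might look like the following sketch; the link name, password, host, port, and SID are placeholders for your environment:

```sql
-- dblink_to_replica is an example name; replace the endpoint details
CREATE PUBLIC DATABASE LINK dblink_to_replica
  CONNECT TO "SYS$UMF" IDENTIFIED BY "Example_Passw0rd"
  USING '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=replica-endpoint.example.us-east-1.rds.amazonaws.com)(PORT=1521))(CONNECT_DATA=(SID=ORCL)))';
```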
- Confirm that the database link has been created:
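A query such as the following lists the link:

```sql
SELECT owner, db_link, username, host FROM dba_db_links;
```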
- Test connectivity to the read replica via the database link:
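Assuming the example link name used earlier, a simple remote query verifies connectivity:

```sql
-- Should return the read replica's host name noted earlier
SELECT host_name FROM v$instance@dblink_to_replica;
```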
- Create a database link to the primary with the following SQL. Change the SYS$UMF user’s password, the database host, port, and SID to match your environment:
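A sketch of the statement follows; because the read replica is read-only, this link is also created on the primary, and its definition replicates to the replica. The link name, password, and endpoint details are placeholders:

```sql
-- dblink_to_primary is an example name; the DDL replicates to the read replica
CREATE PUBLIC DATABASE LINK dblink_to_primary
  CONNECT TO "SYS$UMF" IDENTIFIED BY "Example_Passw0rd"
  USING '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=primary-endpoint.example.us-east-1.rds.amazonaws.com)(PORT=1521))(CONNECT_DATA=(SID=ORCL)))';
```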
- On the read replica, confirm that the database link has been created:
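The same dictionary query works on the replica:

```sql
SELECT owner, db_link, username, host FROM dba_db_links;
```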
- Test connectivity to the primary via the database link:
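Assuming the example link name, run the following on the read replica:

```sql
-- Run on the read replica; should return the primary's host name
SELECT host_name FROM v$instance@dblink_to_primary;
```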
Create the RMF configuration
To create the RMF configuration, complete the following steps:
- On the primary, configure the primary node with the following SQL:
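The call might look like this; 'aws_primary' is an example node name:

```sql
-- Register this instance as an RMF node (example node name)
exec DBMS_UMF.configure_node('aws_primary');
```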
- On the read replica, configure the read replica node:
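On the source, DBMS_UMF.configure_node also takes the name of the database link pointing back to the destination; the names below are the examples used throughout this post:

```sql
-- 'aws_replica' is the node name; the second argument is the link to the primary
exec DBMS_UMF.configure_node('aws_replica', 'dblink_to_primary');
```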
- On the primary, create the topology:
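For example, with a hypothetical topology name:

```sql
exec DBMS_UMF.create_topology('Topology_1');
```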
- Check the topology:
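The topology is visible in the following dictionary view:

```sql
SELECT * FROM dba_umf_topology;
```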
- To register the read replica, run the following SQL on the primary:
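A sketch of the registration call, using the example names from the previous steps; the last two arguments indicate that the node is a source and not a candidate destination:

```sql
-- topology, node name, link to the node, link back from the node,
-- as_source, as_candidate_destination
exec DBMS_UMF.register_node('Topology_1', 'aws_replica',
  'dblink_to_replica', 'dblink_to_primary', 'TRUE', 'FALSE');
```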
- Confirm the status of the configuration:
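The registration is visible in the following view:

```sql
SELECT * FROM dba_umf_registration;
```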
- Register the read replica to AWR on the primary:
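Using the example node name:

```sql
exec DBMS_WORKLOAD_REPOSITORY.register_remote_database(node_name => 'aws_replica');
```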
- Confirm the configuration:
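The links between the nodes can be inspected as follows:

```sql
-- Expect one row in each direction between the primary and the replica
SELECT * FROM dba_umf_link;
```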
You should see an entry from the primary to the replica and another from the replica to the primary.
Create a directory for AWR reports
Now we create a directory for the AWR reports. Because these databases are hosted on Amazon RDS for Oracle, we don’t have access to the underlying file structure. Therefore, we use the rdsadmin utility provided by Amazon RDS.
- Create a directory to store AWR reports on the primary with the following SQL:
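The directory name AWR_REPORTS is the example used throughout this post:

```sql
exec rdsadmin.rdsadmin_util.create_directory(p_directory_name => 'AWR_REPORTS');
```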
- Confirm that the directory was created on the primary:
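The directory appears in the data dictionary:

```sql
SELECT directory_name, directory_path
FROM dba_directories
WHERE directory_name = 'AWR_REPORTS';
```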
Configure the snapshot time on both servers
Next, we configure the snapshot time intervals on both the primary and the read replica.
- To check the AWR snapshot duration, run the following SQL on the primary:
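The snapshot settings for each registered database can be read from the following view:

```sql
-- One row per registered database (primary and read replica)
SELECT dbid, snap_interval, retention FROM dba_hist_wr_control;
```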
The typical AWR snapshot interval is 1 hour.
- For testing purposes, reduce the snapshot time to 15 minutes by executing the following SQL for each of the database ids in the preceding SQL:
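The call might look like the following; the dbid is a placeholder, and you repeat the call for each database id returned by the previous query:

```sql
-- Placeholder dbid; set the interval to 15 minutes for testing
exec DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(interval => 15, dbid => 1234567890);
```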
The database id in the preceding SQL is for demonstration purposes. You should change the value to the database id of your database.
- Confirm that the snapshots are being taken on both sides:
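Recent snapshots for both databases are visible as follows:

```sql
SELECT dbid, snap_id, begin_interval_time
FROM dba_hist_snapshot
ORDER BY begin_interval_time DESC;
```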
You should see two entries (one for the primary and one for the read replica) created every 15 minutes.
- In the event that you have a problem that is currently occurring, you can run the following command to take a manual snapshot:
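Using the example node name for the read replica:

```sql
exec DBMS_WORKLOAD_REPOSITORY.create_remote_snapshot('aws_replica');
```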
The name of the read replica (‘aws_replica’) in the preceding SQL is for demonstration purposes. You should change the value to the corresponding read replica that is of concern. This value can be ascertained with the following SQL:
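```sql
SELECT node_name FROM dba_umf_registration;
```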
Generate a synthetic workload on the read replica
Because the read replica is open only for read-only traffic, we need to generate a synthetic workload based on read (for example, SELECT) traffic.
If you don’t have objects that can be used to generate a load on the database, you can use the following SQL to generate activity. The SQL is based on the standard dictionary objects that should be present in any Oracle database.
For ease of identification, run the load such that it falls within the interval of one AWR report.
Use the following sample SQL to generate load:
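One illustrative approach (an assumption, not the post’s exact script) is a PL/SQL loop that repeatedly runs an expensive read-only query against dictionary views:

```sql
-- Hypothetical read-only load: repeated Cartesian joins on dictionary views
DECLARE
  l_count NUMBER;
BEGIN
  FOR i IN 1 .. 20 LOOP
    SELECT COUNT(*) INTO l_count
    FROM dba_objects a, dba_objects b
    WHERE ROWNUM <= 1000000;
  END LOOP;
END;
/
```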
Generate the AWR report
Now that we have created and configured the environment and generated a synthetic workload, we generate the AWR report for the time period of the synthetic workload. In a real-world scenario, you should generate the report for the time period that users of the system have complained about slow response.
- Log on to the read replica that you wish to run the AWR report on and execute the following command to confirm you are on the correct read replica. This is especially important if you have multiple read replicas.
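A query along these lines confirms where you are connected:

```sql
-- On a read replica, OPEN_MODE shows READ ONLY WITH APPLY
SELECT i.host_name, d.open_mode
FROM v$instance i, v$database d;
```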
- After you have confirmed that you are on the correct read replica, get the database id by executing the following SQL:
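```sql
SELECT dbid, name FROM v$database;
```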
- Log on to the primary, populate the database id from the previous SQL and execute the following SQL:
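The dbid below is a placeholder; substitute the value returned by the previous query:

```sql
-- List the snapshots recorded for the read replica's dbid
SELECT snap_id, begin_interval_time, end_interval_time
FROM dba_hist_snapshot
WHERE dbid = 1234567890
ORDER BY snap_id;
```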
- After you have identified the identifiers of the snapshots, run the following SQL on the primary to generate the AWR report. Again, because these databases are hosted on Amazon RDS for Oracle, we don’t have access to the underlying file structure, so we will use the AWS rdsadmin utility:
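The call might look like the following sketch. The snapshot IDs and dbid are placeholders, and the exact parameter list of rdsadmin.rdsadmin_diagnostic_util.awr_report can vary by rdsadmin version, so verify it against the AWS documentation:

```sql
-- begin snapshot, end snapshot, report type, target directory, dbid (placeholders)
exec rdsadmin.rdsadmin_diagnostic_util.awr_report(73, 74, 'html', 'AWR_REPORTS', 1234567890);
```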
“AWR_REPORTS” refers to the directory we created earlier to store the AWR reports. The start and stop AWR snapshot IDs as well as the database id (last parameter) are for demonstration purposes only. You should use the values that correspond to the period that you are interested in from your database.
- On the primary, list the file that contains the AWR report:
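The directory contents can be listed with the rds_file_util package:

```sql
SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => 'AWR_REPORTS'));
```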
- Create an SQL file to run the following commands. In this post, the file is called gen_awr.sql and is referenced on the command line.
The contents of gen_awr.sql are as follows:
- Run the gen_awr.sql file from the SQL command line:
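A sketch of what gen_awr.sql might contain, assuming it spools the server-side report from the AWR_REPORTS directory to a local HTML file; the report file name is a placeholder that you take from the listdir output:

```sql
-- gen_awr.sql: spool the report from the server directory to a local file
set heading off feedback off pages 0 lines 8000 trimspool on
spool awrrpt_73_74.html
SELECT * FROM TABLE(rdsadmin.rds_file_util.read_text_file(
  p_directory => 'AWR_REPORTS',
  p_filename  => 'awrrpt_73_74.html'));  -- placeholder: file name from listdir
spool off
```

You can then run it from the SQL*Plus command line with `@gen_awr.sql`.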
- Transfer the output file (named awrrpt_73_74.html in this example) using a client and view it in a browser.
- Confirm that you have generated the AWR report for the correct read replica by checking the information at the top of the AWR report.
- Review the AWR report to identify the SQL that was causing the poor performance reported by the users.
Conclusion
Read-only replicas are an effective way of offloading read-intensive workloads from your primary database. However, like any database, they can develop performance problems, and their read-only nature prevents a database administrator from directly generating AWR reports to analyze them. In this post, we showed you a solution that allows a database administrator to overcome this limitation and respond to performance issues with a familiar tool.
If you have any questions or suggestions, leave a comment.
About the Authors
Dean Capps is a Database Consultant at AWS and has experience providing database support on legacy, open systems, and cloud platforms. Dean is currently a member of the AWS Global Competency Centre, providing homogeneous and heterogenous database migration support to commercial and public sector clients.
Mohammad Waseem is a Database Consultant at AWS and works as a database migration specialist, helping and enabling customers to build highly available, cost-effective database solutions and migrate their database workloads to the AWS Cloud. Waseem works with AWS customers and partners to provide guidance and technical assistance on database projects, helping them improve the value of their solutions when using AWS.
Feng Cai is a Database Consultant at AWS. He has a wide background in providing database services to clients. Feng is currently a member of the AWS Global Competency Centre, providing homogeneous and heterogenous database migration support to customers.
Wajid Ali Mir is a Database Consultant at AWS. He has extensive database experience with customers ranging from banking, telecom, and cloud consultancy. Wajid is currently a member of the AWS Global Competency Centre, providing homogeneous and heterogenous database migration support to commercial clients.