AWS Database Blog

Introducing Group Replication plugin for active/active replication on Amazon RDS for MySQL

Amazon Relational Database Service (Amazon RDS) for MySQL now supports the Group Replication plugin for active/active replication. You can use the plugin to set up active/active replication between your RDS for MySQL database instances to provide continuous availability for your applications. Writing to two or more database instances in an active/active cluster can help maximize availability and reduce write latency. Using the Group Replication plugin, you can configure active/active replication across up to nine Amazon RDS for MySQL instances. The Group Replication plugin is available on Amazon RDS for MySQL 8.0.35 and higher versions.

In this post, we discuss MySQL Group Replication, its use cases, configuration, application-level considerations, and significance in modern database management.

Introduction to Group Replication

The MySQL Group Replication plugin is a community plugin built on an implementation of the Paxos distributed algorithm and allows up to nine database (DB) instances to be configured in an active/active setup. For any transaction to commit, a majority of the DB instances in the ‘group’ must agree on the order of a given transaction in the global sequence of transactions.

An active/active cluster set up using the Group Replication plugin on RDS for MySQL instances uses group replication in multi-primary mode. In multi-primary mode, all instances that are part of the cluster are set to read-write mode when joining the cluster. The multi-primary mode provides virtually synchronous replication, with built-in conflict detection and handling and consistency guarantees. If you configure Group Replication on an instance but it is unable to join the cluster successfully, the instance is automatically set to read-only mode. For more information, refer to Group Replication in the MySQL user guide.

The following image illustrates the Group Replication architecture:

Use cases for active/active replication using Group Replication plugin

In this section, we discuss some common use cases of active/active replication.

Continuous availability – In today’s fast-paced and highly competitive business landscape, organizations depend on databases and data systems to ensure uninterrupted access to mission-critical applications and services. Setting up active/active replication using the Group Replication plugin enables businesses to maintain data consistency and high availability across multiple database instances. By configuring active/active replication across multiple database instances and distributing read and write operations across them, enterprises can ensure their data remains accessible on one or more DB instances in case of hardware failures, network connectivity issues, or planned maintenance events. Such a cluster topology not only enhances fault tolerance but also provides load balancing capabilities, ensuring even distribution of workloads among the active/active instances. Note that if one instance of the group becomes unavailable, you must redirect connected clients to a different available DB instance in the active/active cluster using a connector, load balancer, router, or middleware. Consequently, businesses can achieve continuous availability, minimize downtime, and deliver a seamless user experience to delight end customers.

Follow-the-sun model – Active/active replication using Group Replication is often used for implementing a ‘follow-the-sun’ model for applications. The Group Replication plugin provides data replication consistency and a flow control mechanism to ensure that consistent copies are available on all DB instances in your active/active cluster. The application can dynamically shift the active primary instance to wherever it is currently daytime, optimizing write performance during peak periods. Further, reads can be load balanced across the remaining DB instances to optimize read latency.

Best practices and key considerations for an active/active cluster using Group Replication plugin

In this section, we discuss best practices for an active/active cluster using the Group Replication plugin.

Number of DB Instances – Choose the number of DB instances in your active/active cluster based on your workload and high availability requirements. We recommend an odd number of DB instances, so that a majority can be established even in case of a network partition. For example, in a 5-node active/active cluster, if a network partition splits it into 3-node and 2-node groups, writes can continue on the 3-node group because a majority quorum can still be established. However, on a 6-node cluster, if the cluster is split into 3 and 3 nodes, a quorum cannot be established.
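As a quick sanity check for quorum, you can count how many members are currently online from any instance in the cluster. The following is a minimal sketch using the standard Group Replication performance_schema view:

    -- Count members currently ONLINE; writes require more than half of
    -- the group's configured members to be reachable.
    SELECT COUNT(*) AS online_members
    FROM performance_schema.replication_group_members
    WHERE MEMBER_STATE = 'ONLINE';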

Read/Write splitting – To optimize application performance in an active/active cluster, you can split the reads and writes across the DB instances in your cluster. Assess your application’s transaction patterns and route read requests to nearby DB instances to reduce latency and enhance throughput. By continuously monitoring and adjusting, your application can adapt to changing workload patterns and load balance effectively.

Connection pooling – Implement connection pooling to efficiently manage database connections and reduce the overhead of establishing new connections for every request.

Error handling – Your application must implement robust error handling when working with active/active replication. Connection errors or failed transactions can occur due to network issues or temporary outages. You can gracefully handle transient errors by implementing retry logic and exponential backoff at the application layer. For distributed concurrency issues like deadlocks, automate retries in your application code. Actively monitor errors like certification failures from lag, and alert on them for investigation. Watch for warning-level errors in logs during rejoin attempts. Well-rounded error handling helps maintain continuous availability for your applications and proactively surfaces serious errors.
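As part of that error handling, your application or health check can verify that an instance is still an ONLINE member of the group and accepts writes before routing traffic to it. The following is a minimal sketch; it assumes the queries run on the instance being checked:

    -- Membership state of this instance within the group.
    SELECT MEMBER_STATE
    FROM performance_schema.replication_group_members
    WHERE MEMBER_ID = @@server_uuid;

    -- An instance that failed to join, or was expelled, is placed in
    -- read-only mode; a value of 1 means writes will be rejected here.
    SELECT @@global.super_read_only;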

Transaction size – Measure your application’s transaction size distribution and patterns before setting the group_replication_transaction_size_limit parameter. This limit caps the size of individual transactions, which can help reduce replication lag. The default value for this parameter is 150,000,000 bytes. OLTP applications that generate small transactions that fit under this limit are able to avoid latency spikes. However, workloads that modify or insert large batches may require larger transactions. Setting the limit too low, or leaving it at the default when your workload exceeds it, can cause critical application transactions to fail unexpectedly. At the same time, a limit that is too high may cause transaction backlogs on replicas and out-of-memory issues. Note that if your chosen instance class does not have enough available memory, we recommend splitting the data modifications into smaller batches instead of increasing this variable. This limit must be configured consistently across all DB instances in the active/active cluster. Proactively tuning this parameter is key for optimizing your application’s performance and replication throughput.
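For example, you can inspect the configured limit on each instance and compare it against your largest observed transactions. On Amazon RDS, the value is changed through the DB parameter group rather than with SET GLOBAL; the table name and batch size below are hypothetical:

    -- Current per-transaction size limit, in bytes (default 150,000,000).
    SHOW VARIABLES LIKE 'group_replication_transaction_size_limit';

    -- Instead of raising the limit for a large purge, break it into
    -- smaller batches (repeat until no rows remain).
    DELETE FROM orders WHERE order_date < '2020-01-01' LIMIT 10000;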

Cascading foreign key constraints – In an active/active cluster using Group Replication, the group_replication_enforce_update_everywhere_checks parameter is set to ON by default to prevent data inconsistencies.

It is important to note that when the value of this variable is set to ON, applications could fail to perform DML operations on tables with cascading foreign key constraints. While disabling this parameter may allow more flexibility in data modifications, it could also introduce data inconsistencies if not managed properly. When considering the use of cascading foreign key constraints in an active/active cluster, you should take appropriate precautions to maintain data integrity and consistency within your application. If you plan to set this parameter to OFF, we recommend that you perform conflict management within your application, for example, by avoiding running transactions against the same table from multiple DB instances. Testing and understanding the behavior of your application under different configurations and parameter settings is essential for managing active/active clusters with complex data relationships.
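The following sketch illustrates the kind of schema that is affected; the parent and child tables are hypothetical examples:

    -- Hypothetical schema with a cascading foreign key constraint.
    CREATE TABLE parent (
      id INT PRIMARY KEY
    );

    CREATE TABLE child (
      id INT PRIMARY KEY,
      parent_id INT,
      FOREIGN KEY (parent_id) REFERENCES parent (id) ON DELETE CASCADE
    );

    -- With group_replication_enforce_update_everywhere_checks = ON,
    -- DML like this can be rejected because deleting from parent
    -- cascades to child.
    DELETE FROM parent WHERE id = 1;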

Failure detection parameters – There are three failure detection parameters that you must adjust to define the failure tolerance for your application:

a) group_replication_member_expel_timeout: Instances in an active/active cluster using the Group Replication plugin may face temporary network issues or go unresponsive due to resource bottlenecks, causing a DB instance in the cluster to temporarily lose communication with the other DB instances of the cluster. It may take a few seconds for the cluster to identify the unresponsive DB instance. When it is identified, a ‘suspicion’ is created. Following the suspicion, the DB instance is expelled from the group after 5 seconds by default. You can configure the expulsion timeout by modifying the group_replication_member_expel_timeout parameter. Increasing group_replication_member_expel_timeout gives unresponsive DB instances an opportunity to recover before being expelled from the ‘group’. Opting for a longer timeout can benefit DB instances that experience brief network interruptions or temporary unresponsiveness. However, increasing this timeout to a very large value may result in delayed detection of failed DB instances, which could block writes even when a majority quorum is established within the remaining healthy DB instances.

b) group_replication_autorejoin_tries: When an instance is expelled from an active/active cluster, the DB instance tries to rejoin the ‘group’ up to 3 times by default, with a 5-minute interval between successive attempts. Depending on your fault tolerance, you can increase or decrease the number of retry attempts by adjusting the group_replication_autorejoin_tries parameter. If the DB instance fails to rejoin the group after the defined number of attempts, you must manually restart group replication.

If a DB instance in your cluster occasionally faces intermittent connectivity issues, and you prefer offering more automatic rejoin opportunities and limiting manual intervention, consider increasing this parameter. Decrease this value if you wish to restrict rejoin attempts or expedite identifying DB instances that require manual intervention.

During and between auto-rejoin attempts, a DB instance remains in read-only mode and does not accept writes, with an increasing likelihood of stale reads over time. If your applications cannot tolerate the possibility of stale reads for any period of time, configure group_replication_autorejoin_tries to 0.

c) group_replication_unreachable_majority_timeout: In some cases, a network partition can divide the cluster by isolating the DB instances into smaller groups. For example, a 7-instance active/active cluster requires at least 4 instances to be responsive for a quorum. If a network partition divides the cluster into groups of 4 and 3 instances, the partition with 3 instances cannot achieve quorum. The group_replication_unreachable_majority_timeout parameter defines the number of seconds for which DB instances that suffer a network partition and cannot connect to the majority will wait before leaving the group. This variable is set to 0 by default, which means the minority will wait forever before leaving the group. To prevent your application from running into this condition, set the parameter to an appropriate value greater than 0. When the defined timeout is reached, all pending transactions on the minority are rolled back, and the DB instances in the minority partition move to the ERROR state. From there, your application can perform error handling as needed.
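To review how your cluster is currently configured, you can query all three failure detection parameters in a single statement on each DB instance; on Amazon RDS, any changes are then made through the DB parameter group:

    SHOW VARIABLES WHERE Variable_name IN (
      'group_replication_member_expel_timeout',
      'group_replication_autorejoin_tries',
      'group_replication_unreachable_majority_timeout'
    );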

Flow control – With Group Replication plugin, a transaction is only considered committed after the majority of the group’s instances (n/2 + 1, where n is the number of instances in the cluster) have agreed on the order of all the transactions that happened. This works well when the group is working with a moderate number of transactions. When there are a large number of transactions, the slower DB instances in the cluster may fall behind. This can lead to stale data being returned when you read from the slower instances. To address this, Group Replication has a flow control mechanism that places a limit on the amount of backlog that DB instances can accumulate, both in terms of transactions to be certified as well as transactions to be applied. You can use group_replication_flow_control_applier_threshold and group_replication_flow_control_certifier_threshold parameters (25,000 transactions by default) to define the acceptable replication performance and lag, based on your application’s workload patterns including transaction sizes, peaks, and overall throughput needs. Whenever these limits are exceeded, and for as long as they continue to be surpassed, the flow control mechanism restricts the throughput of all writer DB instances to match the capacity of the slowest DB instance within the cluster.

You can disable flow control by setting group_replication_flow_control_mode to DISABLED. While disabling flow control may improve performance, it also increases the memory footprint of the process which can lead to Out of Memory conditions. Therefore, it is not recommended to disable flow control, unless your workload traffic pattern can support it and there is enough FreeableMemory available on the instance.

You can also configure replica parallel workers to handle your application throughput demands, so that throttling occurs at reasonable lag levels. Continuously monitor replication delay metrics, rebalancing settings as your workload profile shifts. The optimal combination of parameter values manages lag while maximizing performance across the distributed topology for your application.
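You can watch the certification and applier queues directly to anticipate when flow control is likely to engage; the following is a sketch you can run on any member:

    -- Per-member queue depths; sustained values near the configured
    -- thresholds indicate that flow control will throttle writers.
    SELECT MEMBER_ID,
           COUNT_TRANSACTIONS_IN_QUEUE,
           COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE
    FROM performance_schema.replication_group_member_stats;

    -- Current flow control configuration.
    SHOW VARIABLES LIKE 'group_replication_flow_control%';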

Transaction consistency – You can choose from one of four consistency levels for your active/active cluster using Group Replication: EVENTUAL, BEFORE, AFTER, and BEFORE_AND_AFTER. The default consistency level is EVENTUAL. You can configure the transaction consistency using RDS database parameter groups or at the session level. To learn more about each of the consistency levels, refer to the MySQL user guide.

With stronger consistency levels, your application spends more time waiting for changes to be propagated between DB instances within the cluster. You must carefully evaluate and choose the consistency models based on the application and performance requirements.
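For example, a session that must observe the latest writes committed on other members can opt into a stronger level for just that session, while the cluster default remains EVENTUAL; the accounts table in this sketch is hypothetical:

    -- Wait for preceding transactions from other members to be applied
    -- locally before this session's reads execute.
    SET SESSION group_replication_consistency = 'BEFORE';

    SELECT balance FROM accounts WHERE id = 42;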

VPC peering – If the DB instances of your active/active cluster are in different VPCs, then we recommend configuring VPC peering between the VPCs to ensure that the DB instances can connect over the private IPs.

Monitoring and alerting – We recommend that your application continuously monitor the replication status so that it can detect when a DB instance is unresponsive or has been expelled from the cluster, and reroute traffic accordingly. You can use the following performance_schema tables to monitor the health of your active/active cluster.

performance_schema.replication_group_members: Provides the status of each DB instance that is part of the cluster.

performance_schema.replication_group_member_stats: Provides cluster-level information related to the certification, as well as statistics for the transactions received and originated by each DB instance in the cluster.

performance_schema.replication_connection_status: Provides the current status of the replication I/O thread that handles the replica’s connection to the source DB instance.
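As a starting point, the following sketch joins the membership and statistics views into a single health query that a monitoring job could run periodically:

    -- Membership state plus replication queue depths per member.
    SELECT m.MEMBER_HOST,
           m.MEMBER_STATE,
           s.COUNT_TRANSACTIONS_IN_QUEUE AS certification_queue,
           s.COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE AS applier_queue
    FROM performance_schema.replication_group_members m
    JOIN performance_schema.replication_group_member_stats s
      ON m.MEMBER_ID = s.MEMBER_ID;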

Setting up an active/active cluster with RDS for MySQL instances using the Group Replication plugin

You can either create an active/active cluster from scratch or migrate an existing Amazon RDS for MySQL instance to an active/active cluster setup. You can set up active/active replication across Single-AZ or Multi-AZ instances, or any combination of the two.

Creating a new active/active cluster

To create an active/active cluster from scratch, the RDS for MySQL DB instances should be newly created and should not have any data written into them. If any of the DB instances already contain data, then you must configure group replication by migrating the instance to an active/active cluster as described in the next section.

In the following example, we set up a 3-node active/active cluster from scratch. To create an active/active cluster from scratch, complete the following steps.

  1. Create the DB parameter group with group replication parameters configured.
    For more information, refer to Creating a DB Parameter Group. You can either use the same parameter group for all 3 DB instances, or create a separate parameter group for each instance in the active/active cluster. If you use separate parameter groups for each instance, ensure that the Group Replication configuration parameters are the same in each parameter group; otherwise, group replication will fail to start. For ease of demonstration, we use a single parameter group for all 3 instances in the active/active cluster.
  2. Configure the parameter group with the following parameters.
Parameter Name | Value | Additional Comments
binlog_format | ROW | Setting the value of binlog_format to ROW is a prerequisite for group replication to work. The default value for this parameter is MIXED.
group_replication_group_name | 11111111-2222-3333-4444-555555555555 | This UUID identifies the group; the value shown is for demonstration. You can modify it to use a UUID of your choice. You can generate a MySQL UUID by connecting to a MySQL DB instance and running SELECT UUID().
gtid_mode | ON | Used by Group Replication to track which transactions have been committed on each server instance. This is a prerequisite for Group Replication to function.
enforce_gtid_consistency | ON | This is a prerequisite for Group Replication to function.
rds.custom_dns_resolution | 1 | Specifies whether to allow DNS resolution from the Amazon DNS server in your VPC. DNS resolution must be enabled when group replication is enabled with the rds.group_replication_enabled parameter.
rds.group_replication_enabled | 1 | Enables the group replication feature on the cluster.
slave_preserve_commit_order | ON | Required to ensure that the correct order of transactions is maintained on each DB instance when transactions are executed in parallel.
  3. Create 3 new DB instances using the parameter group created in step 1. For more information, refer to Creating a RDS MySQL DB Instance. Ensure that the parameter group is associated with each instance at the time of database creation. Each Amazon RDS for MySQL instance should be running MySQL 8.0.35 or higher.
  4. Once the instances are in the Available status, connect to each instance and confirm that the group name has been configured. For more information, refer to Creating and connecting to a MySQL DB instance.
    mysql> show variables like 'group_replication_group_name';
    +------------------------------+--------------------------------------+
    | Variable_name | Value |
    +------------------------------+--------------------------------------+
    | group_replication_group_name | 11111111-2222-3333-4444-555555555555 |
    +------------------------------+--------------------------------------+
    1 row in set (0.00 sec)
  5. Configure the parameter group_replication_group_seeds to list the instances that will be part of the active/active cluster. We set this parameter to the comma-separated endpoints of the 3 instances. The following screenshot shows the parameter values in our example.
  6. Confirm that the parameter changes have been propagated by connecting to the instances and running the following query.
    mysql> show variables like '%seeds%';
    +-------------------------------+--------------------------------------------+
    | Variable_name                 | Value                                      |
    +-------------------------------+--------------------------------------------+
    | group_replication_group_seeds | mysql80-gr7-n1.abcdefghijkl.us-west-2.rds.amazonaws.com:3306,mysql80-gr7-n2.abcdefghijkl.us-west-2.rds.amazonaws.com:3306,mysql80-gr7-n3.abcdefghijkl.us-west-2.rds.amazonaws.com:3306 |
    +-------------------------------+--------------------------------------------+
    1 row in set (0.00 sec)
  7. Configure the binary log retention, create the group replication user, and create the group replication recovery channel. Run the following statements on all 3 instances of the active/active cluster.
    # Configure binary log retention (168 hours = 7 days).
    call mysql.rds_set_configuration('binlog retention hours', 168);

    # Create the group replication user.
    call mysql.rds_group_replication_create_user('group_replication_user_password');

    # Create the group replication recovery channel.
    call mysql.rds_group_replication_set_recovery_channel('group_replication_user_password');
  8. Bootstrap the cluster by running the following stored procedure on one of the DB instances of the cluster.
    call mysql.rds_group_replication_start(1);

The statement starts group replication on one DB instance of the cluster. A parameter value of 1 passed to the stored procedure rds_group_replication_start specifies that we are bootstrapping the cluster.

  9. To start group replication on the other DB instances of the cluster, run rds_group_replication_start with a parameter value of 0.
    call mysql.rds_group_replication_start(0);
  10. Confirm that group replication is running on all 3 DB instances by running the following query. Confirm that the member state is ONLINE for all DB instances in the cluster.
    mysql> select * from performance_schema.replication_group_members;
    | CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
    | group_replication_applier | d8f5281c-744c-11ee-9783-06fb8f36ecdb | ip-172-xx-x-xxx | 3306 | ONLINE | PRIMARY | 8.0.35 | MySQL |
    | group_replication_applier | d9ced226-744c-11ee-9257-06a0f6d23da7 | ip-172-xx-x-xxx | 3306 | ONLINE | PRIMARY | 8.0.35 | MySQL |
    | group_replication_applier | e7f6c424-744c-11ee-836f-0619032f4835 | ip-172-xx-x-xxx | 3306 | ONLINE | PRIMARY | 8.0.35 | MySQL |
    3 rows in set (0.00 sec)
  11. Insert data on any instance and confirm that it is replicated to the remaining instances in the active/active cluster.

Migrating an existing RDS instance to an active/active cluster

Migrating an existing instance to an active/active cluster is similar to creating an active/active cluster from scratch, except that you perform a point-in-time restore (PITR) of your existing RDS for MySQL instance instead of creating new DB instances.

Prerequisites

Ensure that the parameter group associated with your DB instance is not shared with any other RDS instance. A shared parameter group could cause misconfiguration on other instances that will not be part of your active/active cluster. If it is shared, associate your instance with a separate parameter group. In this example, we create a new parameter group for ease of demonstration. Your Amazon RDS for MySQL instance must be running MySQL 8.0.35 or higher.

  1. Create the DB parameter group with group replication parameters configured.
    For more information, refer to Creating a DB Parameter Group. You can either use the same parameter group for all 3 DB instances, or create a separate parameter group for each instance in the active/active cluster. If you use separate parameter groups for each instance, ensure that the Group Replication configuration parameters are the same in each parameter group; otherwise, group replication will fail to start. For ease of demonstration, we use a single parameter group for all 3 instances of the active/active cluster.
  2. Configure the parameter group with the following parameters.
Parameter Name | Value | Additional Comments
binlog_format | ROW | Setting the value of binlog_format to ROW is a prerequisite for group replication to work. The default value for this parameter is MIXED.
group_replication_group_name | 11111111-2222-3333-4444-555555555555 | This UUID identifies the group; the value shown is for demonstration. You can modify it to use a UUID of your choice. You can generate a MySQL UUID by connecting to a MySQL DB instance and running SELECT UUID().
gtid_mode | ON | Used by Group Replication to track which transactions have been committed on each server instance. This is a prerequisite for Group Replication to function.
enforce_gtid_consistency | ON | This is a prerequisite for Group Replication to function.
rds.custom_dns_resolution | 1 | Specifies whether to allow DNS resolution from the Amazon DNS server in your VPC. DNS resolution must be enabled when group replication is enabled with the rds.group_replication_enabled parameter.
rds.group_replication_enabled | 1 | Enables the group replication feature on the cluster.
slave_preserve_commit_order | ON | Required to ensure that the correct order of transactions is maintained on each DB instance when transactions are executed in parallel.
  3. Modify the instance by associating it with the parameter group created in step 1. For more information, refer to Associating a DB parameter group with a DB instance.
  4. Reboot the instance to allow the new parameter group changes to take effect. For more information, refer to Rebooting a DB instance.
  5. Create a snapshot of the current instance. For more information, refer to Creating a DB snapshot.
  6. When the snapshot is complete, create the other 2 DB instances of the cluster by performing a point-in-time restore (PITR) of the current instance to the latest point in time. For more information, refer to Restoring a DB instance to a specified time. Because we need two additional DB instances for our cluster, you must perform the PITR action twice. Remember to associate the newly created parameter group at the time of creation of each restored instance.
  7. Set the parameter group_replication_group_seeds to the endpoints of the 3 instances. The following screenshot shows the parameter values in our example.
  8. Confirm that the parameter changes have been propagated by connecting to the instances and running the following query.
    mysql> show variables like '%seeds%';
    +-------------------------------+----------------------------------+
    | Variable_name                 | Value                            |
    +-------------------------------+----------------------------------+
    | group_replication_group_seeds | mysql80-gr7-n1.abcdefghijkl.us-west-2.rds.amazonaws.com:3306,mysql80-gr7-n2.abcdefghijkl.us-west-2.rds.amazonaws.com:3306,mysql80-gr7-n3.abcdefghijkl.us-west-2.rds.amazonaws.com:3306 |
    +-------------------------------+----------------------------------+
    1 row in set (0.00 sec)
  9. Configure the binary log retention, create the group replication user, and create the group replication recovery channel. Run the following statements on all 3 instances of the active/active cluster.
    # Configure binary log retention (168 hours = 7 days).
    call mysql.rds_set_configuration('binlog retention hours', 168);

    # Create the group replication user.
    call mysql.rds_group_replication_create_user('group_replication_user_password');

    # Create the group replication recovery channel.
    call mysql.rds_group_replication_set_recovery_channel('group_replication_user_password');
  10. Bootstrap the cluster by running the following stored procedure on any one of the DB instances of the cluster.
    call mysql.rds_group_replication_start(1);

The statement starts group replication on one DB instance of the cluster. A parameter value of 1 passed to the stored procedure rds_group_replication_start specifies that we are bootstrapping the cluster.

  11. To start group replication on the other DB instances of the cluster, run rds_group_replication_start with a parameter value of 0.
    call mysql.rds_group_replication_start(0);
  12. Confirm that group replication is running on all 3 DB instances by running the following query. Confirm that the member state for each DB instance is ONLINE.
    mysql> select * from performance_schema.replication_group_members;
    | CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
    | group_replication_applier | d8f5281c-744c-11ee-9783-06fb8f36ecdb | ip-172-xx-x-xxx | 3306 | ONLINE | PRIMARY | 8.0.35 | MySQL |
    | group_replication_applier | d9ced226-744c-11ee-9257-06a0f6d23da7 | ip-172-xx-x-xxx | 3306 | ONLINE | PRIMARY | 8.0.35 | MySQL |
    | group_replication_applier | e7f6c424-744c-11ee-836f-0619032f4835 | ip-172-xx-x-xxx | 3306 | ONLINE | PRIMARY | 8.0.35 | MySQL |
    3 rows in set (0.00 sec)
  13. Insert data on any instance and confirm that it is being replicated to the remaining instances in the active/active cluster.

Clean up

When you are ready to clean up your test setup, we recommend the following steps to convert your active/active cluster to standalone DB instance(s).

  1. Stop group replication by running the following stored procedure on each DB instance – call mysql.rds_group_replication_stop();
  2. In your custom parameter group, remove the endpoints of the DB instance(s) that you are removing from the group_replication_group_seeds parameter.
  3. Delete the parameters group_replication_group_name, rds.group_replication_enabled, and group_replication_group_seeds from the parameter group associated with the DB instance(s).
  4. Reboot the DB instance(s) for the updated parameter settings to take effect.

For more information refer to the RDS for MySQL user guide.

Conclusion

The introduction of Group Replication plugin to Amazon RDS for MySQL enables applications to implement continuous availability and optimize write latency across up to nine DB instances in an active/active configuration. The Group Replication plugin maintains data consistency across all the DB instances in the active/active cluster, ensuring that your critical data remains reliable and accessible.

Consider your application’s workload patterns to decide whether implementing active/active replication using the Group Replication plugin is the right choice for your use case. Evaluate the configuration options, monitor performance, and scale your database infrastructure as needed to optimize your application’s performance and reliability, and provide a seamless experience to your users.

We have adapted the concepts from this post into a deployable solution, now available as Guidance for Active-Active Replication on Amazon RDS for MySQL in the AWS Solutions Library. To get started, review the architecture diagrams and the corresponding AWS Well-Architected framework, then deploy the sample code to implement the Guidance into your workloads.

If you have questions or feedback, leave a comment.


About the Authors

Mershad Irani is a Database Engineer (DBE) at Amazon Web Services. He is a Subject Matter Expert (SME) on Amazon RDS for MySQL and Amazon Aurora MySQL.

Vijay Karumajji is a Principal Database Solutions Architect with Amazon Web Services. He works with our customers to provide guidance and technical assistance on database projects, helping them improve the value of their solutions when using AWS.