AWS Big Data Blog
Implement Amazon EMR HBase Graceful Scaling
Apache HBase is a massively scalable, distributed big data store in the Apache Hadoop ecosystem. We can use Amazon EMR with HBase on top of Amazon Simple Storage Service (Amazon S3) for random, strictly consistent, real-time access to tables queried through Apache Kylin. Such a cluster ingests data through Spark jobs and queries the HTables through Apache Kylin cubes. The HBase cluster uses HBase write-ahead logs (WAL) instead of Amazon EMR WAL.
As time goes by, companies may want to scale in long-running Amazon EMR HBase clusters because of issues such as Amazon Elastic Compute Cloud (Amazon EC2) scheduling events and budget concerns. Another issue is that companies may use Spot Instances and auto scaling for task nodes to gain short-term parallel computation power for workloads like MapReduce tasks and Spark executors. Amazon EMR also runs HBase region servers on task nodes for Amazon EMR HBase on S3 clusters, so Spot interruptions lead to unexpected shutdowns of HBase region servers. For an Amazon EMR HBase cluster that doesn't enable the write-ahead logs (WAL) for Amazon EMR feature, an unexpected shutdown of an HBase region server causes WAL splits during the server recovery process, which brings extra load to the cluster and sometimes leaves HTables inconsistent.
For these reasons, administrators look for a way to scale in an Amazon EMR HBase cluster gracefully and stop all HBase region servers on the task nodes.
This post demonstrates how to gracefully decommission target region servers programmatically. The script performs the following tasks and has been tested successfully on Amazon EMR 7.3.0, 6.15.0, and 5.36.2:
- Automatically move the HRegions through a script
- Raise the decommission priority
- Decommission HBase region servers gracefully
- Prevent Amazon EMR from provisioning region servers on task nodes through Amazon EMR software configurations
- Prevent Amazon EMR from provisioning region servers on task nodes through Amazon EMR steps
Overview of solution
For graceful scaling in, the script uses the HBase built-in graceful_stop.sh to move regions to other region servers, which avoids WAL splits when decommissioning nodes. The script uses the HDFS CLI and web interface to make sure there are no missing or corrupted HDFS blocks during the scaling events. To prevent Amazon EMR from provisioning HBase region servers on task nodes, administrators need to specify software configurations per instance group when launching a cluster. For existing clusters, administrators can either use a step to terminate HBase region servers on task nodes, or reconfigure the task instance group's HBase rootdir.
Solution
For a running Amazon EMR cluster, administrators can use the AWS Command Line Interface (AWS CLI) to issue a modify-instance-groups command with EC2InstanceIdsToTerminate to terminate specified instances immediately. But terminating an instance this way can cause data loss and unpredictable cluster behavior when HDFS blocks don't have enough copies or there are ongoing tasks on the decommissioned nodes. To avoid these risks, administrators can instead send a modify-instance-groups request with a new instance count and without specifying the instance IDs to terminate. This command triggers a graceful decommission process on the Amazon EMR side. However, Amazon EMR only supports graceful decommission for YARN and HDFS; Amazon EMR doesn't support graceful decommission for HBase.
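The following is a minimal sketch of the two approaches; the cluster ID, instance group ID, and instance ID are placeholders.

```bash
# Immediate termination of a specific instance (risks data loss):
aws emr modify-instance-groups \
  --cluster-id j-XXXXXXXXXXXXX \
  --instance-groups InstanceGroupId=ig-XXXXXXXXXXXXX,EC2InstanceIdsToTerminate=i-0123456789abcdef0

# Graceful decommission by lowering the instance count (Amazon EMR picks which node to remove):
aws emr modify-instance-groups \
  --cluster-id j-XXXXXXXXXXXXX \
  --instance-groups InstanceGroupId=ig-XXXXXXXXXXXXX,InstanceCount=3
```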
Hence, administrators can try method 1, described later in this post, to raise the decommission priority of the decommission targets as the first step. If tweaking the decommission priority doesn't work, move forward to the second approach, method 2, which is to stop the resizing request and move the HRegions manually before terminating the target core nodes. Note that Amazon EMR is a managed service. The Amazon EMR service terminates an EC2 instance after anyone stops it or detaches its Amazon Elastic Block Store (Amazon EBS) volumes. Therefore, don't try to detach EBS volumes on the decommission targets and attach them to new nodes.
Method 1: Decommission HBase region servers through resizing
To decommission Hadoop nodes, administrators can normally add decommission targets to HDFS's and YARN's exclude lists, dfs.hosts.exclude and yarn.nodes.exclude.xml. However, Amazon EMR disallows manual updates to these files, because the Amazon EMR service daemon, the master instance controller, is the only valid process to update these two files on master nodes. Manual updates to these two files will be reset.
Thus, one of the most accessible ways to raise a core node's decommission priority in Amazon EMR is to reduce its instance controller heartbeats.
As the first step, pass move_regions to the following script on Amazon S3, blog_HBase_graceful_decommission.sh, as an Amazon EMR step to move HRegions to other region servers and to shut down the region server and instance controller processes. Also provide targetRS and S3Path to blog_HBase_graceful_decommission.sh. targetRS is the private DNS of the decommission target region server. S3Path is the location of the region migration script.
This step needs to be run in off-peak hours. After all HRegions on the target region server have been moved to other nodes, the WAL-splitting activity that follows stopping the HBase region server generates a very low workload for the cluster because the server serves 0 regions.
For more information, refer to blog_HBase_graceful_decommission.sh.
Taking a closer look at the move_regions option in blog_HBase_graceful_decommission.sh, the script disables the region balancer and moves the regions to other region servers. The script retrieves Secure Shell (SSH) credentials from AWS Secrets Manager to access worker nodes.
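The region movement itself can be pictured with plain HBase shell commands. The following is a minimal sketch, not the blog script itself; the encoded region name and destination server are placeholders.

```bash
# Disable the balancer so HBase doesn't move regions back while the node is evacuated.
echo "balance_switch false" | hbase shell

# Move one region (identified by its encoded name) to a specific destination region server.
# ENCODED_REGION_NAME and DESTINATION_SERVER are placeholders.
echo "move 'ENCODED_REGION_NAME', 'DESTINATION_SERVER'" | hbase shell
```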
In addition, the script includes some AWS CLI operations. Make sure the instance profile, EMR_EC2_DefaultRole, can call the following APIs and has the SecretsManagerReadWrite permission.
Amazon EMR APIs:
- describe-cluster
- list-instances
- modify-instance-groups

Amazon S3 APIs:
- cp

Secrets Manager APIs:
- get-secret-value
In Amazon EMR 5.x, HBase on Amazon S3 makes the master node also work as a region server hosting hbase:meta regions. The script gets stuck when it tries to move non-hbase:meta HRegions to the master. To automate the script, the maxthreads parameter is increased so that regions are moved through multiple threads. While moving regions in a while loop, the thread that tries to move a non-hbase:meta HRegion to the master node gets a runtime error; the other threads keep moving HRegions to other region servers. After the stuck thread times out after 300 seconds, the script moves forward to the next run. After six retries, manual action is required, such as moving the remaining regions through a move command in the HBase shell or resubmitting the step.
The following is an example of using the script to invoke the move_regions function through blog_HBase_graceful_decommission.sh as an Amazon EMR step to move regions.
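This is a hypothetical sketch; the cluster ID, S3 bucket, and region server private DNS are placeholders, and the argument order (function name, then targetRS, then S3Path) is an assumption, so check the script header for the exact syntax:

```bash
aws emr add-steps \
  --cluster-id j-XXXXXXXXXXXXX \
  --steps '[{
      "Type": "CUSTOM_JAR",
      "Jar": "command-runner.jar",
      "Name": "Move HRegions off the decommission target",
      "ActionOnFailure": "CONTINUE",
      "Args": ["bash", "-c",
        "aws s3 cp s3://amzn-s3-demo-bucket/blog_HBase_graceful_decommission.sh .; bash blog_HBase_graceful_decommission.sh move_regions ip-10-0-0-11.ec2.internal s3://amzn-s3-demo-bucket/"]
  }]'
```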
In the HBase web UI, the target region server will serve 0 regions after the evacuation, as shown in the following screenshot.
After that, the stop_RS_IC function in the script stops the HBase region server and instance controller processes on the decommission target after making sure that there are no running YARN containers on that node.
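A rough sketch of what such a check-then-stop sequence can look like on Amazon EMR 5.30.0 and later is shown below; the node DNS is a placeholder and the systemd unit names are assumptions, so verify them on your release.

```bash
# Run on the decommission target node.
NODE="ip-10-0-0-11.ec2.internal"   # placeholder private DNS of this node

# Wait until the node reports zero running containers
# ("Number-of-Running-Containers" column of `yarn node -list`).
while yarn node -list 2>/dev/null | grep "$NODE" | awk '{print $NF}' | grep -qv '^0$'; do
  echo "Containers still running on $NODE, waiting..."
  sleep 30
done

# Stop the HBase region server and the instance controller (systemd-based releases).
sudo systemctl stop hbase-regionserver
sudo systemctl stop instance-controller
```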
Note that the script is for Amazon EMR 5.30.0 and later release versions. For Amazon EMR 4.x-5.29.0 release versions, stop_RS_IC in the script needs to be updated by referring to How do I restart a service in Amazon EMR? in the AWS Knowledge Center. Also, in Amazon EMR versions earlier than 5.30.0, Amazon EMR uses a service nanny to watch the status of other processes. If the service nanny automatically restarts the instance controller, stop the service nanny in the stop_RS_IC function before stopping the instance controller on that node. Here's an example:
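The following is a hedged sketch for pre-5.30.0 releases; the service names and init mechanism differ between releases, so treat the commands below as assumptions and confirm them against the Knowledge Center article for your release.

```bash
# Stop the service nanny first so it doesn't restart the processes we are stopping.
sudo /etc/init.d/service-nanny stop

# Then stop the HBase region server and the instance controller
# (older releases use SysV init/upstart instead of systemd).
sudo stop hbase-regionserver
sudo stop instance-controller
```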
After the step is successfully completed, scale in by setting the desired node count to (current core node count − 1) using the Amazon EMR console. Amazon EMR might pick the target core node to decommission because the instance controller isn't running on that node. There can be a few minutes of delay before Amazon EMR detects the heartbeat loss of the target node through polling the instance controller. Thus, make sure the workload is very low and that no containers will be assigned to the target node for a while.
Stopping the instance controller merely increases the decommissioning priority; method 1 doesn't guarantee that Amazon EMR will pick the target core node as the decommissioning target. If Amazon EMR doesn't pick the decommission target after using method 1, administrators can stop the resize activity using the AWS Management Console and then proceed to method 2.
Method 2: Manually decommission the target core nodes
Administrators can terminate a node using the EC2InstanceIdsToTerminate option in the modify-instance-groups API. But this action directly terminates the EC2 instance and risks losing HDFS blocks. To mitigate the risk of data loss, administrators can use the following steps in off-peak hours with zero or very few running jobs.
First, run the move_regions function through blog_HBase_graceful_decommission.sh as an Amazon EMR step, as described in method 1. The function moves HRegions to other region servers and stops the HBase region server as well as the instance controller process.
Then, run the terminate_ec2 function in blog_HBase_graceful_decommission.sh as an Amazon EMR step. To run this function successfully, provide the target instance group ID and target instance ID to the script. This function terminates only one node at a time by specifying the EC2InstanceIdsToTerminate option in the modify-instance-groups API. This makes sure that core nodes are not terminated back-to-back and lowers the risk of missing HDFS blocks. It inspects HDFS and makes sure all HDFS blocks have at least two copies. If an HDFS block has only one copy, the script exits with an error message similar to "Some HDFS blocks have only 1 copy. Please increase HDFS replication factor through the following command for existing HDFS blocks."
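For existing blocks, the replication factor can be raised with the standard HDFS setrep command; the path below is a placeholder.

```bash
# Raise the replication factor of existing files under the given path to 2,
# and wait (-w) until replication completes.
hdfs dfs -setrep -w 2 /user/hbase
```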
To make sure all upcoming HDFS blocks have at least two copies, reconfigure the core instance group with the following software configuration:
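A minimal example of such a reconfiguration, assuming the goal is to raise the default dfs.replication value to 2:

```json
[
  {
    "Classification": "hdfs-site",
    "Properties": {
      "dfs.replication": "2"
    }
  }
]
```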
In addition, the terminate_ec2 function compares the metadata of replicated blocks before and after terminating the core node using hdfs dfsadmin -report. This makes sure that the number of under-replicated, corrupted, or missing HDFS blocks doesn't increase.
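Conceptually, the comparison can be sketched as capturing the report before and after the termination and diffing the block health counters; the file names follow the rep_blocks_${beforeDate} naming mentioned later, and the exact filtering in the real script may differ.

```bash
beforeDate=$(date +%Y%m%d%H%M%S)
hdfs dfsadmin -report | grep -E "Under replicated|Missing blocks|corrupt" > rep_blocks_${beforeDate}

# ... terminate the target core node and wait for the decommission to complete ...

afterDate=$(date +%Y%m%d%H%M%S)
hdfs dfsadmin -report | grep -E "Under replicated|Missing blocks|corrupt" > rep_blocks_${afterDate}

# No output from diff means no new under-replicated, corrupt, or missing blocks.
diff rep_blocks_${beforeDate} rep_blocks_${afterDate}
```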
The terminate_ec2 function tracks the decommission status, and the script completes after the decommission completes. It can take some time to recover HDFS blocks; the elapsed time depends on several factors such as the total number of blocks, I/O, bandwidth, the number of HDFS handlers, and NameNode resources. If there are many HDFS blocks to be recovered, it may take a few hours to complete. Before running the script, make sure that the instance profile, EMR_EC2_DefaultRole, has the elasticmapreduce:ModifyInstanceGroups permission.
The following is an example of using the script to invoke the terminate_ec2 function through blog_HBase_graceful_decommission.sh as an Amazon EMR step to terminate the decommissioned core node.
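This is a hypothetical sketch; the cluster ID, S3 bucket, instance group ID, and instance ID are placeholders, and the argument order is an assumption:

```bash
aws emr add-steps \
  --cluster-id j-XXXXXXXXXXXXX \
  --steps '[{
      "Type": "CUSTOM_JAR",
      "Jar": "command-runner.jar",
      "Name": "Gracefully terminate the decommissioned core node",
      "ActionOnFailure": "CONTINUE",
      "Args": ["bash", "-c",
        "aws s3 cp s3://amzn-s3-demo-bucket/blog_HBase_graceful_decommission.sh .; bash blog_HBase_graceful_decommission.sh terminate_ec2 ig-XXXXXXXXXXXXX i-0123456789abcdef0"]
  }]'
```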
While invoking terminate_ec2, the script checks the HDFS NameNode web UI for the decommission target to understand how many blocks need to be recovered on other nodes after submitting the decommission request. Here are the steps:
- For Amazon EMR 6.x, open the HDFS NameNode web UI. For example, enter http://<master-node-public-DNS>:9870
- On the top menu bar, choose Datanodes
- In the In operation section, check the on-service data nodes and the total number of data blocks on the nodes, as shown in the following screenshot.
- To view the HDFS decommissioning progress, go to Overview, as shown in the following screenshot.
On the Datanodes page, the decommission target node will not have a green checkmark, and the node will be in the Decommissioning section, as shown in the following screenshot.
The step’s STDOUT also reveals the decommission status:
The decommission target will transition from Decommissioning to Decommissioned in the HDFS NameNode web UI, as shown in the following screenshot.
The decommissioned target will appear in the Dead datanodes section of the step's STDOUT after the process is completed:
After the target node is decommissioned, the hdfs dfsadmin -report output is displayed in the last section of the step's STDOUT. There should be no difference between rep_blocks_${beforeDate} and rep_blocks_${afterDate}, as described in the script. This means there are no additional under-replicated, missing, or corrupt blocks after the decommission. In the HBase web UI, the decommissioned region server will be moved to the dead region servers list. The dead region server records will be reset after restarting HMaster during routine maintenance.
After the Amazon EMR step is completed without errors, repeat the preceding steps to decommission the next target core node if there is more than one core node to decommission.
After all decommission tasks are complete, administrators can manually enable the HBase balancer through the HBase shell again:
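A minimal example, run on the master node:

```bash
# Re-enable the HBase balancer; the command prints the previous balancer state.
echo "balance_switch true" | hbase shell
```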
Prevent Amazon EMR from provisioning HBase region servers on task nodes
For new clusters, configure HBase settings for the master and core instance groups only, and keep the HBase settings empty for the task instance groups when launching an Amazon EMR HBase on S3 cluster. This prevents provisioning HBase region servers on task nodes.
For example, define configurations for applications other than HBase settings in the software configuration textbox in the Software settings section on the Amazon EMR console, as shown in the following screenshot.
Then, configure HBase settings in Node configuration – optional for each instance group in the Cluster configuration – required section, as shown in the following screenshot.
For master and core instance groups, HBase configurations will look like the following screenshot.
Here's a JSON-formatted example:
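The following is an illustrative configuration for the master and core instance groups, assuming HBase on S3 with a placeholder bucket:

```json
[
  {
    "Classification": "hbase",
    "Properties": {
      "hbase.emr.storageMode": "s3"
    }
  },
  {
    "Classification": "hbase-site",
    "Properties": {
      "hbase.rootdir": "s3://amzn-s3-demo-bucket/hbase-root"
    }
  }
]
```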
For task instance groups, there will be no HBase configuration, as shown in the following screenshot.
Here's a JSON-formatted example:
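As a sketch, the task instance group simply carries no HBase classifications, for example an empty configuration list:

```json
[]
```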
Here’s an example in AWS CLI:
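The following AWS CLI sketch shows the idea: per-instance-group configurations are supplied in a separate JSON file (instance-groups.json, a hypothetical file name) in which only the master and core groups contain the HBase classifications shown above. The key pair, subnet, and other values are placeholders.

```bash
aws emr create-cluster \
  --name "hbase-on-s3-graceful-scaling" \
  --release-label emr-7.3.0 \
  --applications Name=HBase Name=Spark \
  --instance-groups file://instance-groups.json \
  --configurations file://other-apps-config.json \
  --use-default-roles \
  --ec2-attributes KeyName=my-key-pair,SubnetId=subnet-0123456789abcdef0
```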
Stop and decommission the HBase region servers on task nodes
For an existing Amazon EMR HBase on S3 cluster, pass stop_and_check_task_rs to blog_HBase_graceful_decommission.sh as an Amazon EMR step to stop HBase region servers on nodes in a task instance group. The script requires a task instance group ID and an S3 location in which to place shared scripts for the task nodes.
The following is an example of passing stop_and_check_task_rs to blog_HBase_graceful_decommission.sh as an Amazon EMR step to stop HBase region servers on nodes in a task group.
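This is a hypothetical sketch; the cluster ID, task instance group ID, and S3 bucket are placeholders, and the argument order is an assumption:

```bash
aws emr add-steps \
  --cluster-id j-XXXXXXXXXXXXX \
  --steps '[{
      "Type": "CUSTOM_JAR",
      "Jar": "command-runner.jar",
      "Name": "Stop HBase region servers on the task instance group",
      "ActionOnFailure": "CONTINUE",
      "Args": ["bash", "-c",
        "aws s3 cp s3://amzn-s3-demo-bucket/blog_HBase_graceful_decommission.sh .; bash blog_HBase_graceful_decommission.sh stop_and_check_task_rs ig-XXXXXXXXXXXXX s3://amzn-s3-demo-bucket/"]
  }]'
```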
This step not only stops HBase region servers on existing task nodes; to avoid provisioning HBase region servers on new task nodes, the script also reconfigures and scales in the task group. Here are the steps:
- Using the move_regions function in blog_HBase_graceful_decommission.sh, move HRegions on the task group to other nodes and stop region servers on those task nodes.
After making sure that the HBase region servers are stopped on these task nodes, the script reconfigures the task instance group so that the HBase rootdir points to a nonexistent location. These settings apply only to the task group. Here's an example:
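An illustrative reconfiguration for the task instance group; the nonexistent path is a placeholder:

```json
[
  {
    "Classification": "hbase-site",
    "Properties": {
      "hbase.rootdir": "hdfs://non-existing-path/hbase"
    }
  }
]
```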
When the task group's state returns to RUNNING, the script scales the task group in to 0 nodes. New task nodes in upcoming scaling-out events will not run HBase region servers.
Conclusion
These scaling steps demonstrate how to handle Amazon EMR HBase scaling gracefully. The functions in the script can help administrators resolve problems when companies want to gracefully scale in Amazon EMR HBase on S3 clusters without Amazon EMR WAL.
If you have a similar need to scale in an Amazon EMR HBase on S3 cluster gracefully because the cluster doesn't enable Amazon EMR WAL, you can refer to this post. Test the steps in a testing environment for verification first. After you confirm the steps meet your production requirements, you can apply them to the production environment.
About the Authors
Yu-Ting Su is a Sr. Hadoop Systems Engineer at Amazon Web Services (AWS). Her expertise is in Amazon EMR and Amazon OpenSearch Service. She’s passionate about distributing computation and helping people to bring their ideas to life.
Hsing-Han Wang is a Cloud Support Engineer at Amazon Web Services (AWS). He focuses on Amazon EMR and AWS Lambda. Outside of work, he enjoys hiking and jogging, and he is also an Eorzean.
Cheng Wang is a Technical Account Manager at AWS who has over 10 years of industry experience, focusing on enterprise service support, data analysis, and business intelligence solutions.
Chris Li is an Enterprise Support manager at AWS. He leads a team of Technical Account Managers to solve complex customer problems and implement well-structured solutions.