AWS Big Data Blog
OpenSearch optimized instance (OR1) is game changing for indexing performance and cost
Amazon OpenSearch Service securely unlocks real-time search, monitoring, and analysis of business and operational data for use cases like application monitoring, log analytics, observability, and website search.
In this post, we examine the OR1 instance type, an OpenSearch optimized instance introduced on November 29, 2023.
OR1 is an instance type for Amazon OpenSearch Service that provides a cost-effective way to store large amounts of data. A domain with OR1 instances uses Amazon Elastic Block Store (Amazon EBS) volumes for primary storage, with data copied synchronously to Amazon Simple Storage Service (Amazon S3) as it arrives. OR1 instances provide increased indexing throughput with high durability.
To learn more about OR1, see the introductory blog post.
While an index is being actively written to, we recommend that you keep one replica. However, you can switch to zero replicas after a rollover, once the index is no longer being actively written.
This can be done safely because the data is persisted in Amazon S3 for durability.
Note that in case of a node failure and replacement, your data is automatically restored from Amazon S3, but it is partially unavailable during the repair operation. Therefore, don't use zero replicas when searches on indices that are no longer being written to require high availability.
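As a minimal illustration, dropping the replica on a rolled-over backing index is a single settings update (the index name below is a placeholder); the same step can also be automated with an ISM replica_count action, as shown later in this post:

```
PUT .ds-logs-benchmark-000001/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}
```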
Goal
In this blog post, we’ll explore how OR1 impacts the performance of OpenSearch workloads.
Because OR1 instances use segment replication, they save CPU cycles by indexing only on the primary shards. As a result, the nodes can index more data with the same amount of compute, or use fewer resources for indexing, leaving more headroom for search and other operations.
For this post, we’re going to consider an indexing-heavy workload and do some performance testing.
Traditionally, Amazon Elastic Compute Cloud (Amazon EC2) R6g instances have been a high-performing choice for indexing-heavy workloads, relying on Amazon EBS storage. Im4gn instances provide local NVMe SSDs for high-throughput, low-latency disk writes.
We will compare OR1 indexing performance to that of these two instance types; indexing performance is the sole focus of this blog post.
Setup
For our performance testing, we set up multiple components, as shown in the following figure:
For the testing process:
- AWS Step Functions orchestrates an initialization step, which cleans up the environment and sets up the index mapping, and then runs the batch testing.
- AWS Batch runs parallel jobs to index log data in OpenTelemetry JSON format.
- The jobs run a custom Rust program that generates randomized logs and indexes them using the OpenSearch Rust Client with AWS Identity and Access Management (IAM) authentication.
- The OpenSearch Service domain is set up with OpenSearch 2.11, two availability zones, fine-grained access control, encryption at rest using AWS Key Management Service (AWS KMS), and encryption in transit using TLS.
The index mapping, which is part of our initialization step, is structured along the following lines (a representative sketch rather than the exact template; the shard count and the OpenTelemetry-style field names are illustrative):
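```
PUT _index_template/logs-benchmark-template
{
  "index_patterns": ["logs-benchmark*"],
  "data_stream": {},
  "template": {
    "settings": {
      "index.number_of_shards": 12,
      "index.number_of_replicas": 1
    },
    "mappings": {
      "dynamic": false,
      "properties": {
        "@timestamp": { "type": "date" },
        "severityText": { "type": "keyword" },
        "body": { "type": "text" },
        "resource": { "type": "flat_object" },
        "attributes": { "type": "flat_object" }
      }
    }
  }
}
```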
As you can see, we’re using a data stream to simplify the rollover configuration and keep the maximum primary shard size under 50 GiB, as per best practices.
We optimized the mapping to avoid any unnecessary indexing activity and use the flat_object field type to avoid field mapping explosion.
For reference, the Index State Management (ISM) policy we used follows the general shape below; the sketch is representative rather than exact, rolling over on primary shard size and then reducing the replica count as discussed earlier:
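```
PUT _plugins/_ism/policies/logs-benchmark-policy
{
  "policy": {
    "description": "Roll over on primary shard size, then drop the replica",
    "default_state": "hot",
    "ism_template": [
      { "index_patterns": ["logs-benchmark*"], "priority": 100 }
    ],
    "states": [
      {
        "name": "hot",
        "actions": [
          { "rollover": { "min_primary_shard_size": "50gb" } }
        ],
        "transitions": [
          { "state_name": "warm" }
        ]
      },
      {
        "name": "warm",
        "actions": [
          { "replica_count": { "number_of_replicas": 0 } }
        ],
        "transitions": []
      }
    ]
  }
}
```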
Our average document size is 1.6 KiB and the bulk size is 4,000 documents per bulk, which makes approximately 6.26 MiB per bulk (uncompressed).
Testing protocol
The protocol parameters are as follows:
- Number of data nodes: 6 or 12
- Job parallelism: 75 or 40
- Primary shard count: 12, 48, or 96 (96 only with 12 data nodes)
- Number of replicas: 1 (total of 2 copies)
- Instance types (each with 16 vCPUs):
  - or1.4xlarge.search
  - r6g.4xlarge.search
  - im4gn.4xlarge.search
| Cluster | Instance type | vCPU | RAM (GiB) | JVM heap (GiB) |
|---|---|---|---|---|
| or1-target | or1.4xlarge.search | 16 | 128 | 32 |
| im4gn-target | im4gn.4xlarge.search | 16 | 64 | 32 |
| r6g-target | r6g.4xlarge.search | 16 | 128 | 32 |
Note that the im4gn cluster has half the RAM of the other two, but each environment still has the same JVM heap size of approximately 32 GiB.
Performance testing results
For the performance testing, we started with 75 parallel jobs and 750 batches of 4,000 documents per client (a total of 225 million documents). We then adjusted the number of shards, data nodes, replicas, and jobs.
Configuration 1: 6 data nodes, 12 primary shards, 1 replica
For this configuration, we used 6 data nodes, 12 primary shards, and 1 replica, and we observed the following performance:
| Cluster | CPU usage | Time taken | Indexing speed (kdoc/s) | Indexing speed (MiB/s) |
|---|---|---|---|---|
| or1-target | 65-80% | 24 min | 156 | 243 |
| im4gn-target | 89-97% | 34 min | 110 | 172 |
| r6g-target | 88-95% | 34 min | 110 | 172 |
As shown in the preceding table, the Im4gn and R6g clusters run at very high CPU usage, which triggers admission control and causes document rejections.
The OR1 cluster sustains CPU usage below 80 percent, which is a very good target.
Things to keep in mind:
- In production, don’t forget to retry indexing with exponential backoff to avoid dropping unindexed documents because of intermittent rejections.
- The bulk indexing operation returns 200 OK but can have partial failures. The body of the response must be checked to validate that all the documents were indexed successfully (see the sketch after this list).
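The following is a minimal sketch of both practices using the OpenSearch Rust Client (the opensearch crate), serde_json, and Tokio. The data stream name, document shape, and retry parameters are illustrative, and the client construction is simplified; our test program signs requests with IAM credentials against the domain endpoint instead.

```rust
use std::time::Duration;

use opensearch::{http::request::JsonBody, BulkParts, OpenSearch};
use serde_json::{json, Value};

/// Send one bulk request, retrying with exponential backoff on HTTP 429
/// rejections and failing if any individual document was not indexed.
async fn bulk_with_retry(
    client: &OpenSearch,
    target: &str,
    docs: &[Value],
) -> Result<(), Box<dyn std::error::Error>> {
    let mut delay = Duration::from_millis(500);
    for attempt in 1..=5 {
        // Rebuild the bulk body on every attempt. Data streams only accept
        // the `create` action, so each document is preceded by a `create` line.
        let mut body: Vec<JsonBody<Value>> = Vec::with_capacity(docs.len() * 2);
        for doc in docs {
            body.push(json!({ "create": {} }).into());
            body.push(doc.clone().into());
        }

        let response = client
            .bulk(BulkParts::Index(target))
            .body(body)
            .send()
            .await?;

        // Intermittent rejections (admission control) surface as HTTP 429:
        // back off exponentially and retry instead of dropping the batch.
        if response.status_code().as_u16() == 429 {
            tokio::time::sleep(delay).await;
            delay *= 2;
            continue;
        }

        // A 200 OK response can still contain per-document failures, so the
        // body must be inspected; failed items are listed under `items`.
        let response_body = response.json::<Value>().await?;
        if response_body["errors"].as_bool().unwrap_or(true) {
            return Err(format!("bulk attempt {attempt} had partial failures").into());
        }
        return Ok(());
    }
    Err("bulk request was still rejected after 5 attempts".into())
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Default client (http://localhost:9200) for brevity.
    let client = OpenSearch::default();
    let docs = vec![json!({ "@timestamp": "2023-11-29T00:00:00Z", "body": "hello" })];
    bulk_with_retry(&client, "logs-benchmark", &docs).await
}
```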
By reducing the number of parallel jobs from 75 to 40, while maintaining 750 batches of 4,000 documents per client (a total of 120 million documents), we get the following:
| Cluster | CPU usage | Time taken | Indexing speed (kdoc/s) | Indexing speed (MiB/s) |
|---|---|---|---|---|
| or1-target | 25-60% | 20 min | 100 | 156 |
| im4gn-target | 75-93% | 19 min | 105 | 164 |
| r6g-target | 77-90% | 20 min | 100 | 156 |
Throughput and CPU usage decreased, but CPU usage remains high on the Im4gn and R6g clusters, while the OR1 cluster still has CPU capacity to spare.
Configuration 2: 6 data nodes, 48 primary shards, 1 replica
For this configuration, we increased the number of primary shards from 12 to 48, which provides more parallelism for indexing:
| Cluster | CPU usage | Time taken | Indexing speed (kdoc/s) | Indexing speed (MiB/s) |
|---|---|---|---|---|
| or1-target | 60-80% | 21 min | 178 | 278 |
| im4gn-target | 67-95% | 34 min | 110 | 172 |
| r6g-target | 70-88% | 37 min | 101 | 158 |
The indexing throughput increased for the OR1, but the Im4gn and R6g didn’t see an improvement because their CPU utilization is still very high.
Reducing the parallel jobs to 40 while keeping 48 primary shards, we can see that the OR1 cluster is under slightly more pressure than with 12 primary shards (its minimum CPU usage is higher), and the CPU usage for the R6g looks much better. For the Im4gn, however, the CPU usage is still high.
| Cluster | CPU usage | Time taken | Indexing speed (kdoc/s) | Indexing speed (MiB/s) |
|---|---|---|---|---|
| or1-target | 40-60% | 16 min | 125 | 195 |
| im4gn-target | 80-94% | 18 min | 111 | 173 |
| r6g-target | 70-80% | 21 min | 95 | 148 |
Configuration 3: 12 data nodes, 96 primary shards, 1 replica
For this configuration, we started with the original configuration and added more compute capacity, moving from 6 nodes to 12 and increasing the number of primary shards to 96.
| Cluster | CPU usage | Time taken | Indexing speed (kdoc/s) | Indexing speed (MiB/s) |
|---|---|---|---|---|
| or1-target | 40-60% | 18 min | 208 | 325 |
| im4gn-target | 74-90% | 20 min | 187 | 293 |
| r6g-target | 60-78% | 24 min | 156 | 244 |
The OR1 and the R6g perform well with CPU usage below 80 percent, with the OR1 delivering 33 percent better performance at about 30 percent lower CPU usage than the R6g.
The Im4gn is still at 90 percent CPU, but the performance is also very good.
Reducing the number of parallel jobs from 75 to 40, we get:
| Cluster | CPU usage | Time taken | Indexing speed (kdoc/s) | Indexing speed (MiB/s) |
|---|---|---|---|---|
| or1-target | 40-60% | 11 min | 182 | 284 |
| im4gn-target | 70-90% | 11 min | 182 | 284 |
| r6g-target | 60-77% | 12 min | 167 | 260 |
With 40 parallel jobs instead of 75, the OR1 and Im4gn instances are on par, and the R6g is very close behind.
Interpretation
The OR1 instances speed up indexing because only the primary shards need to be written, while replicas are produced by copying segments. OR1 instances deliver better performance than Im4gn and R6g instances while also showing lower CPU usage, which leaves room for additional load (search) or a reduction in cluster size.
We can compare a 6-node OR1 cluster with 48 primary shards, indexing at 178 thousand documents per second, to a 12-node Im4gn cluster with 96 primary shards, indexing at 187 thousand documents per second or to a 12-node R6g cluster with 96 primary shards, indexing at 156 thousand documents per second.
The OR1 performs almost as well as the larger Im4gn cluster, and better than the larger R6g cluster.
How to size when using OR1 instances
As you can see in the results, OR1 instances can process more data at higher throughput rates. However, when the number of primary shards increases, they don't perform as well because of the remote-backed storage.
To get the best throughput from the OR1 instance type, you can use larger batch sizes than usual and an ISM policy that rolls over your index based on size, which effectively limits the number of primary shards per index. You can also increase the number of connections, because the OR1 instance type can handle more parallelism.
OR1 instances don't directly impact search performance. However, as you can see, CPU usage is lower on OR1 instances than on Im4gn and R6g instances. That enables either more activity (search and ingest) or a reduction in instance size or count, which results in lower cost.
Conclusion and recommendations for OR1
The new OR1 instance type gives you more indexing power than the other instance types. This is important for indexing-heavy workloads, where you index in daily batches or sustain a high throughput.
The OR1 instance type also enables cost reduction, because its price-performance is 30 percent better than that of existing instance types. When you add more than one replica, the price per unit of performance decreases further, because CPU usage is barely impacted on an OR1 instance, while other instance types would see their indexing throughput decrease.
Check out the complete instructions for optimizing your workload for indexing in this AWS re:Post article.
About the author
Cédric Pelvet is a Principal AWS Specialist Solutions Architect. He helps customers design scalable solutions for real-time data and search workloads. In his free time, he enjoys learning new languages and practicing the violin.