The Imitation Game: Encapsulate Your OSB Workload at Scale
Ian Hoang & Govind Kamat, Amazon Web Services

While basic regression testing in OpenSearch is straightforward, accurately replicating production workloads for performance testing remains a significant challenge. When building custom benchmarking solutions, organizations struggle with three data-related requirements: ensuring that test data is representative, secure, and scalable. OpenSearch Benchmark (OSB) introduces two game-changing features that transform how these challenges are approached. Through synthetic data generation, users can create realistic, privacy-compliant datasets that mirror production patterns. When paired with OSB's data streaming capability, users can build dynamic, scalable workloads that reflect real-world scenarios.

In this talk, we'll cover how these powerful features help users:

- Generate production-like data without compromising sensitive information
- Create representative workloads at terabyte scale that match your desired use cases
- Make data-driven decisions about cluster configuration and optimization

Whether you're a user, developer, or solutions architect, you'll learn how to leverage these features to build reliable and meaningful performance tests for your OpenSearch deployments.
