Overview
MLOps Workload Orchestrator helps you streamline and enforce architecture best practices for machine learning (ML) model productionization. This AWS Solution is an extendable framework that provides a standard interface for managing ML pipelines for AWS ML services and third-party services.
The solution’s template allows you to train models, upload your trained models (also referred to as bring your own model [BYOM]), configure the pipeline orchestration, and monitor the pipeline's operations. By using this solution, you can increase your team’s agility and efficiency by allowing them to repeat successful processes at scale.
Benefits
Use the Amazon SageMaker Model Dashboard to view your solution-created Amazon SageMaker resources (such as models, endpoints, model cards, and batch transform jobs).
Technical details
You can automatically deploy this architecture using the implementation guide and the accompanying AWS CloudFormation template. To support multiple use cases and business needs, the solution provides two AWS CloudFormation templates:
- Use the single-account template to deploy all of the solution’s pipelines in the same AWS account. This option is suitable for experimentation, development, and small-scale production workloads.
- Use the multi-account template to provision multiple environments (for example, development, staging, and production) across different AWS accounts. This option improves governance and increases security and control of the ML pipeline’s deployment, enables safe experimentation and faster innovation, and keeps production data and workloads secure and available to help ensure business continuity.
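Whichever template you choose, launching it with the AWS CLI requires passing launch options in CloudFormation's ParameterKey/ParameterValue format. The sketch below shows a small helper for building that structure; the parameter names (`NotificationEmail`, `UseModelRegistry`, `ExistingS3Bucket`) are illustrative assumptions, not the template's actual keys, which are listed in the solution's implementation guide.

```python
import json

def build_stack_parameters(params: dict) -> list:
    """Convert a plain dict into the ParameterKey/ParameterValue list
    expected by `aws cloudformation create-stack --parameters`."""
    return [
        {"ParameterKey": key, "ParameterValue": str(value)}
        for key, value in params.items()
    ]

# Hypothetical launch options for the single-account template; the real
# parameter names are defined in the solution's CloudFormation template.
launch_options = {
    "NotificationEmail": "mlops-team@example.com",
    "UseModelRegistry": "Yes",
    "ExistingS3Bucket": "",  # empty: let the solution create the assets bucket
}

print(json.dumps(build_stack_parameters(launch_options), indent=2))
```

The resulting JSON can be saved to a file and passed to `aws cloudformation create-stack --parameters file://params.json`.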
Option 1 - Single-account deployment
Step 1
The Orchestrator (solution owner or DevOps engineer) launches the solution in the AWS account and selects the desired options (for example, using Amazon SageMaker model registry, or providing an existing Amazon Simple Storage Service [Amazon S3] bucket).
Step 2
The Orchestrator uploads the required assets for the target pipeline (for example, model artifact, training data, or custom algorithm zip file) into the S3 assets bucket. If using Amazon SageMaker model registry, the Orchestrator (or an automated pipeline) must register the model with the model registry.
Step 3a
The solution provisions a single account AWS CodePipeline by either sending an API call to Amazon API Gateway or by committing the mlops-config.json file to the Git repository.
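As an illustration, a minimal mlops-config.json for a real-time inference pipeline built on a SageMaker built-in algorithm might look like the following. The keys and values shown here are assumptions sketched from the solution's bring-your-own-model (BYOM) options; consult the implementation guide for the authoritative schema.

```json
{
  "pipeline_type": "byom_realtime_builtin",
  "model_framework": "xgboost",
  "model_framework_version": "1",
  "model_name": "my-model",
  "model_artifact_location": "path/to/model.tar.gz",
  "data_capture_location": "assets-bucket/datacapture",
  "inference_instance": "ml.m5.large",
  "endpoint_name": "my-model-endpoint"
}
```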
Step 3b
Depending on the pipeline type, the orchestrator AWS Lambda function packages the target AWS CloudFormation template and its parameters and configurations using the body of the API call or the mlops-config.json file, and then uses it as the source stage for the CodePipeline instance.
Step 4
The DeployPipeline stage takes the packaged CloudFormation template and its parameters and configurations and deploys the target pipeline into the same account.
Step 5
After the target pipeline is provisioned, users can access its functionalities. An Amazon Simple Notification Service (Amazon SNS) notification is sent to the email provided in the solution’s launch parameters.
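The API-driven provisioning path in Step 3a can be sketched as follows. The `/provisionpipeline` route and the request shape are assumptions for illustration; the actual routes are defined by the solution's API Gateway deployment, and a real call would also need authentication (for example, IAM SigV4 signing), which is omitted here.

```python
import json
import urllib.request

def build_provision_request(api_base_url: str, pipeline_config: dict) -> urllib.request.Request:
    """Build (but do not send) a POST request to the solution's
    provisioning endpoint. The '/provisionpipeline' route is an
    assumption for illustration."""
    return urllib.request.Request(
        url=f"{api_base_url}/provisionpipeline",
        data=json.dumps(pipeline_config).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_provision_request(
    "https://example.execute-api.us-east-1.amazonaws.com/prod",
    {"pipeline_type": "byom_realtime_builtin", "model_name": "my-model"},
)
print(req.full_url)
```

Sending the request (for example, with `urllib.request.urlopen(req)` plus SigV4 signing) would trigger the same provisioning flow as committing mlops-config.json to the Git repository.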
Option 2 - Multi-account deployment
Step 1
The Orchestrator (solution owner or DevOps engineer with admin access to the orchestrator account) provides the AWS Organizations information (for example, development, staging, and production organizational unit IDs and account numbers).
They also specify the desired options (for example, using SageMaker model registry, or providing an existing S3 bucket), and then launch the solution in their AWS account.
Step 2
The Orchestrator uploads the required assets for the target pipeline (for example, model artifact, training data, or custom algorithm zip file) into the S3 assets bucket in the AWS Orchestrator account. If using SageMaker model registry, the Orchestrator (or an automated pipeline) must register the model with the model registry.
Step 3a
The solution provisions a multi-account CodePipeline instance by either sending an API call to API Gateway or by committing the mlops-config.json file to the Git repository.
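In the multi-account case, the pipeline configuration must carry settings for each stage. Purely as an illustration (whether the solution expresses per-stage values this way is an assumption; check the implementation guide), an mlops-config.json might vary the inference instance size across accounts:

```json
{
  "pipeline_type": "byom_realtime_builtin",
  "model_framework": "xgboost",
  "model_framework_version": "1",
  "model_name": "my-model",
  "model_artifact_location": "path/to/model.tar.gz",
  "inference_instance": {
    "dev": "ml.t2.medium",
    "staging": "ml.m5.large",
    "prod": "ml.m5.xlarge"
  },
  "endpoint_name": "my-model-endpoint"
}
```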
Step 3b
Depending on the pipeline type, the orchestrator Lambda function packages the target CloudFormation template and its parameters and configurations for each stage using the body of the API call or the mlops-config.json file, and then uses it as the source stage for the CodePipeline instance.
Step 4
The DeployDev stage takes the packaged CloudFormation template and its parameters and configurations and deploys the target pipeline into the development account.
Step 5
After the target pipeline is provisioned into the development account, the developer can then iterate on the pipeline.
Step 6
After development is finished, the Orchestrator (or another authorized account) manually approves the DeployStaging action to move to the DeployStaging stage.
Step 7
The DeployStaging stage deploys the target pipeline into the staging account, using the staging configuration.
Step 8
Testers perform different tests on the deployed pipeline.
Step 9
After the pipeline passes quality tests, the Orchestrator can approve the DeployProd action.
Step 10
The DeployProd stage deploys the target pipeline (with production configurations) into the production account.
Step 11
The target pipeline is live in production. An SNS notification is sent to the email provided in the solution’s launch parameters.
Related content
In collaboration with the AWS Partner Solutions Architect and AWS Solutions Library teams, Cognizant built its MLOps Model Lifecycle Orchestrator solution on top of the MLOps Workload Orchestrator solution.