Accelerate your ML lifecycle using the new and improved Amazon SageMaker Python SDK – Part 2: ModelBuilder
In Part 1 of this series, we introduced the newly launched ModelTrainer class on the Amazon SageMaker Python SDK and its benefits, and showed you how to fine-tune a Meta Llama 3.1 8B model on a custom dataset. In this post, we look at the enhancements to the ModelBuilder class, which lets you seamlessly deploy a model from ModelTrainer to a SageMaker endpoint, and provides a single interface for multiple deployment configurations.
In November 2023, we launched the ModelBuilder class (see Package and deploy models faster with new tools and guided workflows in Amazon SageMaker and Package and deploy classical ML and LLMs easily with Amazon SageMaker, part 1: PySDK Improvements), which reduces the complexity of the initial setup for a SageMaker endpoint, such as creating the endpoint configuration, choosing the container, and handling serialization and deserialization, and helps you create a deployable model in a single step. The recent update enhances the usability of the ModelBuilder class for a wide range of use cases, particularly in the rapidly evolving field of generative AI. In this post, we deep dive into the enhancements made to the ModelBuilder class, and show you how to seamlessly deploy the fine-tuned model from Part 1 to a SageMaker endpoint.
Improvements to the ModelBuilder class
We’ve made the following usability improvements to the ModelBuilder class:
- Seamless transition from training to inference – ModelBuilder now integrates directly with SageMaker training interfaces to make sure that the correct file path to the latest trained model artifact is automatically computed, simplifying the workflow from model training to deployment.
- Unified inference interface – Previously, the SageMaker SDK offered separate interfaces and workflows for different types of inference, such as real-time, batch, serverless, and asynchronous inference. To simplify the model deployment process and provide a consistent experience, we have enhanced ModelBuilder to serve as a unified interface that supports multiple inference types.
- Ease of development, testing, and production handoff – ModelBuilder now supports local mode testing, so you can debug and test your processing and inference scripts with faster local iteration, without requiring a container, and a new function outputs the latest container image for a given framework so you don’t have to update the code each time a new LMI release comes out.
- Customizable inference preprocessing and postprocessing – ModelBuilder now allows you to customize preprocessing and postprocessing steps for inference, for example with scripts that filter content or remove personally identifiable information (PII). Encapsulating these steps within the model configuration streamlines the deployment process and simplifies the management and deployment of models with specific inference requirements.
- Benchmarking support – The new benchmarking support in ModelBuilder empowers you to evaluate deployment options—like endpoints and containers—based on key performance metrics such as latency and cost. With the introduction of a Benchmarking API, you can test scenarios and make informed decisions, optimizing your models for peak performance before production. This improves efficiency and helps keep deployments cost-effective.
In the following sections, we discuss these improvements in more detail and demonstrate how to customize, test, and deploy your model.
Seamless deployment from ModelTrainer class
ModelBuilder integrates seamlessly with the ModelTrainer class; you can simply pass the ModelTrainer object that was used for training the model directly to ModelBuilder in the model parameter. In addition to the ModelTrainer, ModelBuilder also supports the Estimator class and the result of the SageMaker Core TrainingJob.create() function, and automatically parses the model artifacts to create a SageMaker Model object. With resource chaining, you can build and deploy the model as shown in the following example. If you followed Part 1 of this series to fine-tune a Meta Llama 3.1 8B model, you can pass the model_trainer object as follows:
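The following is a minimal sketch of that handoff. The model_trainer object comes from Part 1; the execution role (role), the sample payloads used by SchemaBuilder, and the instance type are assumptions you would adapt to your environment:

```python
from sagemaker.serve.builder.model_builder import ModelBuilder
from sagemaker.serve.builder.schema_builder import SchemaBuilder

# Example request and response payloads used to infer serialization (assumed format)
sample_input = {"inputs": "What is Amazon SageMaker?", "parameters": {"max_new_tokens": 128}}
sample_output = [{"generated_text": "Amazon SageMaker is a fully managed service for ML."}]

# Pass the ModelTrainer object from Part 1 directly; ModelBuilder resolves the
# path to the latest trained model artifact automatically.
model_builder = ModelBuilder(
    model=model_trainer,             # ModelTrainer object from Part 1
    role_arn=role,                   # your SageMaker execution role (assumed)
    schema_builder=SchemaBuilder(sample_input, sample_output),
    instance_type="ml.g5.12xlarge",  # hypothetical; size to your model
)
model = model_builder.build()
```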
Customize the model using InferenceSpec
The InferenceSpec class allows you to customize the model by providing custom logic to load and invoke the model, and specify any preprocessing or postprocessing logic as needed. For SageMaker endpoints, preprocessing and postprocessing scripts are often used as part of the inference pipeline to handle tasks that are required before and after the data is sent to the model for predictions, especially in the case of complex workflows or non-standard models. The following example shows how you can specify the custom logic using InferenceSpec:
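Here is a minimal sketch of a custom InferenceSpec that loads the fine-tuned model with Hugging Face Transformers; the request format, the pipeline task, and the generation parameters are assumptions for illustration:

```python
from sagemaker.serve.spec.inference_spec import InferenceSpec
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

class LlamaInferenceSpec(InferenceSpec):
    def load(self, model_dir: str):
        # Load the fine-tuned model artifacts that ModelBuilder stages in model_dir
        model = AutoModelForCausalLM.from_pretrained(model_dir)
        tokenizer = AutoTokenizer.from_pretrained(model_dir)
        return pipeline("text-generation", model=model, tokenizer=tokenizer)

    def invoke(self, input_object: dict, model):
        # Preprocessing: extract the prompt (a PII filter could be applied here)
        prompt = input_object["inputs"]
        # Prediction
        outputs = model(prompt, max_new_tokens=128)
        # Postprocessing: return only the generated text
        return {"generated_text": outputs[0]["generated_text"]}

inf_spec = LlamaInferenceSpec()
```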
Test using local and in process mode
Deploying a trained model to a SageMaker endpoint involves creating a SageMaker model and configuring the endpoint. This includes the inference script, any serialization or deserialization required, the model artifact location in Amazon Simple Storage Service (Amazon S3), the container image URI, the right instance type and count, and more. Machine learning (ML) practitioners need to iterate over these settings before finally deploying the endpoint to SageMaker for inference. ModelBuilder offers two modes for quick prototyping:
- In process mode – In this case, the inferences are made directly within the same inference process. This is highly useful for quickly testing the inference logic provided through InferenceSpec and provides immediate feedback during experimentation.
- Local mode – The model is deployed and run as a local container. This is achieved by setting the mode to LOCAL_CONTAINER when you build the model. This is helpful to mimic the same environment as the SageMaker endpoint. Refer to the following notebook for an example.
The following code is an example of running inference in process mode, with a custom InferenceSpec:
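A sketch, reusing the inf_spec and sample payloads defined in the earlier examples:

```python
from sagemaker.serve.builder.model_builder import ModelBuilder
from sagemaker.serve.builder.schema_builder import SchemaBuilder
from sagemaker.serve.mode.function_pointers import Mode

model_builder = ModelBuilder(
    inference_spec=inf_spec,  # the custom InferenceSpec defined earlier
    schema_builder=SchemaBuilder(sample_input, sample_output),
    mode=Mode.IN_PROCESS,     # run inference within the current Python process
)
model = model_builder.build()
predictor = model_builder.deploy()  # no container or remote endpoint is created
predictor.predict(sample_input)
```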
As the next step, you can test it in local container mode as shown in the following code, by adding the image_uri. You will need to include the model_server argument when you include the image_uri.
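For example, a sketch assuming a Large Model Inference (LMI) image and the DJL model server; the ModelServer value depends on the container you choose, and import paths may vary slightly by SDK version:

```python
from sagemaker.serve.builder.model_builder import ModelBuilder, ModelServer
from sagemaker.serve.builder.schema_builder import SchemaBuilder
from sagemaker.serve.mode.function_pointers import Mode

model_builder = ModelBuilder(
    inference_spec=inf_spec,
    schema_builder=SchemaBuilder(sample_input, sample_output),
    mode=Mode.LOCAL_CONTAINER,             # run the model as a container on your machine
    image_uri=image_uri,                   # for example, the latest LMI image
    model_server=ModelServer.DJL_SERVING,  # required whenever image_uri is provided
)
model = model_builder.build()
predictor = model_builder.deploy()         # starts the container locally
predictor.predict(sample_input)
```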
Deploy the model
When testing is complete, you can now deploy the model to a real-time endpoint for predictions by updating the mode to Mode.SAGEMAKER_ENDPOINT and providing an instance type and count:
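A sketch; the endpoint name and instance type are placeholders, and the deploy parameter names may vary slightly by SDK version:

```python
from sagemaker.serve.builder.model_builder import ModelBuilder
from sagemaker.serve.builder.schema_builder import SchemaBuilder
from sagemaker.serve.mode.function_pointers import Mode

model_builder = ModelBuilder(
    model=model_trainer,
    role_arn=role,
    schema_builder=SchemaBuilder(sample_input, sample_output),
    mode=Mode.SAGEMAKER_ENDPOINT,    # deploy to a real-time SageMaker endpoint
    instance_type="ml.g5.12xlarge",  # hypothetical; choose for your model size
)
model = model_builder.build()
predictor = model_builder.deploy(
    endpoint_name="llama-3-1-8b-endpoint",  # hypothetical endpoint name
    initial_instance_count=1,
)
predictor.predict(sample_input)
```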
In addition to real-time inference, SageMaker supports serverless inference, asynchronous inference, and batch inference modes for deployment. You can also use InferenceComponents to abstract your models and assign CPU, GPU, accelerators, and scaling policies per model. To learn more, see Reduce model deployment costs by 50% on average using the latest features of Amazon SageMaker.
After you have the ModelBuilder object, you can deploy to any of these options simply by adding the corresponding inference configurations when deploying the model. By default, if the mode is not provided, the model is deployed to a real-time endpoint. The following are examples of other configurations:
- Deploy the model to a serverless endpoint:
```python
from sagemaker.serverless.serverless_inference_config import ServerlessInferenceConfig

predictor = model_builder.deploy(
    endpoint_name="serverless-endpoint",
    inference_config=ServerlessInferenceConfig(memory_size_in_mb=2048),
)
```
- Deploy the model to an asynchronous endpoint:
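A sketch; asynchronous inference stores responses in Amazon S3, and the output path below is a placeholder:

```python
from sagemaker.async_inference.async_inference_config import AsyncInferenceConfig

predictor = model_builder.deploy(
    endpoint_name="async-endpoint",
    inference_config=AsyncInferenceConfig(
        output_path="s3://<your-bucket>/async-output",  # placeholder S3 location for responses
    ),
)
```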
- Run a batch transform job for offline inference on a dataset:
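One way to do this is through the Transformer object of the built model; the S3 paths below are placeholders:

```python
# Create a batch transformer from the model built by ModelBuilder
transformer = model.transformer(
    instance_type="ml.g5.12xlarge",
    instance_count=1,
    output_path="s3://<your-bucket>/batch-output",  # placeholder output location
)
transformer.transform(
    data="s3://<your-bucket>/batch-input",          # placeholder input dataset
    content_type="application/json",
)
```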
- Deploy a multi-model endpoint using InferenceComponent:
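A sketch using ResourceRequirements to specify per-model compute for the inference component; the accelerator count, memory, and copy values are illustrative:

```python
from sagemaker.compute_resource_requirements.resource_requirements import ResourceRequirements

predictor = model_builder.deploy(
    endpoint_name="multi-model-endpoint",  # hypothetical endpoint name
    inference_config=ResourceRequirements(
        requests={
            "num_accelerators": 1,  # accelerators per copy of the model
            "memory": 8192,         # memory in MB per copy
            "copies": 1,            # number of copies to host
        },
        limits={},
    ),
)
```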
Clean up
If you created any endpoints while following this post, you will incur charges while they are up and running. As a best practice, delete any endpoints that are no longer required, either using the AWS Management Console or with the following code:
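For example, using the predictor returned by deploy:

```python
predictor.delete_model()     # remove the SageMaker model resource
predictor.delete_endpoint()  # delete the endpoint and its configuration
```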
Conclusion
In this two-part series, we introduced the ModelTrainer and the ModelBuilder enhancements in the SageMaker Python SDK. Both classes aim to reduce complexity and cognitive overhead for data scientists, providing you with a straightforward and intuitive interface to train and deploy models, both locally in your SageMaker notebooks and on remote SageMaker endpoints.
We encourage you to try out the SageMaker SDK enhancements (SageMaker Core, ModelTrainer, and ModelBuilder) by referring to the SDK documentation and sample notebooks on the GitHub repo, and let us know your feedback in the comments!
About the Authors
Durga Sury is a Senior Solutions Architect on the Amazon SageMaker team. Over the past 5 years, she has worked with multiple enterprise customers to set up a secure, scalable AI/ML platform built on SageMaker.
Shweta Singh is a Senior Product Manager in the Amazon SageMaker Machine Learning (ML) platform team at AWS, leading SageMaker Python SDK. She has worked in several product roles in Amazon for over 5 years. She has a Bachelor of Science degree in Computer Engineering and a Masters of Science in Financial Engineering, both from New York University.