AWS News Blog

Build faster, more cost-efficient, highly accurate models with Amazon Bedrock Model Distillation (preview)

Today, we’re announcing the preview of Amazon Bedrock Model Distillation. It automates the process of creating a distilled model for your specific use case by generating responses from a large foundation model (FM), called a teacher model, and using them to fine-tune a smaller FM, called a student model. It applies data synthesis techniques to improve the responses from the teacher model. Amazon Bedrock then hosts the final distilled model for inference, giving you a faster and more cost-efficient model with accuracy close to that of the teacher model for your use case.

Customers are excited to use the most powerful and accurate FMs on Amazon Bedrock for their generative AI applications. But for some use cases, the latency associated with these models isn’t ideal. In addition, customers are looking for better price performance as they scale their generative AI applications to many billions of user interactions. To reduce latency and be more cost-efficient, customers are turning to smaller models. However, for some use cases, smaller models can’t provide optimal accuracy, and fine-tuning them requires an additional skillset: creating the high-quality labeled datasets needed to increase model accuracy for customers’ use cases.

With Amazon Bedrock Model Distillation, you can increase the accuracy of a smaller student model so that it mimics a higher-performance teacher model through a process of knowledge transfer. By transferring knowledge from a teacher model of your choice to a student model in the same family, you can create distilled models that are up to five times faster and up to 75 percent less expensive than the original large models, with less than two percent accuracy loss for use cases such as Retrieval Augmented Generation (RAG).

How does it work?
Amazon Bedrock Model Distillation generates responses from the teacher model, improves the generated responses by applying proprietary data synthesis, and fine-tunes the student model on the result.

Amazon Bedrock employs various data synthesis techniques to enhance response generation from the teacher model and create high-quality fine-tuning datasets. These techniques are tailored to specific use cases. For instance, Amazon Bedrock may augment the training dataset by generating similar prompts, effectively increasing the volume of the fine-tuning dataset.

Alternatively, it can produce high-quality teacher responses by using provided prompt-response pairs as golden examples. At preview, Amazon Bedrock Model Distillation supports Anthropic, Meta, and Amazon models.

Get started with Amazon Bedrock Model Distillation
To get started, go to the Amazon Bedrock console and choose Custom models in the left navigation pane. Now you have three customization methods: Fine-tuning, Distillation, and Continued pre-training.

Choose Create Distillation job to start fine-tuning your model using model distillation.

Enter your distilled model name and job name.

Then, choose the teacher model and, based on your choice of the teacher model, select a student model from the list of available student models. The teacher and the student model must be from the same family. For example, if you choose the Meta Llama 3.1 405B Instruct model as the teacher model, you can choose only the Llama 3.1 70B Instruct or 8B Instruct model as the student model.

To generate synthetic data, set the value of Max response length, an inference parameter that determines the maximum length of the responses generated by the teacher model. Choose the distillation input dataset located in your Amazon Simple Storage Service (Amazon S3) bucket. This input dataset contains the prompts, or golden prompt-response pairs, for your use case. The input files must be in the dataset format required for your model. To learn more, visit Prepare the datasets in the Amazon Bedrock User Guide.
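To illustrate, here is a minimal sketch of what writing a prompt-only training record as JSON Lines could look like in Python. The schemaVersion value and overall record shape are assumptions based on the conversation-style format in the user guide, so check Prepare the datasets for the exact format your model requires.

import json

# A hypothetical prompt-only record in a conversation-style format; the
# 'schemaVersion' value and field names are assumptions -- see "Prepare
# the datasets" in the Amazon Bedrock User Guide for the exact format.
record = {
    'schemaVersion': 'bedrock-conversation-2024',
    'system': [{'text': 'You are a helpful assistant.'}],
    'messages': [
        {
            'role': 'user',
            'content': [{'text': 'What is model distillation in generative AI?'}]
        }
    ]
}

# Distillation input datasets are JSON Lines files: one record per line
with open('distillation-input.jsonl', 'w') as f:
    f.write(json.dumps(record) + '\n')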

Then, after setting up the Amazon S3 location to store the distillation output metrics data and granting Amazon Bedrock permissions to write to Amazon S3 on your behalf, choose Create Distillation job.

After the distillation job is created successfully, you can track the training progress on the Jobs tab, and the model will be available on the Models tab.

Using production data with Amazon Bedrock Model Distillation
If you want to reuse your production data for distillation and skip generating teacher responses again, you can do so by turning on model invocation logging to collect invocation logs, model input data, and model output data for all Amazon Bedrock invocations in your AWS account. Adding request metadata to your invocations helps you easily filter the invocation logs at a later point.

import boto3
from pprint import pprint

# Amazon Bedrock Runtime client used to invoke the model
bedrock_runtime_client = boto3.client('bedrock-runtime')

request_params = {
    'modelId': 'meta.llama3-1-405b-instruct-v1:0',
    'messages': [
        {
            'role': 'user',
            'content': [
                {
                    'text': 'What is model distillation in generative AI?'
                }
            ]
        }
    ],
    # Request metadata is stored with the invocation logs and can be
    # used later to filter the logs for distillation
    'requestMetadata': {
        'ProjectName': 'myLlamaDistilledModel',
        'CodeName': 'myDistilledCode'
    }
}
response = bedrock_runtime_client.converse(**request_params)
pprint(response)
---
'output': {'message': {'content': [{'text': '\n\n'
    'Model distillation is a technique in generative AI that involves training '
    'a smaller, more efficient model (the "student") to mimic the behavior of '
    'a larger, more complex model (the "teacher"). The goal of model '
    'distillation is to transfer the knowledge and capabilities of the teacher '
    'model to the student model, allowing the student to perform similarly '
    'well on a given task, but with much less computational resources and '
    'memory.\n\n'}]}}

Next, when using Amazon Bedrock Model Distillation, select a teacher model whose accuracy you want to target for your use case and a student model that you want to fine-tune. Then give Amazon Bedrock access to read your invocation logs. Here, you can specify request metadata filters so that only the logs that are relevant for your use case are read to fine-tune the student model. If you want Amazon Bedrock to reuse the responses from the invocation logs, the teacher model selected for distillation must be the same as the model used in the logs.
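To sketch how this could look through the SDK, the snippet below shows a hypothetical trainingDataConfig that points a distillation job at invocation logs instead of an S3 prompt dataset and filters the logs by the request metadata added earlier. The invocationLogsConfig field names and the filter syntax are assumptions for illustration; verify them against the Amazon Bedrock documentation before use.

# A sketch of a trainingDataConfig that reads from invocation logs instead
# of a prompt dataset; the field names and filter syntax are assumptions --
# verify against the Amazon Bedrock User Guide.
training_data_config = {
    'invocationLogsConfig': {
        # S3 location where model invocation logging delivers the logs
        'invocationLogSource': {'s3Uri': 's3://amzn-s3-demo-bucket/invocation-logs/'},
        # Only use logs tagged with the request metadata added earlier
        'requestMetadataFilters': {
            'equals': {'ProjectName': 'myLlamaDistilledModel'}
        },
        # Reuse the logged teacher responses instead of regenerating them
        'usePromptResponse': True
    }
}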

Inference from your distilled model
Before using the distilled model, you need to purchase Provisioned Throughput for Amazon Bedrock and then use the resulting distilled model for inference. When you purchase Provisioned Throughput, you can select a commitment term, choose the number of model units, and check estimated hourly, daily, and monthly costs.
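If you prefer to work from code, the sketch below shows one way this could look with the AWS SDK for Python (Boto3): purchasing no-commitment Provisioned Throughput for the distilled model and then invoking it through the Converse API. The custom model ARN and names are placeholders.

import boto3

bedrock = boto3.client('bedrock')
bedrock_runtime = boto3.client('bedrock-runtime')

# Purchase Provisioned Throughput for the distilled model; omitting
# commitmentDuration requests no-commitment (hourly) throughput.
# The custom model ARN below is a placeholder.
provisioned = bedrock.create_provisioned_model_throughput(
    provisionedModelName='my-distilled-model-pt',
    modelId='arn:aws:bedrock:us-east-1:111122223333:custom-model/my-distilled-model',
    modelUnits=1
)

# Once the provisioned model is InService, invoke it through its ARN
response = bedrock_runtime.converse(
    modelId=provisioned['provisionedModelArn'],
    messages=[{'role': 'user',
               'content': [{'text': 'What is model distillation in generative AI?'}]}]
)
print(response['output']['message']['content'][0]['text'])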

You can complete the model distillation job using AWS APIs, AWS SDKs, or the AWS Command Line Interface (AWS CLI). To learn more about using the AWS CLI, visit Code samples for model customization in the AWS documentation.
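As an illustration, the following sketch shows how a distillation job might be created with Boto3 using an S3 prompt dataset. The customizationConfig shape, the teacherModelConfig field names, and all ARNs and S3 URIs are assumptions for illustration; see Code samples for model customization for the authoritative version.

import boto3

bedrock = boto3.client('bedrock')

# A sketch of creating a distillation job through the model customization
# API; the customizationConfig shape and all ARNs and S3 URIs are
# illustrative assumptions -- see the Amazon Bedrock documentation.
response = bedrock.create_model_customization_job(
    jobName='my-distillation-job',
    customModelName='my-distilled-model',
    roleArn='arn:aws:iam::111122223333:role/MyBedrockDistillationRole',
    customizationType='DISTILLATION',
    # Student model to fine-tune
    baseModelIdentifier='arn:aws:bedrock:us-east-1::foundation-model/meta.llama3-1-70b-instruct-v1:0',
    customizationConfig={
        'distillationConfig': {
            'teacherModelConfig': {
                # Teacher model that generates the synthetic responses
                'teacherModelIdentifier': 'arn:aws:bedrock:us-east-1::foundation-model/meta.llama3-1-405b-instruct-v1:0',
                'maxResponseLengthForInference': 1000
            }
        }
    },
    trainingDataConfig={'s3Uri': 's3://amzn-s3-demo-bucket/distillation-input.jsonl'},
    outputDataConfig={'s3Uri': 's3://amzn-s3-demo-bucket/distillation-output/'}
)
print(response['jobArn'])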

Things to know
Here are a few important things to know.

  • Model distillation aims to increase the accuracy of the student model to match the performance of the teacher model for your specific use case. Before you begin model distillation, we recommend that you evaluate different teacher models and select the one that works well for your use case.
  • We recommend optimizing your prompts for your use case until you find the teacher model’s accuracy acceptable, and then submitting those prompts as the distillation input data.
  • To choose a corresponding student model to fine-tune, evaluate the latency profiles of different student model options for your use case. The final distilled model will have the same latency profile as the student model that you select.
  • If a specific student model already performs well for your use case, then we recommend using the student model as is instead of creating a distilled model.

Join the preview!
Amazon Bedrock Model Distillation is now available in preview in the US East (N. Virginia) and US West (Oregon) AWS Regions. Check the full Region list for future updates. To learn more, visit Model Distillation in the Amazon Bedrock User Guide.

During model distillation, you pay for the synthetic data generated by the teacher model and for fine-tuning the student model. After the distilled model is created, you pay a monthly cost to store it. Inference from the distilled model is charged through Provisioned Throughput, per hour per model unit. To learn more, visit the Amazon Bedrock Pricing page.

Give Amazon Bedrock Model Distillation a try in the Amazon Bedrock console today and send feedback to AWS re:Post for Amazon Bedrock or through your usual AWS Support contacts.

Channy

Channy Yun (윤석찬)

Channy is a Principal Developer Advocate for AWS cloud. As an open web enthusiast and blogger at heart, he loves community-driven learning and sharing of technology.