AWS Machine Learning Blog

Category: Artificial Intelligence

Automate fine-tuning of Llama 3.x models with the new visual designer for Amazon SageMaker Pipelines

In this post, we show you how to set up an automated LLM customization (fine-tuning) workflow so that the Llama 3.x models from Meta can provide high-quality summaries of SEC filings for financial applications. Fine-tuning lets you adapt LLMs for improved performance on your domain-specific tasks.

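As a rough sketch of what such a workflow looks like in code, the snippet below defines a single fine-tuning step with the SageMaker Python SDK; the training image URI, S3 paths, IAM role, and hyperparameters are placeholders, and the visual designer generates an equivalent pipeline definition for you without writing this code.

```python
# Minimal sketch (not the post's exact pipeline): one fine-tuning step wired into
# a SageMaker Pipeline with the SageMaker Python SDK. The image URI, IAM role,
# S3 paths, and hyperparameters below are placeholders to replace with your own.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

# Estimator that runs the fine-tuning container (image URI is a placeholder).
estimator = Estimator(
    image_uri="<llama3-fine-tuning-image-uri>",
    role=role,
    instance_count=1,
    instance_type="ml.g5.12xlarge",
    hyperparameters={"epochs": 3, "learning_rate": 1e-5},  # illustrative only
    output_path="s3://my-bucket/llama3-finetune/output",
    sagemaker_session=session,
)

# Training step over the SEC-filings dataset (S3 path is a placeholder).
finetune_step = TrainingStep(
    name="FineTuneLlama3",
    estimator=estimator,
    inputs={"training": TrainingInput("s3://my-bucket/sec-filings/train")},
)

# Register and start the pipeline; the visual designer builds this same
# definition through drag-and-drop instead of code.
pipeline = Pipeline(name="llama3-finetuning-pipeline", steps=[finetune_step])
pipeline.upsert(role_arn=role)
pipeline.start()
```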

Implement Amazon SageMaker domain cross-Region disaster recovery using custom Amazon EFS instances

In this post, we guide you through a step-by-step process to seamlessly migrate and safeguard your SageMaker domain, including all associated user profiles and files, from an active Region to a passive or active recovery Region.
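
One building block of such a setup is replicating the custom EFS file system into the recovery Region. The sketch below shows only that piece, using the EFS replication API; the file system ID and Regions are placeholders, and the full walkthrough also covers recreating the domain and user profiles.

```python
# Minimal sketch of one building block: cross-Region replication of the custom
# EFS file system that backs the SageMaker domain. The file system ID and
# Regions are placeholders.
import boto3

efs = boto3.client("efs", region_name="us-east-1")  # primary (active) Region

# Start replicating the source file system into the recovery Region.
resp = efs.create_replication_configuration(
    SourceFileSystemId="fs-0123456789abcdef0",   # placeholder
    Destinations=[{"Region": "us-west-2"}],      # passive/recovery Region
)

# The response describes the replication destinations and their status.
for dest in resp["Destinations"]:
    print(dest["Region"], dest["Status"])
```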

Amazon Bedrock Custom Model Import now generally available

We’re pleased to announce the general availability (GA) of Amazon Bedrock Custom Model Import. This feature empowers customers to import and use their customized models alongside existing foundation models (FMs) through a single, unified API.
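
As an illustration of the flow, the sketch below imports model artifacts from Amazon S3 with the CreateModelImportJob API and then calls the imported model through the same runtime InvokeModel API used for foundation models; the names, ARNs, S3 URI, and request body shown are assumptions to adapt to your own model.

```python
# Minimal sketch of the Custom Model Import flow: import model weights from S3,
# then call the imported model through the same InvokeModel API used for
# foundation models. Names, ARNs, and the S3 URI are placeholders.
import json
import boto3

bedrock = boto3.client("bedrock")
runtime = boto3.client("bedrock-runtime")

# 1. Import the customized model artifacts (e.g., fine-tuned weights) from S3.
job = bedrock.create_model_import_job(
    jobName="my-custom-llama-import",                  # placeholder
    importedModelName="my-custom-llama",                # placeholder
    roleArn="arn:aws:iam::123456789012:role/BedrockImportRole",
    modelDataSource={"s3DataSource": {"s3Uri": "s3://my-bucket/custom-model/"}},
)
print("Import job ARN:", job["jobArn"])

# 2. Once the import completes, invoke the imported model by its ARN with the
#    unified runtime API (the request body format depends on the model family).
response = runtime.invoke_model(
    modelId="arn:aws:bedrock:us-east-1:123456789012:imported-model/abc123",
    body=json.dumps({"prompt": "Summarize our Q3 results.", "max_gen_len": 256}),
)
print(json.loads(response["body"].read()))
```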

Deploy a serverless web application to edit images using Amazon Bedrock

In this post, we explore a sample solution that you can use to deploy an image editing application using AWS serverless services and generative AI services. We use Amazon Bedrock with an Amazon Titan foundation model (FM) to edit images using prompts.
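
To give a flavor of the core call behind such an application, here is a minimal sketch of prompt-based inpainting with the Amazon Titan Image Generator model on Amazon Bedrock; in the post's architecture this would run inside a serverless backend, and the file names, model ID, and request fields shown are assumptions based on the Titan image generation schema.

```python
# Minimal sketch of prompt-based image editing (inpainting) with the Amazon
# Titan Image Generator model on Amazon Bedrock. File names, the model ID, and
# the request fields are assumptions to adapt to your account and model version.
import base64
import json
import boto3

runtime = boto3.client("bedrock-runtime")

with open("input.png", "rb") as f:
    source_image = base64.b64encode(f.read()).decode("utf-8")

body = {
    "taskType": "INPAINTING",
    "inPaintingParams": {
        "image": source_image,
        "maskPrompt": "the car in the foreground",  # region to edit, by prompt
        "text": "a red vintage convertible",        # what to paint there
    },
    "imageGenerationConfig": {"numberOfImages": 1},
}

response = runtime.invoke_model(
    modelId="amazon.titan-image-generator-v1",
    body=json.dumps(body),
)
result = json.loads(response["body"].read())

# The model returns base64-encoded images.
with open("edited.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))
```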

Brilliant words, brilliant writing: Using AWS AI chips to quickly deploy Meta Llama 3-powered applications

In this post, we show how to use an Amazon EC2 Inf2 instance to cost-effectively deploy multiple industry-leading LLMs on AWS Inferentia2, a purpose-built AWS AI chip. This helps customers quickly test the models and expose an API interface for performance benchmarking and downstream application calls.
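
As one possible way to get a model running on Inferentia2 (not necessarily the exact stack used in the post), the sketch below compiles and runs a Llama 3 model with the Hugging Face Optimum Neuron library on an Inf2 instance; the model ID, core count, and sequence length are illustrative, and you would typically put an API layer (for example, a lightweight HTTP server) in front of it for benchmarking and downstream calls.

```python
# Minimal sketch (under assumptions) of compiling and running a Llama 3 model
# on an Inf2 instance with Hugging Face Optimum Neuron; the model ID, batch
# size, sequence length, and core count are illustrative values.
from optimum.neuron import NeuronModelForCausalLM
from transformers import AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumes access to the weights

# export=True compiles the model for the NeuronCores on the Inf2 instance.
model = NeuronModelForCausalLM.from_pretrained(
    model_id,
    export=True,
    batch_size=1,
    sequence_length=2048,
    num_cores=2,
    auto_cast_type="bf16",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer(
    "Write a short product description for a smart kettle.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```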

Use Amazon SageMaker Studio with a custom file system in Amazon EFS

In this post, we explore three scenarios demonstrating the versatility of integrating Amazon EFS with SageMaker Studio. These scenarios highlight how Amazon EFS can provide a scalable, secure, and collaborative data storage solution for data science teams.
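
As a hedged illustration of one such scenario, the sketch below attaches an existing EFS file system to a Studio domain so users can mount a shared path in their spaces; the domain ID, file system ID, and path are placeholders, the field names are assumed from the SageMaker UpdateDomain API, and the file system must already have mount targets reachable from the domain's VPC.

```python
# Minimal sketch (field names assumed from the SageMaker UpdateDomain API) of
# attaching an existing EFS file system to a Studio domain so users can mount
# it in their spaces. The domain ID, file system ID, and path are placeholders.
import boto3

sm = boto3.client("sagemaker")

sm.update_domain(
    DomainId="d-xxxxxxxxxxxx",  # placeholder Studio domain
    DefaultUserSettings={
        "CustomFileSystemConfigs": [
            {
                "EFSFileSystemConfig": {
                    "FileSystemId": "fs-0123456789abcdef0",  # shared team EFS
                    "FileSystemPath": "/team-data",
                }
            }
        ]
    },
)
```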

Summarize call transcriptions securely with Amazon Transcribe and Amazon Bedrock Guardrails

In this post, we show you how to use Amazon Transcribe to get near real-time call transcriptions, which are then sent to Amazon Bedrock for summarization and sensitive data redaction. We walk through an architecture that uses AWS Step Functions to orchestrate the process, providing seamless integration and efficient processing.
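
To illustrate just the summarization step, the sketch below assumes the transcript text has already been produced by Amazon Transcribe and that a Step Functions state invokes this code; the model ID and guardrail identifier are placeholders, and the attached guardrail is what handles sensitive data redaction.

```python
# Minimal sketch of the summarization step only: the transcript is assumed to
# come from Amazon Transcribe, and in the post's architecture this call would
# be invoked by an AWS Step Functions state. Model and guardrail IDs are
# placeholders.
import boto3

runtime = boto3.client("bedrock-runtime")

transcript = "Agent: Thanks for calling... Customer: My card ending 4242 ..."

response = runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model
    messages=[{
        "role": "user",
        "content": [{"text": f"Summarize this call transcript:\n\n{transcript}"}],
    }],
    # The guardrail redacts or blocks sensitive data (e.g., card numbers, PII).
    guardrailConfig={
        "guardrailIdentifier": "gr-0123456789ab",  # placeholder
        "guardrailVersion": "1",
    },
)
print(response["output"]["message"]["content"][0]["text"])
```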