AWS Machine Learning Blog
Category: Security
Security best practices to consider while fine-tuning models in Amazon Bedrock
In this post, we implement secure fine-tuning jobs in Amazon Bedrock, which is crucial for protecting sensitive data and maintaining the integrity of your AI models. By following the best practices outlined in the post, including proper IAM role configuration, encryption at rest and in transit, and network isolation, you can significantly strengthen the security posture of your fine-tuning processes.
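As a rough illustration of the controls this post covers, the sketch below assembles request parameters for a Bedrock model customization job that combines a scoped IAM role, a customer managed KMS key for encryption at rest, and VPC network isolation. All ARNs, bucket names, and subnet/security group IDs are hypothetical placeholders; in practice you would pass the dict to `boto3.client("bedrock").create_model_customization_job(**params)` with real values.

```python
# Sketch: parameters for a secured Amazon Bedrock fine-tuning job.
# Every ARN, ID, and bucket name below is a hypothetical placeholder.

def build_secure_customization_params() -> dict:
    return {
        "jobName": "secure-fine-tune-demo",
        "customModelName": "my-tuned-model",
        # IAM role scoped to only the S3 prefixes and KMS key the job needs
        "roleArn": "arn:aws:iam::111122223333:role/BedrockFineTuneRole",
        "baseModelIdentifier": "amazon.titan-text-express-v1",
        # Encrypt the resulting custom model with a customer managed KMS key
        "customModelKmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
        "trainingDataConfig": {"s3Uri": "s3://my-training-bucket/data/train.jsonl"},
        "outputDataConfig": {"s3Uri": "s3://my-output-bucket/results/"},
        # Network isolation: run the job inside your own VPC
        "vpcConfig": {
            "subnetIds": ["subnet-0abc1234"],
            "securityGroupIds": ["sg-0def5678"],
        },
    }

params = build_secure_customization_params()
# boto3.client("bedrock").create_model_customization_job(**params)  # requires AWS credentials
print(sorted(params))
```

The point of the sketch is that the security controls are declared up front in the job request, so a misconfigured job fails fast rather than running unencrypted or over the public network.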
Video security analysis for privileged access management using generative AI and Amazon Bedrock
In this post, we show you an innovative solution to a challenge faced by security teams in highly regulated industries: the efficient security analysis of vast amounts of video recordings from Privileged Access Management (PAM) systems. We demonstrate how you can use Anthropic’s Claude 3 family of models and Amazon Bedrock to perform the complex task of analyzing video recordings of server console sessions and perform queries to highlight any potential security anomalies.
Efficiently build and tune custom log anomaly detection models with Amazon SageMaker
In this post, we walk you through the process of building an automated mechanism using Amazon SageMaker to process your log data, run training iterations over it to obtain the best-performing anomaly detection model, and register it with the Amazon SageMaker Model Registry for your customers to use.
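To make the final registration step concrete, here is a minimal sketch of the parameters for registering a winning model version in the SageMaker Model Registry. The group name, S3 model location, and container image URI are hypothetical; with real values you would pass the dict to `boto3.client("sagemaker").create_model_package(**params)`.

```python
# Sketch: registering a best-performing anomaly detection model with the
# SageMaker Model Registry. Names, URIs, and the image below are hypothetical.

def build_model_package_params(group_name: str, model_data_url: str, image_uri: str) -> dict:
    return {
        "ModelPackageGroupName": group_name,
        "ModelPackageDescription": "Best log anomaly detection model from tuning",
        # Gate deployment behind an explicit human approval step
        "ModelApprovalStatus": "PendingManualApproval",
        "InferenceSpecification": {
            "Containers": [{"Image": image_uri, "ModelDataUrl": model_data_url}],
            "SupportedContentTypes": ["text/csv"],
            "SupportedResponseMIMETypes": ["text/csv"],
        },
    }

params = build_model_package_params(
    "log-anomaly-models",
    "s3://my-model-bucket/best-model/model.tar.gz",
    "111122223333.dkr.ecr.us-east-1.amazonaws.com/anomaly-inference:latest",
)
# boto3.client("sagemaker").create_model_package(**params)  # requires AWS credentials
print(params["ModelApprovalStatus"])
```

Registering with `PendingManualApproval` means consumers only see the model after a reviewer promotes it, which keeps the automated tuning loop from shipping an unvetted model.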
Building automations to accelerate remediation of AWS Security Hub control findings using Amazon Bedrock and AWS Systems Manager
In this post, we harness the power of generative artificial intelligence (AI) and Amazon Bedrock to help organizations simplify and effectively manage remediation of AWS Security Hub control findings.
Connect to Amazon services using AWS PrivateLink in Amazon SageMaker
In this post, we present a solution for configuring SageMaker notebook instances to connect to Amazon Bedrock and other AWS services with the use of AWS PrivateLink and Amazon Elastic Compute Cloud (Amazon EC2) security groups.
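As a sketch of the networking piece of this setup, the snippet below builds the request parameters for an interface VPC endpoint that lets a SageMaker notebook reach the Amazon Bedrock runtime over AWS PrivateLink rather than the public internet. The VPC, subnet, and security group IDs are placeholders; with real values you would pass the dict to `boto3.client("ec2").create_vpc_endpoint(**params)`.

```python
# Sketch: interface VPC endpoint parameters for private connectivity to
# Amazon Bedrock via AWS PrivateLink. All resource IDs are placeholders.

def build_endpoint_params(region: str, vpc_id: str, subnet_ids: list, sg_ids: list) -> dict:
    return {
        "VpcEndpointType": "Interface",
        "VpcId": vpc_id,
        # PrivateLink service name for the Bedrock runtime API in this Region
        "ServiceName": f"com.amazonaws.{region}.bedrock-runtime",
        "SubnetIds": subnet_ids,
        # EC2 security groups control which sources may use the endpoint
        "SecurityGroupIds": sg_ids,
        # Resolve the public Bedrock DNS name to the endpoint's private IPs,
        # so SDK calls need no code changes to stay on the private path
        "PrivateDnsEnabled": True,
    }

params = build_endpoint_params(
    "us-east-1", "vpc-0abc1234", ["subnet-0abc1234"], ["sg-0def5678"]
)
# boto3.client("ec2").create_vpc_endpoint(**params)  # requires AWS credentials
print(params["ServiceName"])
```

With private DNS enabled, existing `bedrock-runtime` SDK calls from the notebook transparently route through the endpoint, and the attached security groups decide who can use it.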
A secure approach to generative AI with AWS
Generative artificial intelligence (AI) is transforming the customer experience in industries across the globe. Customers are building generative AI applications using large language models (LLMs) and other foundation models (FMs), which enhance customer experiences, transform operations, improve employee productivity, and create new revenue channels. The biggest concern we hear from customers as they explore the advantages of generative AI is how to protect their highly sensitive data and investments. At AWS, our top priority is safeguarding the security and confidentiality of our customers’ workloads. We think about security across the three layers of our generative AI stack …
Governing the ML lifecycle at scale, Part 1: A framework for architecting ML workloads using Amazon SageMaker
Customers of every size and industry are innovating on AWS by infusing machine learning (ML) into their products and services. Recent developments in generative AI models have further accelerated the need for ML adoption across industries. However, implementing security, data privacy, and governance controls remains a key challenge for customers when implementing ML […]
Enable fully homomorphic encryption with Amazon SageMaker endpoints for secure, real-time inferencing
This is a joint post co-written by Leidos and AWS. Leidos is a FORTUNE 500 science and technology solutions leader working to address some of the world’s toughest challenges in the defense, intelligence, homeland security, civil, and healthcare markets. Leidos has partnered with AWS to develop an approach to privacy-preserving, confidential machine learning (ML) modeling where […]