AWS Machine Learning Blog
Tag: AI/ML
Build an ecommerce product recommendation chatbot with Amazon Bedrock Agents
In this post, we show you how to build an ecommerce product recommendation chatbot using Amazon Bedrock Agents and foundation models (FMs) available in Amazon Bedrock.
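As a minimal sketch of the runtime piece, the snippet below shows how an application could send a shopper's request to an existing Amazon Bedrock agent with the boto3 bedrock-agent-runtime client and read back the streamed reply. The agent ID, alias ID, and region are placeholders; the post itself walks through creating the agent, its action groups, and the product knowledge base.

```python
import uuid

import boto3

# Runtime client for invoking an already-created Amazon Bedrock agent.
agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agent_runtime.invoke_agent(
    agentId="AGENT_ID",             # placeholder: your agent ID
    agentAliasId="AGENT_ALIAS_ID",  # placeholder: your agent alias ID
    sessionId=str(uuid.uuid4()),    # one session per shopper keeps multi-turn context
    inputText="I'm looking for trail running shoes under $100.",
)

# The agent streams its answer back as chunked events.
answer = ""
for event in response["completion"]:
    if "chunk" in event:
        answer += event["chunk"]["bytes"].decode("utf-8")
print(answer)
```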
How Thomson Reuters Labs achieved AI/ML innovation at pace with AWS MLOps services
In this post, we show you how Thomson Reuters Labs (TR Labs) developed an efficient, flexible, and powerful MLOps process by adopting a standardized MLOps framework built on Amazon SageMaker, SageMaker Experiments, SageMaker Model Registry, and SageMaker Pipelines. The goal is to accelerate how quickly teams can experiment and innovate using AI and machine learning (ML), whether through natural language processing (NLP), generative AI, or other techniques. We discuss how this has helped decrease the time to market for fresh ideas and build a cost-efficient machine learning lifecycle.
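To make the framework concrete, here is a minimal sketch (not TR Labs' actual code) of defining and launching a pipeline with the SageMaker Python SDK. The role ARN, S3 paths, and pipeline name are placeholders, and a production pipeline would add processing, evaluation, and model-registration steps tracked in SageMaker Experiments and the Model Registry.

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

# Any training image works here; the built-in XGBoost container is just an example.
estimator = Estimator(
    image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1"),
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/models/",  # placeholder bucket
)

train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput("s3://my-bucket/data/train/", content_type="text/csv")},
)

pipeline = Pipeline(name="example-mlops-pipeline", steps=[train_step])
pipeline.upsert(role_arn=role)  # create or update the pipeline definition
pipeline.start()                # launch a tracked, repeatable run
```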
Implementing tenant isolation using Agents for Amazon Bedrock in a multi-tenant environment
In this blog post, we show you how to implement tenant isolation using Agents for Amazon Bedrock in a multi-tenant environment. We demonstrate this with a sample multi-tenant e-commerce application that provides a service for various tenants to create online stores. The application uses Amazon Bedrock agents to build an AI assistant, or chatbot, capable of providing tenant-specific information, such as return policies, and user-specific information, such as order counts and status updates.
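One way to scope a shared agent to a single tenant (a sketch under assumptions, not necessarily the post's exact implementation) is to pass the tenant and user identifiers as session attributes, which the agent's action group Lambda can then use to filter every data lookup; the agent and alias IDs below are placeholders.

```python
import uuid

import boto3

agent_runtime = boto3.client("bedrock-agent-runtime")

def ask_assistant(tenant_id: str, user_id: str, question: str) -> str:
    """Invoke the shared agent while scoping the request to one tenant."""
    response = agent_runtime.invoke_agent(
        agentId="AGENT_ID",             # placeholder
        agentAliasId="AGENT_ALIAS_ID",  # placeholder
        sessionId=f"{tenant_id}-{uuid.uuid4()}",
        inputText=question,
        sessionState={
            # Session attributes are forwarded to the agent's action group Lambda,
            # which can use them to restrict queries to this tenant and user.
            "sessionAttributes": {"tenantId": tenant_id, "userId": user_id}
        },
    )
    return "".join(
        event["chunk"]["bytes"].decode("utf-8")
        for event in response["completion"]
        if "chunk" in event
    )

print(ask_assistant("store-42", "user-7", "What is your return policy?"))
```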
Analyze customer reviews using Amazon Bedrock
This post explores an innovative application of large language models (LLMs) to automate customer review analysis. LLMs are a type of foundation model (FM) pre-trained on vast amounts of text data. This post discusses how LLMs can be accessed through Amazon Bedrock to build a generative AI solution that automatically summarizes key information, recognizes customer sentiment, and generates actionable insights from customer reviews. This method shows significant promise in saving human analysts time while producing high-quality results. We examine the approach in detail, provide examples, highlight key benefits and limitations, and discuss future opportunities for more advanced product review summarization through generative AI.
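As a minimal illustration of the idea, the sketch below sends a single review to a text model in Amazon Bedrock through the Converse API and asks for a summary, a sentiment label, and actionable issues. The model ID is only an example; the post's solution processes many reviews and structures the output further.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

review = (
    "The blender is powerful and easy to clean, but it started leaking "
    "after two weeks and support took days to respond."
)

prompt = (
    "Summarize the following customer review in one sentence, state the overall "
    "sentiment (positive, negative, or mixed), and list any actionable issues.\n\n"
    f"Review: {review}"
)

# Example model ID; any Amazon Bedrock text model that supports Converse works.
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 300, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```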
Harness the power of AI and ML using Splunk and Amazon SageMaker Canvas
For organizations looking beyond out-of-the-box Splunk AI/ML features, this post explores how Amazon SageMaker Canvas, a no-code ML development service, can be used with data collected in Splunk to drive actionable insights. We also demonstrate how to use the generative AI capabilities of SageMaker Canvas to speed up data exploration and help you build better ML models.
How Twilio generated SQL using Looker Modeling Language data with Amazon Bedrock
As one of the largest AWS customers, Twilio uses data, artificial intelligence (AI), and machine learning (ML) services to run its daily workloads. This post highlights how Twilio enabled natural language-driven exploration of business intelligence (BI) data with Retrieval Augmented Generation (RAG) and Amazon Bedrock.
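A simplified sketch of the pattern (not Twilio's implementation) is to retrieve the relevant LookML model definition and pass it as grounding context when asking a Bedrock model to generate SQL. Here the LookML snippet is hardcoded for illustration; a RAG setup would fetch it from an index of the LookML project, and the model ID is an example.

```python
import boto3

bedrock = boto3.client("bedrock-runtime")

# In a RAG setup this snippet would be retrieved from an index of LookML files;
# it is hardcoded here purely for illustration.
lookml_context = """
view: orders {
  sql_table_name: analytics.orders ;;
  dimension: order_id { type: number }
  dimension: customer_region { type: string }
  measure: total_revenue { type: sum, sql: ${TABLE}.amount ;; }
}
"""

question = "What was total revenue by customer region last month?"

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model ID
    messages=[{
        "role": "user",
        "content": [{
            "text": (
                "Using only the tables and columns described in this LookML model, "
                "write an ANSI SQL query to answer the question.\n\n"
                f"LookML:\n{lookml_context}\n\nQuestion: {question}"
            )
        }],
    }],
    inferenceConfig={"maxTokens": 400, "temperature": 0},
)

print(response["output"]["message"]["content"][0]["text"])
```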
Faster LLMs with speculative decoding and AWS Inferentia2
In recent years, we have seen a significant increase in the size of large language models (LLMs) used to solve natural language processing (NLP) tasks such as question answering and text summarization. Larger models with more parameters, on the order of hundreds of billions at the time of writing, tend to produce better […]
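For readers unfamiliar with the technique named in the title, the toy sketch below shows the core idea of greedy speculative decoding using small open models: a cheap draft model proposes a few tokens, and the larger target model verifies them in a single forward pass, accepting the longest matching prefix. This is a conceptual illustration only; the post itself covers running the approach efficiently on AWS Inferentia2.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Draft and target models share the GPT-2 vocabulary, so token IDs are comparable.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
draft = AutoModelForCausalLM.from_pretrained("distilgpt2")
target = AutoModelForCausalLM.from_pretrained("gpt2-medium")

@torch.no_grad()
def speculative_step(ids: torch.Tensor, k: int = 4) -> torch.Tensor:
    # 1) The draft model proposes k tokens greedily.
    proposal = draft.generate(ids, max_new_tokens=k, do_sample=False,
                              pad_token_id=tokenizer.eos_token_id)
    # 2) The target model scores the whole proposed sequence in one pass.
    logits = target(proposal).logits
    n = ids.shape[1]
    for i in range(n, proposal.shape[1]):
        target_choice = logits[0, i - 1].argmax()
        if proposal[0, i] != target_choice:
            # Mismatch: keep the verified prefix plus the target's own token.
            return torch.cat([proposal[:, :i], target_choice.view(1, 1)], dim=1)
    # All k draft tokens accepted; append one bonus token from the target.
    bonus = logits[0, -1].argmax().view(1, 1)
    return torch.cat([proposal, bonus], dim=1)

ids = tokenizer("Speculative decoding speeds up generation because",
                return_tensors="pt").input_ids
for _ in range(5):
    ids = speculative_step(ids)
print(tokenizer.decode(ids[0]))
```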
Use the ApplyGuardrail API with long-context inputs and streaming outputs in Amazon Bedrock
As generative artificial intelligence (AI) applications become more prevalent, maintaining responsible AI principles becomes essential. Without proper safeguards, large language models (LLMs) can potentially generate harmful, biased, or inappropriate content, posing risks to individuals and organizations. Applying guardrails helps mitigate these risks by enforcing policies and guidelines that align with ethical principles and legal requirements. Amazon […]
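For reference, a minimal sketch of calling the API named in the title looks like the following; the guardrail ID and version are placeholders, and for long-context or streaming use cases the post describes splitting text into chunks and checking each chunk before passing it along.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Guardrail ID and version are placeholders; create a guardrail in the
# Amazon Bedrock console first and substitute its identifier here.
response = bedrock_runtime.apply_guardrail(
    guardrailIdentifier="GUARDRAIL_ID",
    guardrailVersion="1",
    source="OUTPUT",  # evaluate model output; use "INPUT" for user prompts
    content=[{"text": {"text": "Chunk of streamed model output to check..."}}],
)

if response["action"] == "GUARDRAIL_INTERVENED":
    # Return the guardrail's masked or blocked message instead of the raw text.
    print(response["outputs"][0]["text"])
else:
    print("Content passed the guardrail policy checks.")
```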
Monks boosts processing speed by four times for real-time diffusion AI image generation using Amazon SageMaker and AWS Inferentia2
This post is co-written with Benjamin Moody from Monks. Monks is the global, purely digital, unitary operating brand of S4Capital plc. With a legacy of innovation and specialized expertise, Monks combines an extraordinary range of global marketing and technology services to accelerate business possibilities and redefine how brands and businesses interact with the world. Its […]
Evaluate conversational AI agents with Amazon Bedrock
As conversational artificial intelligence (AI) agents gain traction across industries, ensuring reliability and consistency is crucial for delivering seamless and trustworthy user experiences. However, the dynamic, conversational nature of these interactions makes traditional testing and evaluation methods challenging. Conversational AI agents also encompass multiple layers, from Retrieval Augmented Generation (RAG) to function-calling mechanisms that […]