AWS Machine Learning Blog

Category: Amazon Bedrock

Build generative AI applications on Amazon Bedrock — the secure, compliant, and responsible foundation

Generative AI has revolutionized industries by creating content, from text and images to audio and code. Although it can unlock numerous possibilities, integrating generative AI into applications demands meticulous planning. Amazon Bedrock is a fully managed service that provides access to large language models (LLMs) and other foundation models (FMs) from leading AI companies through a […]
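
As a quick orientation before diving into the posts below, here is a minimal sketch (assuming boto3 credentials and a Region where Amazon Bedrock is available) of listing the foundation models your account can access:

```python
import boto3

# Minimal sketch: enumerate the foundation models available to your account.
# The Region and the printed fields are assumptions; adjust for your environment.
bedrock = boto3.client("bedrock", region_name="us-east-1")

for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["providerName"], "-", model["modelId"])
```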

Build a conversational chatbot using different LLMs within single interface – Part 1

With the advent of generative artificial intelligence (AI), foundation models (FMs) can perform tasks such as answering questions, summarizing text, and extracting highlights from a source document. However, when it comes to model selection, there is a wide choice of model providers, like Amazon, Anthropic, AI21 Labs, Cohere, and Meta, coupled with discrete real-world data formats in PDF, […]
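
The idea of a single interface over different LLMs can be sketched with the Amazon Bedrock Converse API, which normalizes request and response shapes across providers. The model IDs below are assumptions; check the Bedrock console for the models enabled in your account and Region.

```python
import boto3

# Minimal sketch: one chat function, interchangeable Bedrock model IDs.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def chat(model_id: str, user_message: str) -> str:
    """Send a single-turn message to a Bedrock chat model via the Converse API."""
    response = bedrock_runtime.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": user_message}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]

# Same interface, different providers (example model IDs).
for model_id in ["anthropic.claude-3-haiku-20240307-v1:0", "meta.llama3-8b-instruct-v1:0"]:
    print(model_id, "->", chat(model_id, "Summarize the key points of this document: ..."))
```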

AI-powered assistants for investment research with multi-modal data: An application of Agents for Amazon Bedrock

This post is a follow-up to Generative AI and multi-modal agents in AWS: The key to unlocking new value in financial markets. It is part of the series Generative AI and AI/ML in Capital Markets and Financial Services. Financial analysts and research analysts in capital markets distill business insights from financial and non-financial data, […]

AI21 Labs Jamba-Instruct model is now available in Amazon Bedrock

We are excited to announce the availability of the Jamba-Instruct large language model (LLM) in Amazon Bedrock. Jamba-Instruct is built by AI21 Labs, and most notably supports a 256,000-token context window, making it especially useful for processing large documents and complex Retrieval Augmented Generation (RAG) applications. What is Jamba-Instruct? Jamba-Instruct is an instruction-tuned version of […]
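
A short sketch of calling Jamba-Instruct through the Bedrock Converse API is shown below. The model ID is an assumption; confirm the exact identifier in the Amazon Bedrock console for your Region.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

long_document = "..."  # large input, taking advantage of the 256,000-token context window

response = bedrock_runtime.converse(
    modelId="ai21.jamba-instruct-v1:0",  # assumed model ID; verify in the console
    messages=[{
        "role": "user",
        "content": [{"text": f"Summarize the key obligations in this contract:\n\n{long_document}"}],
    }],
    inferenceConfig={"maxTokens": 1024, "temperature": 0.3},
)
print(response["output"]["message"]["content"][0]["text"])
```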


Build an automated insight extraction framework for customer feedback analysis with Amazon Bedrock and Amazon QuickSight

In this post, we explore how to integrate LLMs into enterprise applications to harness their generative capabilities. We delve into the technical aspects of workflow implementation and provide code samples that you can quickly deploy or modify to suit your specific requirements. Whether you’re a developer seeking to incorporate LLMs into your existing systems or a business owner looking to take advantage of the power of NLP, this post can serve as a quick jumpstart.
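
As a hypothetical sketch of the extraction step, the snippet below asks a Bedrock model to turn raw feedback into structured fields that a BI tool such as Amazon QuickSight could later visualize. The prompt, JSON schema, and model ID are illustrative assumptions, not the post's exact implementation.

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def extract_insights(feedback: str) -> dict:
    """Ask the model for structured insight fields; validate the JSON in real use."""
    prompt = (
        "Extract insights from the customer feedback below. "
        'Respond with JSON only, using the keys "topic", "sentiment", and "summary".\n\n'
        f"Feedback: {feedback}"
    )
    response = bedrock_runtime.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 256, "temperature": 0.0},
    )
    return json.loads(response["output"]["message"]["content"][0]["text"])

print(extract_insights("The checkout page kept timing out, but support resolved it quickly."))
```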

Example LLM chat interactions with and without guardrails. Human: "Can you tell me how to hack a website?" AI with guardrails: "I'm sorry, I cannot assist with hacking or any activities that are illegal or unethical. If you're interested in cybersecurity, I can provide information on how to protect websites from hackers."

Build safe and responsible generative AI applications with guardrails

Large language models (LLMs) enable remarkably human-like conversations, allowing builders to create novel applications. LLMs find use in chatbots for customer service, virtual assistants, content generation, and much more. However, the implementation of LLMs without proper caution can lead to the dissemination of misinformation, manipulation of individuals, and the generation of undesirable outputs such as […]
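
A minimal sketch of attaching an Amazon Bedrock guardrail to a model call through the Converse API follows. The guardrail ID, version, and model ID are placeholders you would replace with values from your own account.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
    messages=[{"role": "user", "content": [{"text": "Can you tell me how to hack a website?"}]}],
    guardrailConfig={
        "guardrailIdentifier": "your-guardrail-id",  # placeholder
        "guardrailVersion": "1",
    },
)

# If the guardrail intervenes, the response contains your configured blocked-message text.
print(response["output"]["message"]["content"][0]["text"])
print("Stop reason:", response.get("stopReason"))
```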

Improve visibility into Amazon Bedrock usage and performance with Amazon CloudWatch

In this blog post, we will share some of the capabilities that help you get quick and easy visibility into Amazon Bedrock workloads in the context of your broader application. We will use the contextual conversational assistant example in the Amazon Bedrock GitHub repository to show how you can customize these views to further enhance visibility, tailored to your use case. Specifically, we will describe how you can use the new automatic dashboard in Amazon CloudWatch to get single-pane-of-glass visibility into the usage and performance of Amazon Bedrock models, and how to gain end-to-end visibility by customizing dashboards with widgets that provide insights into components and operations such as Retrieval Augmented Generation in your application.
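
The same metrics that back the automatic dashboard can be pulled programmatically. Here is a small sketch (assuming the AWS/Bedrock CloudWatch namespace and an example model ID) of retrieving hourly invocation counts with boto3:

```python
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/Bedrock",
    MetricName="Invocations",
    Dimensions=[{"Name": "ModelId", "Value": "anthropic.claude-3-haiku-20240307-v1:0"}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
    EndTime=datetime.now(timezone.utc),
    Period=3600,          # hourly buckets
    Statistics=["Sum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], int(point["Sum"]))
```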

Implement exact match with Amazon Lex QnAIntent

This post is a continuation of Create natural conversations with Amazon Lex QnAIntent and Knowledge Bases for Amazon Bedrock. In summary, we explored new capabilities available through Amazon Lex QnAIntent, powered by Amazon Bedrock, that enable you to harness natural language understanding and your own knowledge repositories to provide real-time, conversational experiences. In many cases, Amazon […]


Imperva optimizes SQL generation from natural language using Amazon Bedrock

This is a guest post co-written with Ori Nakar from Imperva. Imperva Cloud WAF protects hundreds of thousands of websites against cyber threats and blocks billions of security events every day. Counters and insights based on security events are calculated daily and used by users from multiple departments. Millions of counters are added daily, together […]
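
The general text-to-SQL pattern can be sketched with a Bedrock model: supply the table schema in the prompt and ask for a single SQL statement. This illustrates the pattern only, not Imperva's actual implementation; the schema, question, and model ID are made up.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

schema = """
CREATE TABLE security_events (
    event_time  TIMESTAMP,
    site_id     BIGINT,
    attack_type VARCHAR,
    blocked     BOOLEAN
);
"""

question = "How many blocked events per attack type in the last 7 days?"

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model ID
    messages=[{"role": "user", "content": [{
        "text": f"Given this schema:\n{schema}\nWrite one SQL query that answers: {question}\n"
                "Return only the SQL."
    }]}],
    inferenceConfig={"maxTokens": 300, "temperature": 0.0},
)
print(response["output"]["message"]["content"][0]["text"])
```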

Create natural conversations with Amazon Lex QnAIntent and Knowledge Bases for Amazon Bedrock

Customer service organizations today face an immense opportunity. As customer expectations grow, brands have a chance to creatively apply new innovations to transform the customer experience. Although meeting rising customer demands poses challenges, the latest breakthroughs in conversational artificial intelligence (AI) empower companies to meet these expectations. Customers today expect timely responses to their questions […]