AWS Database Blog
Category: Artificial Intelligence
Create a 360-degree master data management patient view solution using Amazon Neptune and generative AI
In this post, we explore how you can achieve a 360-degree patient view using Amazon Neptune and generative AI, and use it to strengthen your organization’s research and accelerate breakthroughs. By consolidating information from multiple sources such as electronic health records (EHRs), lab reports, prescriptions, and medical histories into a single location, healthcare providers can gain a better understanding of a patient’s health.
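As a sketch of what the consolidated graph enables, the following Python snippet (using the open source gremlinpython client) traverses from a patient vertex to every linked record. The endpoint, labels, and property names are illustrative assumptions, not the post’s actual schema.

```python
# Minimal sketch, assuming a Neptune property graph in which Patient vertices
# link to EHR entries, lab reports, and prescriptions through "has_record"
# edges. The endpoint, labels, and property names are hypothetical.
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal

conn = DriverRemoteConnection("wss://your-neptune-endpoint:8182/gremlin", "g")
g = traversal().withRemote(conn)

# Gather every record connected to one patient in a single traversal.
records = (g.V().has("Patient", "patient_id", "P-12345")
            .out("has_record")
            .valueMap(True)
            .toList())
conn.close()
```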
How Iterate.ai uses Amazon MemoryDB to accelerate and cost-optimize their workforce management conversational AI agent
Iterate.ai is an enterprise AI platform company delivering innovative AI solutions to industries such as retail, finance, healthcare, and quick-service restaurants. Among its standout offerings is Frontline, an AI-powered workforce management platform designed to support and empower frontline workers. Available on both the Apple App Store and Google Play, Frontline uses advanced AI tools to streamline operational efficiency and enhance communication among dispersed workforces. In this post, we give an overview of durable semantic caching in Amazon MemoryDB and share how Iterate used this functionality to accelerate and cost-optimize Frontline.
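To make the semantic caching idea concrete, here is a minimal lookup sketch. It assumes a MemoryDB cluster with vector search enabled and a hypothetical index cache_idx over hash entries whose vec field holds float32 embeddings; the field names and distance threshold are illustrative, not Iterate’s actual implementation.

```python
# Minimal sketch of a semantic cache lookup, assuming a MemoryDB cluster with
# vector search enabled and an index "cache_idx" over hash entries whose "vec"
# field stores float32 embeddings. Names and the threshold are illustrative.
import numpy as np
import redis

r = redis.Redis(host="your-memorydb-endpoint", port=6379, ssl=True)

def cached_answer(query_embedding: np.ndarray, max_distance: float = 0.2):
    """Return the closest cached response, or None if nothing is similar enough."""
    reply = r.execute_command(
        "FT.SEARCH", "cache_idx",
        "*=>[KNN 1 @vec $q AS dist]",
        "PARAMS", "2", "q", query_embedding.astype(np.float32).tobytes(),
        "RETURN", "2", "dist", "answer",
        "DIALECT", "2",
    )
    if reply[0] == 0:  # cache miss: fall through to the LLM
        return None
    fields = dict(zip(reply[2][::2], reply[2][1::2]))
    return fields[b"answer"] if float(fields[b"dist"]) <= max_distance else None
```

On a hit, the cached response is returned in milliseconds instead of invoking the model again, which is where the latency and cost savings come from.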
Accelerate your generative AI application development with Amazon Bedrock Knowledge Bases Quick Create and Amazon Aurora Serverless
In this post, we look at two capabilities in Amazon Bedrock Knowledge Bases that make it easier to build RAG workflows with Amazon Aurora Serverless v2 as the vector store. The first helps you quickly create an Aurora Serverless v2 knowledge base for use with Amazon Bedrock, and the second lets you automate the deployment of your RAG workflow across environments.
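Once the knowledge base is provisioned, you can query it with a single Bedrock API call. The sketch below uses boto3’s retrieve_and_generate; the knowledge base ID and model ARN are placeholders.

```python
# Minimal sketch of querying a provisioned knowledge base in one call.
# The knowledge base ID and model ARN are placeholders.
import boto3

client = boto3.client("bedrock-agent-runtime")

response = client.retrieve_and_generate(
    input={"text": "Summarize our product return policy."},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB123EXAMPLE",
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)
print(response["output"]["text"])
```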
Amazon DynamoDB data models for generative AI chatbots
Amazon DynamoDB is ideal for storing chat history and metadata due to its scalability and low latency, allowing quick access to past interactions. User-specific metadata, such as preferences and session information, can be stored to personalize responses and manage active sessions, enhancing the overall chatbot experience. In this post, we explore how to design an optimal schema for chatbots, whether you’re building a small proof of concept application or deploying a large-scale production system.
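As a starting point, a common single-table design partitions messages by session and sorts them by timestamp, so one Query returns a conversation in order. The table and key names below are illustrative assumptions, not the post’s actual schema.

```python
# Minimal sketch of a single-table chat design; table and key names are
# illustrative.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("ChatHistory")

# One item per message: partition on the session, sort by an ISO-8601
# timestamp so a single Query returns the conversation in order.
table.put_item(Item={
    "PK": "SESSION#abc123",
    "SK": "MSG#2024-01-01T12:00:00Z",
    "role": "user",
    "content": "Hello!",
})

history = table.query(
    KeyConditionExpression=Key("PK").eq("SESSION#abc123"),
    ScanIndexForward=True,  # oldest message first
)["Items"]
```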
Build a scalable, context-aware chatbot with Amazon DynamoDB, Amazon Bedrock, and LangChain
Amazon DynamoDB, Amazon Bedrock, and LangChain can provide a powerful combination for building robust, context-aware chatbots. In this post, we explore how to use LangChain with DynamoDB to manage conversation history and integrate it with Amazon Bedrock to deliver intelligent, contextually aware responses. We break down the concepts behind the DynamoDB chat connector in LangChain, discuss the advantages of this approach, and guide you through the essential steps to implement it in your own chatbot.
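For reference, LangChain’s community package ships a DynamoDB-backed chat history class. The sketch below assumes an existing table named SessionTable whose partition key is SessionId (the class’s default); both names are illustrative.

```python
# Minimal sketch using LangChain's community chat-history class. Assumes an
# existing DynamoDB table "SessionTable" with partition key "SessionId"
# (the class's default); both names are illustrative.
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory

history = DynamoDBChatMessageHistory(
    table_name="SessionTable",
    session_id="user-42",
)
history.add_user_message("What's our return policy?")
history.add_ai_message("Returns are accepted within 30 days of purchase.")

# Each turn is persisted in DynamoDB, so the conversation survives restarts.
print(history.messages)
```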
Use a DAO to govern LLM training data, Part 4: MetaMask authentication
In Part 1 of this series, we introduced the concept of using a decentralized autonomous organization (DAO) to govern the lifecycle of an AI model, focusing on the ingestion of training data. In Part 2, we created and deployed a minimalistic smart contract on the Ethereum Sepolia testnet using Remix and MetaMask, establishing a mechanism to govern which training data can be uploaded to the knowledge base and by whom. In Part 3, we set up Amazon API Gateway and deployed AWS Lambda functions to copy data from InterPlanetary File System (IPFS) to Amazon Simple Storage Service (Amazon S3) and start a knowledge base ingestion job, creating a seamless data flow from IPFS to the knowledge base. In this post, we demonstrate how to configure MetaMask authentication, create a frontend interface, and test the solution.
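MetaMask authentication typically has the browser sign a challenge with personal_sign, and the backend recover the signer’s address from that signature. A minimal server-side verification sketch with the eth_account library follows; the message and address handling are illustrative, not the post’s exact flow.

```python
# Minimal sketch of server-side verification of a MetaMask personal_sign
# signature. The message, signature, and expected address come from the
# frontend; names here are illustrative.
from eth_account import Account
from eth_account.messages import encode_defunct

def verify_signature(message: str, signature: str, expected_address: str) -> bool:
    """Recover the signer of `message` and compare it to the expected wallet."""
    signable = encode_defunct(text=message)
    recovered = Account.recover_message(signable, signature=signature)
    return recovered.lower() == expected_address.lower()
```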
Use a DAO to govern LLM training data, Part 3: From IPFS to the knowledge base
In Part 1 of this series, we introduced the concept of using a decentralized autonomous organization (DAO) to govern the lifecycle of an AI model, focusing on the ingestion of training data. In Part 2, we created and deployed a minimalistic smart contract on the Ethereum Sepolia testnet using Remix and MetaMask, establishing a mechanism to govern which training data can be uploaded to the knowledge base and by whom. In this post, we set up Amazon API Gateway and deploy AWS Lambda functions to copy data from InterPlanetary File System (IPFS) to Amazon Simple Storage Service (Amazon S3) and start a knowledge base ingestion job.
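A condensed sketch of the Lambda flow described here: fetch the object from a public IPFS gateway, write it to S3, and start an ingestion job. The gateway URL, bucket name, and IDs are placeholders, not the post’s actual values.

```python
# Condensed sketch of the Lambda flow: fetch a CID from a public IPFS gateway,
# copy it to S3, and start a Knowledge Bases ingestion job. The gateway URL,
# bucket name, and IDs are placeholders.
import urllib.request
import boto3

s3 = boto3.client("s3")
bedrock_agent = boto3.client("bedrock-agent")

def handler(event, context):
    cid = event["cid"]
    # Read the file from IPFS through a public HTTP gateway.
    data = urllib.request.urlopen(f"https://ipfs.io/ipfs/{cid}").read()
    # Land it in the bucket that backs the knowledge base data source.
    s3.put_object(Bucket="my-knowledge-base-bucket", Key=f"docs/{cid}", Body=data)
    # Trigger ingestion so the new document becomes searchable.
    return bedrock_agent.start_ingestion_job(
        knowledgeBaseId="KB123EXAMPLE",
        dataSourceId="DS123EXAMPLE",
    )
```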
Use a DAO to govern LLM training data, Part 2: The smart contract
In Part 1 of this series, we introduced the concept of using a decentralized autonomous organization (DAO) to govern the lifecycle of an AI model, specifically focusing on the ingestion of training data. In this post, we focus on the writing and deployment of the Ethereum smart contract that contains the outcome of the DAO decisions.
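Once deployed, the contract’s decisions can be read from any client. As a hedged illustration (not the post’s actual contract), the web3.py snippet below calls a hypothetical view function on Sepolia.

```python
# Hedged illustration of reading DAO-governed state with web3.py. The RPC URL,
# contract address, ABI, and the isApprovedUploader view function are all
# hypothetical.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://sepolia-rpc.example.com"))

abi = [{
    "name": "isApprovedUploader",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "uploader", "type": "address"}],
    "outputs": [{"name": "", "type": "bool"}],
}]

contract = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000", abi=abi)

# Check whether a wallet is allowed to upload training data.
approved = contract.functions.isApprovedUploader(
    "0x0000000000000000000000000000000000000001").call()
```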
Use a DAO to govern LLM training data, Part 1: Retrieval Augmented Generation
Blockchain and generative AI are two technical fields that have received a lot of attention in recent years, and an emerging set of use cases can benefit from combining them. In this four-part series, we build a solution that governs the training data ingestion process of an AI model, using a smart contract and serverless components, and we guide you through the different steps to build it. In this post, we review the overall architecture of the solution and set up a large language model (LLM) knowledge base.
Embed textual data in Amazon RDS for SQL Server using Amazon Bedrock
In Part 1 of this series, we covered how Retrieval Augmented Generation (RAG) can be used to enhance responses in generative AI applications by combining domain-specific information with a foundation model (FM). However, we focused on the semantic search aspect of the solution, assuming that our vector store was already built and fully populated. In this post, we explore how to generate vector embeddings for Wikipedia data stored in a SQL Server database hosted on Amazon RDS. We also use Amazon Bedrock to invoke the appropriate FM APIs and an Amazon SageMaker Jupyter notebook to help us orchestrate the overall process.
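Here is a minimal sketch of the embedding step using a Titan text embedding model through the Bedrock runtime; the input text stands in for rows read from the RDS for SQL Server database.

```python
# Minimal sketch of the embedding step. The input stands in for text read out
# of the RDS for SQL Server database.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

response = bedrock.invoke_model(
    modelId="amazon.titan-embed-text-v2:0",
    body=json.dumps({"inputText": "Wikipedia article text goes here."}),
)
embedding = json.loads(response["body"].read())["embedding"]
print(len(embedding))  # vector dimension
```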