Amazon Bedrock Evaluations
With Amazon Bedrock evaluations, you can evaluate foundation models, including custom and imported models, to find models that fit your needs. You can also evaluate your retrieval or end-to-end RAG workflow in Amazon Bedrock Knowledge Bases.
Overview
Amazon Bedrock provides evaluation tools to help you accelerate the adoption of generative AI applications. Evaluate, compare, and select the foundation model for your use case with Model Evaluation. Prepare your RAG applications built on Amazon Bedrock Knowledge Bases for production by evaluating the retrieve or retrieve-and-generate functions.
Evaluation types
Evaluate your end-to-end RAG workflow in Amazon Bedrock Knowledge Bases
Use retrieve-and-generate evaluations to evaluate the end-to-end retrieval-augmented generation (RAG) capability of your application. Ensure the generated content is correct and complete, limits hallucinations, and adheres to responsible AI principles. Simply select a content-generating model and an LLM to use as a judge with your Amazon Bedrock Knowledge Bases, upload your custom prompt dataset, and select the metrics most important for your evaluation.
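A retrieve-and-generate evaluation job can also be created programmatically with the boto3 bedrock client's CreateEvaluationJob operation. The sketch below is illustrative only: the role ARN, S3 locations, knowledge base ID, model identifiers, and metric names are placeholders, and the nested field names are written from the general shape of the API, so verify them against the current Amazon Bedrock documentation before use.

```python
import boto3

bedrock = boto3.client("bedrock")

# Hypothetical identifiers -- replace with your own resources.
ROLE_ARN = "arn:aws:iam::111122223333:role/BedrockEvalRole"
KB_ID = "EXAMPLEKBID"
GENERATOR_MODEL = "anthropic.claude-3-haiku-20240307-v1:0"
JUDGE_MODEL = "anthropic.claude-3-5-sonnet-20240620-v1:0"

# Sketch of an end-to-end (retrieve-and-generate) RAG evaluation job.
response = bedrock.create_evaluation_job(
    jobName="kb-rag-eval-example",
    roleArn=ROLE_ARN,
    applicationType="RagEvaluation",
    evaluationConfig={
        "automated": {
            "datasetMetricConfigs": [
                {
                    "taskType": "QuestionAndAnswer",
                    "dataset": {
                        "name": "my-rag-prompts",
                        "datasetLocation": {"s3Uri": "s3://my-eval-bucket/rag-prompts.jsonl"},
                    },
                    # LLM-as-a-judge metrics for generated answers.
                    "metricNames": [
                        "Builtin.Correctness",
                        "Builtin.Completeness",
                        "Builtin.Faithfulness",
                    ],
                }
            ],
            # The judge model that scores each generated response.
            "evaluatorModelConfig": {
                "bedrockEvaluatorModels": [{"modelIdentifier": JUDGE_MODEL}]
            },
        }
    },
    inferenceConfig={
        "ragConfigs": [
            {
                "knowledgeBaseConfig": {
                    "retrieveAndGenerateConfig": {
                        "type": "KNOWLEDGE_BASE",
                        "knowledgeBaseConfiguration": {
                            "knowledgeBaseId": KB_ID,
                            "modelArn": GENERATOR_MODEL,
                        },
                    }
                }
            }
        ]
    },
    outputDataConfig={"s3Uri": "s3://my-eval-bucket/results/"},
)
print(response["jobArn"])
```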
Ensure complete and relevant retrieval from Amazon Bedrock Knowledge Bases
Use retrieve evaluations to evaluate the storage and retrieval settings of your Amazon Bedrock Knowledge Bases. Ensure the retrieved content is relevant and covers the entire user query. Simply select a Knowledge Base and an LLM to use as a judge, upload your custom prompt dataset, and select the metrics most important for your evaluation.
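For a retrieve-only evaluation, the same CreateEvaluationJob call applies, but the RAG configuration points at retrieval settings rather than a generator model, and the metrics focus on the retrieved context. The snippet below is a sketch of that variation; the field names, knowledge base ID, and metric names are assumptions to be checked against the current API reference.

```python
# Retrieve-only variant of the RAG config used in the job above.
retrieve_only_rag_config = {
    "knowledgeBaseConfig": {
        "retrieveConfig": {
            "knowledgeBaseId": "EXAMPLEKBID",  # hypothetical knowledge base ID
            "knowledgeBaseRetrievalConfiguration": {
                "vectorSearchConfiguration": {"numberOfResults": 5}
            },
        }
    }
}

# Retrieval-focused metrics: is the retrieved context relevant, and does it
# cover the full user query?
retrieve_only_metrics = ["Builtin.ContextRelevance", "Builtin.ContextCoverage"]
```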
Evaluate FMs to select the best one for your use case
Amazon Bedrock Model Evaluation allows you to use automatic and human evaluations to select FMs for a specific use case. Automatic (programmatic) model evaluation uses curated and custom datasets and provides predefined metrics, including accuracy, robustness, and toxicity. For subjective metrics, you can use Amazon Bedrock to set up a human evaluation workflow in a few quick steps. With human evaluations, you can bring your own datasets and define custom metrics, such as relevance, style, and alignment to brand voice. Human evaluation workflows can use your own employees as reviewers, or you can engage a team managed by AWS, in which case AWS hires skilled evaluators and manages the complete workflow on your behalf. You can also use an LLM-as-a-Judge to provide high-quality evaluations on your dataset with metrics such as correctness, completeness, and faithfulness (hallucination), as well as responsible AI metrics such as answer refusal and harmfulness.
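An automatic model evaluation job can likewise be started through the boto3 bedrock client. In this sketch, the role ARN, S3 locations, dataset name, and model identifier are placeholders, and the nested field names follow the general CreateEvaluationJob request shape; confirm them in the current documentation before running it. A custom dataset for automatic evaluation is a JSON Lines file in which each record carries a prompt and, typically, a reference response.

```python
import boto3

bedrock = boto3.client("bedrock")

# Sketch of an automatic (programmatic) model evaluation job using a custom
# prompt dataset in S3 and predefined metrics.
response = bedrock.create_evaluation_job(
    jobName="model-eval-example",
    roleArn="arn:aws:iam::111122223333:role/BedrockEvalRole",  # placeholder role
    evaluationConfig={
        "automated": {
            "datasetMetricConfigs": [
                {
                    "taskType": "QuestionAndAnswer",
                    "dataset": {
                        "name": "my-custom-prompts",
                        # JSONL records along the lines of:
                        # {"prompt": "...", "referenceResponse": "..."}
                        "datasetLocation": {"s3Uri": "s3://my-eval-bucket/prompts.jsonl"},
                    },
                    "metricNames": [
                        "Builtin.Accuracy",
                        "Builtin.Robustness",
                        "Builtin.Toxicity",
                    ],
                }
            ]
        }
    },
    inferenceConfig={
        "models": [
            {"bedrockModel": {"modelIdentifier": "anthropic.claude-3-haiku-20240307-v1:0"}}
        ]
    },
    outputDataConfig={"s3Uri": "s3://my-eval-bucket/model-eval-results/"},
)
print(response["jobArn"])
```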
Compare results across multiple evaluation jobs to make decisions faster
Use the compare feature in evaluations to see how changes to your prompts, the models being evaluated, or the Knowledge Bases in your RAG system affect results.
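Outside the console, you can pull the results of completed jobs from S3 and compare them yourself. The short sketch below lists recent evaluation jobs and prints where each job wrote its output; the pagination and fields shown are a minimal illustration and should be checked against the boto3 reference.

```python
import boto3

bedrock = boto3.client("bedrock")

# List recent evaluation jobs and fetch each job's details so the results
# written to S3 can be compared side by side.
jobs = bedrock.list_evaluation_jobs(maxResults=10)
for summary in jobs["jobSummaries"]:
    detail = bedrock.get_evaluation_job(jobIdentifier=summary["jobArn"])
    print(detail["jobName"], detail["status"], detail["outputDataConfig"]["s3Uri"])
```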