Overview
This solution provides a robust Retrieval-Augmented Generation (RAG) system on AWS that backs a web interface where users ask questions over internal company data from Slack, Confluence, and Jira. Amazon S3 stores data snapshots of Slack messages, Confluence articles, and Jira tickets. AWS Lambda functions periodically pull and process updates from each platform through its API, keeping the data current. The processed content is then indexed in LanceDB, a serverless vector store for data and AI workloads that provides a search layer optimized for fast query retrieval.
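The ingestion step typically splits pulled documents into overlapping passages before they are embedded and indexed. The sketch below is a minimal illustration of that chunking stage; the function name and window sizes are hypothetical and not part of the deployed solution:

```python
def chunk_text(text: str, size: int = 400, overlap: int = 50) -> list[str]:
    """Split a document into overlapping word-window chunks for indexing.

    `size` and `overlap` are illustrative defaults; production values
    would be tuned to the embedding model's context limits.
    """
    words = text.split()
    if not words:
        return []
    chunks = []
    step = size - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break
    return chunks

# Each chunk would then be embedded (e.g. via an Amazon Bedrock embedding
# model) and written to a LanceDB table alongside source metadata such as
# the originating platform, URL, and timestamp.
```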
When a user enters a question through the web interface (hosted on AWS Fargate), the query is vectorized by an Amazon Bedrock embedding model. LanceDB then runs both a vector search over the embeddings and a full-text search over the raw content, and this hybrid approach retrieves the most relevant snippets. The retrieved context is combined with the question and passed to the model, which generates a coherent response that is returned to the user.
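One common way to merge the vector-search and full-text rankings in a hybrid setup is reciprocal rank fusion (RRF). The snippet below is a minimal, self-contained illustration of that merging idea, not the production implementation (LanceDB ships its own hybrid search support); the function name and the constant `k` are assumptions:

```python
def rrf_merge(vector_hits: list[str], text_hits: list[str], k: int = 60) -> list[str]:
    """Merge two ranked result lists with reciprocal rank fusion.

    Each document scores 1 / (k + rank) per list it appears in, so items
    ranked highly by both searches rise to the top of the fused list.
    """
    scores: dict[str, float] = {}
    for hits in (vector_hits, text_hits):
        for rank, doc_id in enumerate(hits, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

A document that both searches agree on outranks one found by only a single search, which is the behavior the hybrid approach relies on.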
Together, these components surface insights by combining company data with advanced machine learning, giving users immediate answers drawn from aggregated knowledge across multiple communication and documentation platforms.
Highlights
- GenAI-Powered: We tap into the power of Amazon Bedrock to maximize the quality, usability, and value of the data in your documents.
- Scalable: Our solution leverages modern cloud architecture best practices to create a document processing platform that is nearly infinite in its scalability.
- Centralization: Provides a single source for users to find answers to their questions instead of searching multiple information sources.
Support
Vendor support
For any questions about this offering, or about what Protagona can do for you, please reach out to us and we'll take care of you: