AWS Database Blog

Using knowledge graphs to build GraphRAG applications with Amazon Bedrock and Amazon Neptune

Retrieval Augmented Generation (RAG) is an approach that combines the power of large language models (LLMs) with external knowledge sources, enabling more accurate and informative content generation. The technique pairs the language model’s ability to understand context and generate coherent responses with factual information retrieved from external data sources.

The choice of data source plays a crucial role in the effectiveness of RAG. While structured databases and unstructured text corpora can serve as valuable resources, knowledge graphs stand out as particularly beneficial. Knowledge graphs offer a structured representation of real-world entities and their relationships, supporting efficient retrieval and integration of relevant information.

Using knowledge graphs as sources for RAG (GraphRAG) yields numerous advantages. These knowledge bases encapsulate a vast wealth of curated and interconnected information, enabling the generation of responses that are grounded in factual knowledge. Additionally, the structured nature of knowledge graphs facilitates precise querying and retrieval, so that the most pertinent information is incorporated into the generation process. This fusion of language understanding and factual knowledge empowers RAG to produce outputs that are both informative and coherent, making it a powerful tool for applications ranging from question answering to content generation.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.

In this post, we show you how to build GraphRAG applications using Amazon Bedrock and Amazon Neptune with the LlamaIndex framework.

Solution overview

As a sample solution, we implement GraphRAG over a Customer 360 knowledge graph, which provides customer context for generative artificial intelligence (AI)-powered applications.

The GraphRAG application is orchestrated by the LlamaIndex framework, which manages the interaction with Amazon Bedrock and Neptune. Amazon Bedrock provides the communication interface with the LLMs, and Neptune Database stores the knowledge graph used in GraphRAG.

This solution has the following steps:

  • Set up Customer 360 knowledge graph in Neptune
  • Configure Amazon Bedrock with LlamaIndex
  • Integrate Amazon Neptune with LlamaIndex
  • Configure the retriever for Neptune
  • Interact with the knowledge graph

For instructions on how to install and configure LlamaIndex, refer to the LlamaIndex Installation and Setup page.

You can run this solution in Neptune graph notebooks. The notebook integrates with Neptune to retrieve data and with Amazon Bedrock to reason over the input prompt and the retrieved context to generate the output.
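At the time of writing, the Amazon Bedrock and Amazon Neptune integrations for LlamaIndex ship as separate packages. A minimal install cell for the notebook looks like the following (verify the package names against the LlamaIndex installation page):

%pip install llama-index llama-index-llms-bedrock llama-index-graph-stores-neptune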

Figure: Scope of the solution

Set up Customer 360 knowledge graph in Neptune

For instructions on how to set up Neptune Database, refer to the Setting up Amazon Neptune page.

We set up a Customer 360 knowledge graph in Neptune with synthetic data by following the Neptune sample identity graph notebook. The knowledge graph structure connects Phone, Email, Address, and Session nodes to the User node; the City node is connected to the Address node; and Device, IP, and Page nodes are connected to the Session node.
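For illustration, the following openCypher cell sketches this structure by creating one user with the connected node types described above (a minimal sketch, not the identity graph notebook’s actual load script; the edge labels, edge directions, and property keys are assumptions). In Neptune graph notebooks, you can run it with the %%oc cell magic:

%%oc
// Illustrative data only: node labels match the sample graph,
// edge labels, directions, and property keys are assumed
CREATE (u:User {name: 'Jane Doe'})
CREATE (u)-[:HAS_PHONE]->(:Phone {number: '555 0100'})
CREATE (u)-[:HAS_EMAIL]->(:Email {address: 'jane@example.com'})
CREATE (u)-[:HAS_ADDRESS]->(a:Address {street: '1 Main St'})
CREATE (a)-[:IN_CITY]->(:City {name: 'Miami'})
CREATE (u)-[:HAS_SESSION]->(s:Session {id: 'session-1'})
CREATE (s)-[:BY_DEVICE]->(:Device {userAgent: 'Mozilla/5.0 (x11; Linux Amd64) ...'})
CREATE (s)-[:FROM_IP]->(:IP {address: '203.0.113.7'})
CREATE (s)-[:VISITED]->(:Page {url: '/products/laptops'})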

Configure Amazon Bedrock with LlamaIndex

Configure Amazon Bedrock to integrate with LlamaIndex components. This allows you to use the LLM in the retrieve and reason steps of the GraphRAG workflow.

In this case, we use the Anthropic Claude 3 Sonnet LLM through Amazon Bedrock.

from llama_index.llms.bedrock import Bedrock
from llama_index.core import Settings

# Use Anthropic Claude 3 Sonnet as the default LLM for all LlamaIndex components
llm = Bedrock(model="anthropic.claude-3-sonnet-20240229-v1:0")
Settings.llm = llm

You can change the LLM by replacing the model parameter of the Bedrock class. For example, to use Anthropic Claude 3 Haiku, the value of the model parameter would be anthropic.claude-3-haiku-{model_version}.

For more information about available models, refer to the Amazon Bedrock page.

Integrate Amazon Neptune with LlamaIndex

In this section, we integrate the Neptune database with LlamaIndex. This allows LlamaIndex to connect to your Neptune database instance to retrieve information.

You need to create an instance of the NeptuneDatabaseGraphStore class to connect to the Neptune database. In the host parameter, supply the Neptune database endpoint; for GraphRAG purposes, use the read-only endpoint. For the port parameter, supply the port of the database endpoint (8182 by default). The node_label parameter is the label of the entity nodes in your knowledge graph stored in Neptune that are queried during retrieval.

In this case, we set the User node label from the Neptune knowledge graph as the entity node to be queried for information retrieval.

from llama_index.core import StorageContext
from llama_index.graph_stores.neptune import NeptuneDatabaseGraphStore

graph_store = NeptuneDatabaseGraphStore(
    # Use the cluster's read-only endpoint for retrieval workloads
    host="<NEPTUNE_DB>.<AWS_REGION>.neptune.amazonaws.com",
    port=8182,
    node_label="User"
)
storage_context = StorageContext.from_defaults(graph_store=graph_store)
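To confirm that LlamaIndex can reach the database, you can print the schema the graph store extracts for the LLM (a quick sanity check; get_schema is part of the LlamaIndex graph store interface, but treat this snippet as a sketch):

# Print the node labels, properties, and relationships that LlamaIndex
# passes to the LLM during query generation
print(graph_store.get_schema())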

Configure the retriever for Neptune

Now set up integration between the LLM and the Neptune database to perform the relevant sub-graph retrieval over the knowledge graph stored in the database.

First, you need to create an instance of the KnowledgeGraphRAGRetriever class, which converts a given input prompt into retrieval instructions that are run against the Neptune database (the storage_context parameter).

The KnowledgeGraphRAGRetriever provides the option to enrich the retrieved content by converting the natural language input prompt into openCypher query format when you set the with_nl2graphquery parameter to True. This conversion is performed by the LLM configured previously. The graph_traversal_depth parameter controls the depth of the retrieved sub-graph: the higher the value, the deeper the knowledge graph information that is returned.

Additionally, you need to create an instance of the RetrieverQueryEngine class to perform natural language prompting over the KnowledgeGraphRAGRetriever instance (the retriever parameter).

To increase the accuracy of GraphRAG, we apply prompt engineering to the input prompt to extract information related to the configured retrieval node of the knowledge graph (in this case, the User node). The ENTITY_EXTRACT_PROMPT is passed as a parameter to the KnowledgeGraphRAGRetriever class.

In this case, we use the natural language to openCypher conversion (NL2GraphQuery) to increase the variety of retrieved information. We apply prompt engineering to the input prompt to instruct the LLM on good practices of the openCypher query language; the NL2CYPHER_PROMPT is passed as a parameter to the KnowledgeGraphRAGRetriever class.

Because the query generated by the LLM can contain syntax or semantic errors, we set the response_mode parameter of the RetrieverQueryEngine class to refine to increase the accuracy of the response. In this mode, the LLM reasons over both results (the NL2GraphQuery output and the knowledge graph retrieval) independently and generates a single answer based on them, which minimizes the impact of empty or incomplete NL2GraphQuery results.

Set the retrieved sub-graph depth to three hops, which is the maximum number of hops for the knowledge graph used in this sample.

from llama_index.core.prompts.base import (
    PromptTemplate,
    PromptType,
)
from llama_index.core.query_engine import RetrieverQueryEngine
from llama_index.core.retrievers import KnowledgeGraphRAGRetriever

ENTITY_EXTRACT_TMPL_STR = """
A question is provided below. 
Given the question, extract up to {max_keywords} information that identify a given user in the question. Avoid stopwords.
Focus on extracting complete information from question, it can be more than one single word.
---------------------
{question}
---------------------
Provide information in the following comma-separated format: 'KEYWORDS: <information>'
"""

ENTITY_EXTRACT_PROMPT = PromptTemplate(
    ENTITY_EXTRACT_TMPL_STR,
    prompt_type=PromptType.QUERY_KEYWORD_EXTRACT,
)

AMAZON_NEPTUNE_NL2CYPHER_PROMPT_TMPL_STR = """
Create an **Amazon Neptune flavor Cypher query** based on the provided relationship paths and a question.
The query should try its best to answer the question with the given graph schema.
The query should follow this guidance:
- Fully qualify property references with the node's label.
```
// Incorrect
MATCH (p:person)-[:follow]->(:person) RETURN p.name
// Correct
MATCH (p:person)-[:follow]->(i:person) RETURN i.name
```
- Strictly follow the relationship on schema:
Given the relationship ['(:`Art`)-[:`BY_ARTIST`]->(:`Artist`)']:
```
// Incorrect
MATCH (a:Artist)-[:BY_ARTIST]->(t:Art)
RETURN DISTINCT t
// Correct
MATCH (a:Art)-[:BY_ARTIST]->(t:Artist)
RETURN DISTINCT t
```
- Follow single direction (from left to right) query model:
```
// Incorrect
MATCH (a:Artist)<-[:BY_ARTIST]-(t:Art)
RETURN DISTINCT t
// Correct
MATCH (a:Art)-[:BY_ARTIST]->(t:Artist)
RETURN DISTINCT t
```
Given any relationship property, you should use it following the relationship paths provided, respecting the direction of the relationship path.
With this information, construct an Amazon Neptune Cypher query to provide the necessary information for answering the question; only return the plain text query, with no explanation, apologies, or other text.
NOTE:
0. Try to get as much graph data as possible to answer the question
1. Put a limit of 30 results in the query.
---
Question: {query_str}
---
Schema: {schema}
---
Amazon Neptune flavor Query:
"""

NL2CYPHER_PROMPT = PromptTemplate(
    AMAZON_NEPTUNE_NL2CYPHER_PROMPT_TMPL_STR,
    prompt_type=PromptType.TEXT_TO_GRAPH_QUERY,
)

graph_rag_retriever = KnowledgeGraphRAGRetriever(
    storage_context=storage_context,
    entity_extract_template=ENTITY_EXTRACT_PROMPT,
    with_nl2graphquery=True,
    graph_query_synthesis_prompt=NL2CYPHER_PROMPT,
    graph_traversal_depth=3
)

query_engine = RetrieverQueryEngine.from_args(
    graph_rag_retriever,
    response_mode="refine"
)

Interact with the knowledge graph

With everything set up, you can now interact with the Amazon Bedrock LLM that will use the retrieved knowledge graph information.

To test the application, we define a prompt that performs a product recommendation for a given user based on user information retrieved from the knowledge graph. This means that by changing the user identification, the result from the LLM will be different and hyper-personalized.

from IPython.display import display, Markdown

response = query_engine.query(
"""
You are a marketing analyst in a Technology retail company, mainly focused on selling notebooks, smartphones and tablets from popular brands.

You need to create hyper-personalized product recommendation for this customer.

Instructions for using the provided information about customer:

1. You should recommend products similar to the device models used by the customer

```
// Example
Given the device: Mozilla/5.0 (x11; Linux Amd64) Apple Web Kit/534.36 (khtml, Like Gecko) Chrome/13.0.766.0 Safari/534.36
The device model is: x11; Linux Amd64
```

2. Web browser information is irrelevant
3. Geographic information should be used to personalize your recommendation
4. Ask for confirmation about contact information

Notes:
- Keep communication friendly and focused on recommending product models
- Include technical details about the recommended product models
- Avoid mentioning your role
- Avoid mentioning you are performing a personalized recommendation
- Conclude by putting yourself available to support customer and answer questions

Customer: <id>USER_ID</id>
""",
)
display(Markdown(f"<b>{response}</b>"))

The first test asks for a recommendation for a user named “Wallis Lamba” with the following sub-graph information:

Edge Type   | Content
By Device   | Mozilla/5.0 (windows Nt 6.0) Apple Web Kit/534.24 (khtml, Like Gecko) Chrome/11.0.696.3 Safari/534.24
By Device   | Mozilla/5.0 (x11; Net Bsd) Apple Web Kit/537.36 (khtml, Like Gecko) Chrome/27.0.1453.116 Safari/537.36
Linked To   | B6def854 F605 4986 A364 Bd29cb5cc8be
Linked To   | 6a62012a 34a7 4b87 Bbe0 3a7919666fe1
Has Phone   | 305 355 XXX
Joined In   | 3/18/2021
Has Email   | Wlambaip@neptunedemos.com
In City     | Miami
Has Address | 82 Westridge Park
The response looks something like the following:

“Hello Wallis,

For your consideration, I would recommend the Microsoft Surface Pro 8 with an Intel Core i7 processor. This tablet runs Windows 11 and offers excellent portability, making it well-suited for the warm climate in Miami. With up to 16 hours of battery life and a vibrant 13″ PixelSense touchscreen display, the Surface Pro 8 is a versatile device for work and entertainment.

Another great option could be the Dell XPS 13 laptop. This sleek and lightweight laptop is available with Windows 10 or Ubuntu Linux and features a stunning 4K InfinityEdge display. The XPS 13 is powered by 11th Gen Intel Core processors and offers impressive performance in a compact form factor.

Please confirm if your contact information – wlambai@neptunedemos.com and 305-355-XXXX – is correct. I’m happy to provide more details on these recommended models or explore other options based on your preferences. Let me know if you need any other assistance or have additional questions.”

The second test asks for a recommendation for a user named “Armand Aglione” with the following sub-graph information:

Edge Type   | Content
By Device   | Mozilla/5.0 (x11; Open Bsd 1386) Apple Web Kit/537.36 (khtml, Like Gecko) Chrome/36.0.1985.125 Safari/537.36
By Device   | Mozilla/5.0 (macintosh; Intel Mac Os X 10 6 8) Apple Web Kit/535.1 (khtml, Like Gecko) Chrome/13.0.782.24 Safari/535.1
Linked To   | 47b43c86 De5d 4494 9ca8 428cbeb7d25e
Linked To   | E4098368 0c45 4672 8d10 4ee34b3a5740
Has Phone   | 903 694 XXX
Joined In   | 2/15/2021
Has Email   | Aaglione4u@neptunedemos.com
In City     | Tyler
Has Address | 59919 Crest Line Center

The response for Armand is different from the response for Wallis:

“Hello Armand,

Since you seem to be using a Macintosh device with an Intel processor, I would recommend considering the latest MacBook models from Apple. The new M1 and M2 chip MacBooks offer excellent performance and battery life. For example, the MacBook Air M2 has an 8-core CPU, 8GB unified memory, and up to 18 hours of battery life.

Given your location in Tyler, the iPad lineup could be a great portable option for on-the-go use. The iPad Air or iPad Pro models would complement your MacBook nicely.

Please confirm if the email aaglione4u@neptunedemos.com and phone number 903-694-XXXX are the best ways to reach you. I’m happy to provide more details or recommendations tailored to your needs. Feel free to ask any questions you may have!”

Cleanup

If you no longer need the resources you created while following along, delete them to avoid incurring additional charges.
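For example, if you created a dedicated Neptune cluster for this walkthrough, you can remove it with the AWS SDK for Python (a sketch with hypothetical identifiers; deleting resources is irreversible, so double-check the names first):

import boto3

neptune = boto3.client("neptune")

# Delete the instance first, then the cluster (identifiers are hypothetical)
neptune.delete_db_instance(
    DBInstanceIdentifier="customer360-instance",
    SkipFinalSnapshot=True,
)
neptune.delete_db_cluster(
    DBClusterIdentifier="customer360-cluster",
    SkipFinalSnapshot=True,
)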

Conclusion

Implementing GraphRAG offers a powerful approach to using the strengths of both LLMs and structured knowledge bases. By integrating the natural language understanding capabilities of language models with the factual and contextual information contained in knowledge graphs, GraphRAG enables the generation of highly accurate and informative responses. You can use Amazon Bedrock and Amazon Neptune to facilitate the implementation of GraphRAG and increase the capabilities of generative AI-powered applications.

Give it a try and let us know your feedback in the comments section.


About the author

Matheus Duarte Dias is a Data Architect at Amazon Web Services. He works with customers to deliver business results by implementing analytics and AI/ML solutions.