diff --git a/docs/examples/aws_example.mdx b/docs/examples/aws_example.mdx index a9a5047378..57c45ab4c8 100644 --- a/docs/examples/aws_example.mdx +++ b/docs/examples/aws_example.mdx @@ -1,17 +1,17 @@ --- -title: AWS Bedrock and AOSS +title: "Amazon Stack: AWS Bedrock, AOSS, and Neptune Analytics" --- -This example demonstrates how to configure and use the `mem0ai` SDK with **AWS Bedrock** and **OpenSearch Service (AOSS)** for persistent memory capabilities in Python. +This example demonstrates how to configure and use the `mem0ai` SDK with **AWS Bedrock**, **OpenSearch Service (AOSS)**, and **AWS Neptune Analytics** for persistent memory capabilities in Python. ## Installation -Install the required dependencies: +Install the required dependencies for the Amazon stack, including **boto3**, **opensearch-py**, and **langchain-aws**: ```bash -pip install mem0ai boto3 opensearch-py +pip install "mem0ai[graph,extras]" ``` ## Environment Setup @@ -34,11 +34,15 @@ print(os.environ['AWS_SECRET_ACCESS_KEY']) ## Configuration and Usage -This sets up Mem0 with AWS Bedrock for embeddings and LLM, and OpenSearch as the vector store. +This sets up Mem0 with: +- [AWS Bedrock for LLM](https://docs.mem0.ai/components/llms/models/aws_bedrock) +- [AWS Bedrock for embeddings](https://docs.mem0.ai/components/embedders/models/aws_bedrock#aws-bedrock) +- [OpenSearch as the vector store](https://docs.mem0.ai/components/vectordbs/dbs/opensearch) +- [Neptune Analytics as your graph store](https://docs.mem0.ai/open-source/graph_memory/overview#initialize-neptune-analytics) 
```python import boto3 -from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth +from opensearchpy import RequestsHttpConnection, AWSV4SignerAuth from mem0.memory.main import Memory region = 'us-west-2' @@ -56,7 +60,7 @@ config = { "llm": { "provider": "aws_bedrock", "config": { - "model": "anthropic.claude-3-5-haiku-20241022-v1:0", + "model": "us.anthropic.claude-3-7-sonnet-20250219-v1:0", "temperature": 0.1, "max_tokens": 2000 } @@ -68,21 +72,29 @@ config = { "host": "your-opensearch-domain.us-west-2.es.amazonaws.com", "port": 443, "http_auth": auth, - "embedding_model_dims": 1024, "connection_class": RequestsHttpConnection, "pool_maxsize": 20, "use_ssl": True, - "verify_certs": True + "verify_certs": True, + "embedding_model_dims": 1024, } - } + }, + "graph_store": { + "provider": "neptune", + "config": { + "endpoint": "neptune-graph://my-graph-identifier", + }, + }, } -# Initialize memory system +# Initialize the memory system m = Memory.from_config(config) ``` ## Usage +For a complete walkthrough, see the [notebook example](https://github.com/mem0ai/mem0/blob/main/examples/graph-db-demo/neptune-example.ipynb). + #### Add a memory: ```python @@ -117,4 +129,4 @@ memory = m.get(memory_id) ## Conclusion -With Mem0 and AWS services like Bedrock and OpenSearch, you can build intelligent AI companions that remember, adapt, and personalize their responses over time. This makes them ideal for long-term assistants, tutors, or support bots with persistent memory and natural conversation abilities. +With Mem0 and AWS services like Bedrock, OpenSearch, and Neptune Analytics, you can build intelligent AI companions that remember, adapt, and personalize their responses over time. This makes them ideal for long-term assistants, tutors, or support bots with persistent memory and natural conversation abilities. 
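Reviewer note: the `graph_store` endpoint in this patch follows a `neptune-graph://<graph-identifier>` scheme. As an illustration only — this helper is hypothetical and not part of the `mem0ai` SDK — a config-time sanity check for the identifier could look like:

```python
# Hypothetical helper, NOT part of the mem0ai SDK: extract and validate the
# graph identifier from a "neptune-graph://<graph-identifier>" endpoint.
from urllib.parse import urlparse


def neptune_graph_id(endpoint: str) -> str:
    """Return the graph identifier, or raise if the scheme is wrong."""
    parsed = urlparse(endpoint)
    if parsed.scheme != "neptune-graph" or not parsed.netloc:
        raise ValueError(f"not a neptune-graph:// endpoint: {endpoint!r}")
    return parsed.netloc


print(neptune_graph_id("neptune-graph://my-graph-identifier"))  # my-graph-identifier
```

In practice the identifier comes from your own Neptune Analytics instance; the literal `my-graph-identifier` in the docs is a placeholder.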
diff --git a/docs/open-source/graph_memory/overview.mdx b/docs/open-source/graph_memory/overview.mdx index 3e253bb2e4..2c78e27fb2 100644 --- a/docs/open-source/graph_memory/overview.mdx +++ b/docs/open-source/graph_memory/overview.mdx @@ -236,6 +236,8 @@ m = Memory.from_config(config_dict=config) Mem0 now supports Amazon Neptune Analytics as a graph store provider. This integration allows you to use Neptune Analytics for storing and querying graph-based memories. +You can use Neptune Analytics as part of an Amazon tech stack; see [Setup AWS Bedrock, AOSS, and Neptune](https://docs.mem0.ai/examples/aws_example#aws-bedrock-and-aoss). + #### Instance Setup Create an Amazon Neptune Analytics instance in your AWS account following the [AWS documentation](https://docs.aws.amazon.com/neptune-analytics/latest/userguide/get-started.html). @@ -266,12 +268,8 @@ The Neptune memory store uses AWS LangChain Python API to connect to Neptune ins ```python Python from mem0 import Memory -# This example must connect to a neptune-graph instance with 1536 vector dimensions specified. +# The neptune-graph instance must be created with the same vector dimensions as the embedder model. 
config = { - "embedder": { - "provider": "openai", - "config": {"model": "text-embedding-3-large", "embedding_dims": 1536}, - }, "graph_store": { "provider": "neptune", "config": { diff --git a/examples/graph-db-demo/neptune-example-visualization-1.png b/examples/graph-db-demo/neptune-example-visualization-1.png new file mode 100644 index 0000000000..5ebd238009 Binary files /dev/null and b/examples/graph-db-demo/neptune-example-visualization-1.png differ diff --git a/examples/graph-db-demo/neptune-example-visualization-2.png b/examples/graph-db-demo/neptune-example-visualization-2.png new file mode 100644 index 0000000000..e2ed9b23bb Binary files /dev/null and b/examples/graph-db-demo/neptune-example-visualization-2.png differ diff --git a/examples/graph-db-demo/neptune-example-visualization-3.png b/examples/graph-db-demo/neptune-example-visualization-3.png new file mode 100644 index 0000000000..c22eecf0dc Binary files /dev/null and b/examples/graph-db-demo/neptune-example-visualization-3.png differ diff --git a/examples/graph-db-demo/neptune-example-visualization-4.png b/examples/graph-db-demo/neptune-example-visualization-4.png new file mode 100644 index 0000000000..74ce6a6ff1 Binary files /dev/null and b/examples/graph-db-demo/neptune-example-visualization-4.png differ diff --git a/examples/graph-db-demo/neptune-example.ipynb b/examples/graph-db-demo/neptune-example.ipynb index 7cf1181b6a..acbbbdc444 100644 --- a/examples/graph-db-demo/neptune-example.ipynb +++ b/examples/graph-db-demo/neptune-example.ipynb @@ -21,58 +21,59 @@ "\n", "### 1. 
Install Mem0 with Graph Memory support \n", "\n", - "To use Mem0 with Graph Memory support, install it using pip:\n", + "To use Mem0 with Graph Memory support (as well as other Amazon services), install it with pip:\n", "\n", "```bash\n", - "pip install \"mem0ai[graph]\"\n", + "pip install \"mem0ai[graph,extras]\"\n", "```\n", "\n", - "This command installs Mem0 along with the necessary dependencies for graph functionality.\n", + "This command installs Mem0 along with the necessary dependencies for graph functionality (`graph`) and other Amazon dependencies (`extras`).\n", "\n", - "### 2. Connect to Neptune\n", + "### 2. Connect to Amazon services\n", "\n", - "To connect to Amazon Neptune Analytics, you need to configure Neptune with your Amazon profile credentials. The best way to do this is to declare environment variables with IAM permission to your Neptune Analytics instance. The `graph-identifier` for the instance to persist memories needs to be defined in the Mem0 configuration under `\"graph_store\"`, with the `\"neptune\"` provider. 
Note that the Neptune Analytics instance needs to have `vector-search-configuration` defined to meet the needs of the llm model's vector dimensions, see: https://docs.aws.amazon.com/neptune-analytics/latest/userguide/vector-index.html.\n", + "For this sample notebook, configure `mem0ai` with [Amazon Neptune Analytics](https://docs.aws.amazon.com/neptune-analytics/latest/userguide/what-is-neptune-analytics.html) as the graph store, [Amazon OpenSearch Serverless](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-overview.html) as the vector store, and [Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html) for generating embeddings.\n", + "\n", + "Use the following guide for setup details: [Setup AWS Bedrock, AOSS, and Neptune](https://docs.mem0.ai/examples/aws_example#aws-bedrock-and-aoss)\n", + "\n", + "Your configuration should look similar to:\n", "\n", "```python\n", - "embedding_dimensions = 1536\n", - "graph_identifier = \"\" # graph with 1536 dimensions for vector search\n", "config = {\n", " \"embedder\": {\n", - " \"provider\": \"openai\",\n", + " \"provider\": \"aws_bedrock\",\n", " \"config\": {\n", - " \"model\": \"text-embedding-3-large\",\n", - " \"embedding_dims\": embedding_dimensions\n", - " },\n", + " \"model\": \"amazon.titan-embed-text-v2:0\"\n", + " }\n", + " },\n", + " \"llm\": {\n", + " \"provider\": \"aws_bedrock\",\n", + " \"config\": {\n", + " \"model\": \"us.anthropic.claude-3-7-sonnet-20250219-v1:0\",\n", + " \"temperature\": 0.1,\n", + " \"max_tokens\": 2000\n", + " }\n", + " },\n", + " \"vector_store\": {\n", + " \"provider\": \"opensearch\",\n", + " \"config\": {\n", + " \"collection_name\": \"mem0\",\n", + " \"host\": \"your-opensearch-domain.us-west-2.es.amazonaws.com\",\n", + " \"port\": 443,\n", + " \"http_auth\": auth,\n", + " \"connection_class\": RequestsHttpConnection,\n", + " \"pool_maxsize\": 20,\n", + " \"use_ssl\": True,\n", + " \"verify_certs\": True,\n", + 
" \"embedding_model_dims\": 1024,\n", + " }\n", " },\n", " \"graph_store\": {\n", " \"provider\": \"neptune\",\n", " \"config\": {\n", - " \"endpoint\": f\"neptune-graph://{graph_identifier}\",\n", + " \"endpoint\": \"neptune-graph://my-graph-identifier\",\n", " },\n", " },\n", "}\n", - "```\n", - "\n", - "### 3. Configure OpenSearch\n", - "\n", - "We're going to use OpenSearch as our vector store. You can run [OpenSearch from docker image](https://docs.opensearch.org/docs/latest/install-and-configure/install-opensearch/docker/):\n", - "\n", - "```bash\n", - "docker pull opensearchproject/opensearch:2\n", - "```\n", - "\n", - "And verify that it's running with a ``:\n", - "\n", - "```bash\n", - " docker run -d -p 9200:9200 -p 9600:9600 -e \"discovery.type=single-node\" -e \"OPENSEARCH_INITIAL_ADMIN_PASSWORD=\" opensearchproject/opensearch:latest\n", - "\n", - " curl https://localhost:9200 -ku admin:\n", - "```\n", - "\n", - "We're going to connect [OpenSearch using the python client](https://github.com/opensearch-project/opensearch-py):\n", - "\n", - "```bash\n", - "pip install \"opensearch-py\"\n", "```" ] }, @@ -80,24 +81,24 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Configuration\n", + "## Setup\n", "\n", - "Do all the imports and configure OpenAI (enter your OpenAI API key):" + "Import all packages and set up logging:" ] }, { "cell_type": "code", - "metadata": { - "ExecuteTime": { - "end_time": "2025-07-03T20:52:48.330121Z", - "start_time": "2025-07-03T20:52:47.092369Z" - } - }, + "metadata": {}, "source": [ "from mem0 import Memory\n", "import os\n", "import logging\n", "import sys\n", + "import boto3\n", + "from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth\n", + "from dotenv import load_dotenv\n", + "\n", + "load_dotenv()\n", "\n", "logging.getLogger(\"mem0.graphs.neptune.main\").setLevel(logging.DEBUG)\n", "logging.getLogger(\"mem0.graphs.neptune.base\").setLevel(logging.DEBUG)\n", @@ -111,57 +112,73 @@ ")" ], 
"outputs": [], - "execution_count": 1 + "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "Setup the Mem0 configuration using:\n", - "- openai as the embedder\n", + "- Amazon Bedrock as the embedder\n", "- Amazon Neptune Analytics instance as a graph store\n", "- OpenSearch as the vector store" ] }, { "cell_type": "code", - "metadata": { - "ExecuteTime": { - "end_time": "2025-07-03T20:52:50.958741Z", - "start_time": "2025-07-03T20:52:50.955127Z" - } - }, + "metadata": {}, "source": [ + "bedrock_embedder_model = \"amazon.titan-embed-text-v2:0\"\n", + "bedrock_llm_model = \"us.anthropic.claude-3-7-sonnet-20250219-v1:0\"\n", + "embedding_model_dims = 1024\n", + "\n", "graph_identifier = os.environ.get(\"GRAPH_ID\")\n", - "opensearch_username = os.environ.get(\"OS_USERNAME\")\n", - "opensearch_password = os.environ.get(\"OS_PASSWORD\")\n", + "\n", + "opensearch_host = os.environ.get(\"OS_HOST\")\n", + "opensearch_port = os.environ.get(\"OS_PORT\")\n", + "\n", + "credentials = boto3.Session().get_credentials()\n", + "region = os.environ.get(\"AWS_REGION\")\n", + "auth = AWSV4SignerAuth(credentials, region)\n", + "\n", "config = {\n", " \"embedder\": {\n", - " \"provider\": \"openai\",\n", - " \"config\": {\"model\": \"text-embedding-3-large\", \"embedding_dims\": 1536},\n", + " \"provider\": \"aws_bedrock\",\n", + " \"config\": {\n", + " \"model\": bedrock_embedder_model,\n", + " }\n", " },\n", - " \"graph_store\": {\n", - " \"provider\": \"neptune\",\n", + " \"llm\": {\n", + " \"provider\": \"aws_bedrock\",\n", " \"config\": {\n", - " \"endpoint\": f\"neptune-graph://{graph_identifier}\",\n", - " },\n", + " \"model\": bedrock_llm_model,\n", + " \"temperature\": 0.1,\n", + " \"max_tokens\": 2000\n", + " }\n", " },\n", " \"vector_store\": {\n", " \"provider\": \"opensearch\",\n", " \"config\": {\n", - " \"collection_name\": \"vector_store\",\n", - " \"host\": \"localhost\",\n", - " \"port\": 9200,\n", - " \"user\": opensearch_username,\n", - " \"password\": opensearch_password,\n", - " \"use_ssl\": False,\n", - " \"verify_certs\": False,\n", + " \"collection_name\": \"mem0ai_vector_store\",\n", + " \"host\": opensearch_host,\n", + " \"port\": opensearch_port,\n", + " \"http_auth\": auth,\n", + " \"embedding_model_dims\": embedding_model_dims,\n", + " \"use_ssl\": True,\n", + " \"verify_certs\": True,\n", + " \"connection_class\": RequestsHttpConnection,\n", + " },\n", + " },\n", + " \"graph_store\": {\n", + " \"provider\": \"neptune\",\n", + " \"config\": {\n", + " \"endpoint\": f\"neptune-graph://{graph_identifier}\",\n", " },\n", " },\n", "}" ], "outputs": [], - "execution_count": 2 + "execution_count": null }, { "cell_type": "markdown", @@ -174,12 +191,7 @@ }, { "cell_type": "code", - "metadata": { - "ExecuteTime": { - "end_time": "2025-07-03T20:52:55.655673Z", - "start_time": "2025-07-03T20:52:54.141041Z" - } - }, + "metadata": {}, "source": [ "m = Memory.from_config(config_dict=config)\n", "\n", @@ -188,31 +200,8 @@ "\n", "m.delete_all(user_id=user_id)" ], - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "WARNING - Creating index vector_store, it might take 1-2 minutes...\n", - "WARNING - Creating index mem0migrations, it might take 1-2 minutes...\n", - "DEBUG - delete_all query=\n", - " MATCH (n {user_id: $user_id})\n", - " DETACH DELETE n\n", - " \n" - ] - }, - { - "data": { - "text/plain": [ - "{'message': 'Memories deleted successfully!'}" - ] - }, - "execution_count": 3, - "metadata": {}, - "output_type": "execute_result" - } - ], - "execution_count": 3 + "outputs": [], + "execution_count": null }, { "cell_type": "markdown", @@ -225,12 +214,7 @@ }, { "cell_type": "code", - "metadata": { - "ExecuteTime": { - "end_time": "2025-07-03T20:53:05.338249Z", - "start_time": "2025-07-03T20:52:57.528210Z" - } - }, + "metadata": {}, "source": [ "messages = [\n", " {\n", @@ -249,120 +233,36 @@ "for e in all_results[\"relations\"]:\n", " print(f\"edge \\\"{e['source']}\\\" 
--{e['relationship']}--> \\\"{e['target']}\\\"\")" ], - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "DEBUG - Extracted entities: [{'source': 'alice', 'relationship': 'plans_to_watch', 'destination': 'movie'}]\n", - "DEBUG - _search_graph_db\n", - " query=\n", - " MATCH (n )\n", - " WHERE n.user_id = $user_id\n", - " WITH n, $n_embedding as n_embedding\n", - " CALL neptune.algo.vectors.distanceByEmbedding(\n", - " n_embedding,\n", - " n,\n", - " {metric:\"CosineSimilarity\"}\n", - " ) YIELD distance\n", - " WITH n, distance as similarity\n", - " WHERE similarity >= $threshold\n", - " CALL {\n", - " WITH n\n", - " MATCH (n)-[r]->(m) \n", - " RETURN n.name AS source, id(n) AS source_id, type(r) AS relationship, id(r) AS relation_id, m.name AS destination, id(m) AS destination_id\n", - " UNION ALL\n", - " WITH n\n", - " MATCH (m)-[r]->(n) \n", - " RETURN m.name AS source, id(m) AS source_id, type(r) AS relationship, id(r) AS relation_id, n.name AS destination, id(n) AS destination_id\n", - " }\n", - " WITH distinct source, source_id, relationship, relation_id, destination, destination_id, similarity\n", - " RETURN source, source_id, relationship, relation_id, destination, destination_id, similarity\n", - " ORDER BY similarity DESC\n", - " LIMIT $limit\n", - " \n", - "DEBUG - Deleted relationships: []\n", - "DEBUG - _search_source_node\n", - " query=\n", - " MATCH (source_candidate )\n", - " WHERE source_candidate.user_id = $user_id \n", - "\n", - " WITH source_candidate, $source_embedding as v_embedding\n", - " CALL neptune.algo.vectors.distanceByEmbedding(\n", - " v_embedding,\n", - " source_candidate,\n", - " {metric:\"CosineSimilarity\"}\n", - " ) YIELD distance\n", - " WITH source_candidate, distance AS cosine_similarity\n", - " WHERE cosine_similarity >= $threshold\n", - "\n", - " WITH source_candidate, cosine_similarity\n", - " ORDER BY cosine_similarity DESC\n", - " LIMIT 1\n", - "\n", - " RETURN id(source_candidate), 
cosine_similarity\n", - " \n", - "DEBUG - _search_destination_node\n", - " query=\n", - " MATCH (destination_candidate )\n", - " WHERE destination_candidate.user_id = $user_id\n", - " \n", - " WITH destination_candidate, $destination_embedding as v_embedding\n", - " CALL neptune.algo.vectors.distanceByEmbedding(\n", - " v_embedding,\n", - " destination_candidate, \n", - " {metric:\"CosineSimilarity\"}\n", - " ) YIELD distance\n", - " WITH destination_candidate, distance AS cosine_similarity\n", - " WHERE cosine_similarity >= $threshold\n", - "\n", - " WITH destination_candidate, cosine_similarity\n", - " ORDER BY cosine_similarity DESC\n", - " LIMIT 1\n", - " \n", - " RETURN id(destination_candidate), cosine_similarity\n", - " \n", - "DEBUG - _add_entities:\n", - " destination_node_search_result=[]\n", - " source_node_search_result=[]\n", - " query=\n", - " MERGE (n :`__User__` {name: $source_name, user_id: $user_id})\n", - " ON CREATE SET n.created = timestamp(),\n", - " n.mentions = 1\n", - " \n", - " ON MATCH SET n.mentions = coalesce(n.mentions, 0) + 1\n", - " WITH n, $source_embedding as source_embedding\n", - " CALL neptune.algo.vectors.upsert(n, source_embedding)\n", - " WITH n\n", - " MERGE (m :`entertainment` {name: $dest_name, user_id: $user_id})\n", - " ON CREATE SET m.created = timestamp(),\n", - " m.mentions = 1\n", - " \n", - " ON MATCH SET m.mentions = coalesce(m.mentions, 0) + 1\n", - " WITH n, m, $dest_embedding as dest_embedding\n", - " CALL neptune.algo.vectors.upsert(m, dest_embedding)\n", - " WITH n, m\n", - " MERGE (n)-[rel:plans_to_watch]->(m)\n", - " ON CREATE SET rel.created = timestamp(), rel.mentions = 1\n", - " ON MATCH SET rel.mentions = coalesce(rel.mentions, 0) + 1\n", - " RETURN n.name AS source, type(rel) AS relationship, m.name AS target\n", - " \n", - "DEBUG - Retrieved 1 relationships\n", - "node \"Planning to watch a movie tonight\": [hash: bf55418607cfdca4afa311b5fd8496bd]\n", - "edge \"alice\" --plans_to_watch--> \"movie\"\n" 
- ] - } - ], - "execution_count": 4 + "outputs": [], + "execution_count": null + }, + { + "metadata": {}, + "cell_type": "markdown", + "source": [ + "## Graph Explorer Visualization\n", + "\n", + "You can visualize the graph using a Graph Explorer connection to Neptune Analytics in Neptune Notebooks in the AWS console. See [Using Amazon Neptune with graph notebooks](https://docs.aws.amazon.com/neptune/latest/userguide/graph-notebooks.html) for instructions on how to set up a Neptune Notebook with Graph Explorer.\n", + "\n", + "Once the graph has been generated, open Neptune > Notebooks in the console and click Actions > Open Graph Explorer. This will automatically connect to the Neptune Analytics graph that was provided in the notebook setup.\n", + "\n", + "Once in Graph Explorer, open Connections and send all the available nodes and edges to Explorer, then open Graph Explorer to view the nodes and edges in the graph.\n", + "\n", + "### Graph Explorer Visualization Example\n", + "\n", + "_Note that the visualization given below represents only a single example of the possible results generated by the LLM._\n", + "\n", + "Visualization for the relationship:\n", + "```\n", + "\"alice\" --plans_to_watch--> \"movie\"\n", + "```\n", + "\n", + "![neptune-example-visualization-1.png](./neptune-example-visualization-1.png)" + ] }, { "cell_type": "code", - "metadata": { - "ExecuteTime": { - "end_time": "2025-07-03T20:53:17.755933Z", - "start_time": "2025-07-03T20:53:11.568772Z" - } - }, + "metadata": {}, "source": [ "messages = [\n", " {\n", @@ -381,118 +281,30 @@ "for e in all_results[\"relations\"]:\n", " print(f\"edge \\\"{e['source']}\\\" --{e['relationship']}--> \\\"{e['target']}\\\"\")" ], - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "DEBUG - Extracted entities: [{'source': 'thriller_movies', 'relationship': 'is_engaging', 'destination': 'thriller_movies'}]\n", - "DEBUG - _search_graph_db\n", - " 
query=\n", - " MATCH (n )\n", - " WHERE n.user_id = $user_id\n", - " WITH n, $n_embedding as n_embedding\n", - " CALL neptune.algo.vectors.distanceByEmbedding(\n", - " n_embedding,\n", - " n,\n", - " {metric:\"CosineSimilarity\"}\n", - " ) YIELD distance\n", - " WITH n, distance as similarity\n", - " WHERE similarity >= $threshold\n", - " CALL {\n", - " WITH n\n", - " MATCH (n)-[r]->(m) \n", - " RETURN n.name AS source, id(n) AS source_id, type(r) AS relationship, id(r) AS relation_id, m.name AS destination, id(m) AS destination_id\n", - " UNION ALL\n", - " WITH n\n", - " MATCH (m)-[r]->(n) \n", - " RETURN m.name AS source, id(m) AS source_id, type(r) AS relationship, id(r) AS relation_id, n.name AS destination, id(n) AS destination_id\n", - " }\n", - " WITH distinct source, source_id, relationship, relation_id, destination, destination_id, similarity\n", - " RETURN source, source_id, relationship, relation_id, destination, destination_id, similarity\n", - " ORDER BY similarity DESC\n", - " LIMIT $limit\n", - " \n", - "DEBUG - Deleted relationships: []\n", - "DEBUG - _search_source_node\n", - " query=\n", - " MATCH (source_candidate )\n", - " WHERE source_candidate.user_id = $user_id \n", - "\n", - " WITH source_candidate, $source_embedding as v_embedding\n", - " CALL neptune.algo.vectors.distanceByEmbedding(\n", - " v_embedding,\n", - " source_candidate,\n", - " {metric:\"CosineSimilarity\"}\n", - " ) YIELD distance\n", - " WITH source_candidate, distance AS cosine_similarity\n", - " WHERE cosine_similarity >= $threshold\n", - "\n", - " WITH source_candidate, cosine_similarity\n", - " ORDER BY cosine_similarity DESC\n", - " LIMIT 1\n", - "\n", - " RETURN id(source_candidate), cosine_similarity\n", - " \n", - "DEBUG - _search_destination_node\n", - " query=\n", - " MATCH (destination_candidate )\n", - " WHERE destination_candidate.user_id = $user_id\n", - " \n", - " WITH destination_candidate, $destination_embedding as v_embedding\n", - " CALL 
neptune.algo.vectors.distanceByEmbedding(\n", - " v_embedding,\n", - " destination_candidate, \n", - " {metric:\"CosineSimilarity\"}\n", - " ) YIELD distance\n", - " WITH destination_candidate, distance AS cosine_similarity\n", - " WHERE cosine_similarity >= $threshold\n", - "\n", - " WITH destination_candidate, cosine_similarity\n", - " ORDER BY cosine_similarity DESC\n", - " LIMIT 1\n", - " \n", - " RETURN id(destination_candidate), cosine_similarity\n", - " \n", - "DEBUG - _add_entities:\n", - " destination_node_search_result=[{'id(destination_candidate)': '67c49d52-e305-47fe-9fce-2cd5adc5d83c0', 'cosine_similarity': 0.999999}]\n", - " source_node_search_result=[{'id(source_candidate)': '67c49d52-e305-47fe-9fce-2cd5adc5d83c0', 'cosine_similarity': 0.999999}]\n", - " query=\n", - " MATCH (source)\n", - " WHERE id(source) = $source_id\n", - " SET source.mentions = coalesce(source.mentions, 0) + 1\n", - " WITH source\n", - " MATCH (destination)\n", - " WHERE id(destination) = $destination_id\n", - " SET destination.mentions = coalesce(destination.mentions) + 1\n", - " MERGE (source)-[r:is_engaging]->(destination)\n", - " ON CREATE SET \n", - " r.created_at = timestamp(),\n", - " r.updated_at = timestamp(),\n", - " r.mentions = 1\n", - " ON MATCH SET r.mentions = coalesce(r.mentions, 0) + 1\n", - " RETURN source.name AS source, type(r) AS relationship, destination.name AS target\n", - " \n", - "DEBUG - Retrieved 3 relationships\n", - "node \"Planning to watch a movie tonight\": [hash: bf55418607cfdca4afa311b5fd8496bd]\n", - "edge \"thriller_movies\" --is_a_type_of--> \"movie\"\n", - "edge \"alice\" --plans_to_watch--> \"movie\"\n", - "edge \"thriller_movies\" --is_engaging--> \"thriller_movies\"\n" - ] - } - ], - "execution_count": 6 + "outputs": [], + "execution_count": null + }, + { + "metadata": {}, + "cell_type": "markdown", + "source": [ + "### Graph Explorer Visualization Example\n", + "\n", + "_Note that the visualization given below represents only a single 
example of the possible results generated by the LLM._\n", + "\n", + "Visualization for the relationship:\n", + "```\n", + "\"alice\" --plans_to_watch--> \"movie\"\n", + "\"thriller\" --type_of--> \"movie\"\n", + "\"movie\" --can_be--> \"engaging\"\n", + "```\n", + "\n", + "![neptune-example-visualization-2.png](./neptune-example-visualization-2.png)" + ] }, { "cell_type": "code", - "metadata": { - "jupyter": { - "is_executing": true - }, - "ExecuteTime": { - "start_time": "2025-07-03T20:53:17.775656Z" - } - }, + "metadata": {}, "source": [ "messages = [\n", " {\n", @@ -514,6 +326,26 @@ "outputs": [], "execution_count": null }, + { + "metadata": {}, + "cell_type": "markdown", + "source": [ + "### Graph Explorer Visualization Example\n", + "\n", + "_Note that the visualization given below represents only a single example of the possible results generated by the LLM._\n", + "\n", + "Visualization for the relationship:\n", + "```\n", + "\"alice\" --dislikes--> \"thriller_movies\"\n", + "\"alice\" --loves--> \"sci-fi_movies\"\n", + "\"alice\" --plans_to_watch--> \"movie\"\n", + "\"thriller\" --type_of--> \"movie\"\n", + "\"movie\" --can_be--> \"engaging\"\n", + "```\n", + "\n", + "![neptune-example-visualization-3.png](./neptune-example-visualization-3.png)" + ] + }, { "cell_type": "code", "metadata": {}, @@ -538,11 +370,36 @@ "outputs": [], "execution_count": null }, + { + "metadata": {}, + "cell_type": "markdown", + "source": [ + "### Graph Explorer Visualization Example\n", + "\n", + "_Note that the visualization given below represents only a single example of the possible results generated by the LLM._\n", + "\n", + "Visualization for the relationship:\n", + "```\n", + "\"alice\" --recommends--> \"sci-fi\"\n", + "\"alice\" --dislikes--> \"thriller_movies\"\n", + "\"alice\" --loves--> \"sci-fi_movies\"\n", + "\"alice\" --plans_to_watch--> \"movie\"\n", + "\"alice\" --avoids--> \"thriller\"\n", + "\"thriller\" --type_of--> \"movie\"\n", + "\"movie\" --can_be--> 
\"engaging\"\n", + "\"sci-fi\" --type_of--> \"movie\"\n", + "```\n", + "\n", + "![neptune-example-visualization-4.png](./neptune-example-visualization-4.png)" + ] + }, { "cell_type": "markdown", "metadata": {}, "source": [ - "## Search memories" + "## Search memories\n", + "\n", + "Search all memories for \"what does alice love?\". Since \"alice\" is the user, this will search for relationships that fit the user's love of \"sci-fi\" movies and dislike of \"thriller\" movies." ] }, { @@ -562,11 +419,20 @@ "cell_type": "code", "metadata": {}, "source": [ - "m.delete_all(\"user_id\")\n", + "m.delete_all(user_id)\n", "m.reset()" ], "outputs": [], "execution_count": null + }, + { + "metadata": {}, + "cell_type": "markdown", + "source": [ + "## Conclusion\n", + "\n", + "In this example, we demonstrated how an AWS tech stack can be used to store and retrieve memory context: Bedrock LLM models interpret the given conversations, OpenSearch stores text chunks with vector embeddings, and Neptune Analytics stores the text chunks as a graph with relationship entities." + ] } ], "metadata": {
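Reviewer note: the `relations` the notebook prints are plain source/relationship/target triples. As a purely illustrative sketch — mem0's graph search is embedding-based, so this is not the SDK's implementation — filtering such triples for one node can be shown with the edges listed in the patch's visualization examples:

```python
# Illustrative only: filter literal triples like the ones the notebook
# prints, and format them the way the notebook's edge output looks.
relations = [
    {"source": "alice", "relationship": "loves", "target": "sci-fi_movies"},
    {"source": "alice", "relationship": "dislikes", "target": "thriller_movies"},
    {"source": "thriller", "relationship": "type_of", "target": "movie"},
]


def edges_from(source: str, triples: list[dict]) -> list[str]:
    """Collect formatted edges whose source node matches."""
    return [
        f'"{t["source"]}" --{t["relationship"]}--> "{t["target"]}"'
        for t in triples
        if t["source"] == source
    ]


for edge in edges_from("alice", relations):
    print(edge)
```

A real search would rank candidates by embedding similarity rather than exact string match; this only illustrates the shape of the returned data.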