# claude-persistent-memory

A lightweight system for giving Claude persistent memory across conversations, built entirely from free-tier Google tools and Claude's MCP integrations.
Claude cannot natively retain context between sessions. This architecture compensates by encoding session history into an external queryable store (NotebookLM) and instructing Claude to retrieve it at the start of every session. No custom infrastructure, API keys, or paid services are required beyond what is already available.
## Architecture

Four layers work together:
- Claude Project Store — static reference documents that Claude reads automatically at session start via RAG.
- Google Drive — the primary writable store: session transcripts are dropped here, and five Google Docs accumulate session history.
- NotebookLM — two notebooks index the Google Docs and give Claude a semantic query interface over session history via MCP.
- Google Colab + Gemini — a pipeline script that processes raw transcripts: archives them, summarizes via Gemini, extracts topic tags, parses directives, and writes to the Google Docs.
After each session, you export the transcript, run the Colab pipeline, then run a local refresh script to sync NotebookLM. The next session, Claude picks up where you left off.
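To make that flow concrete, here is a minimal, runnable sketch of the post-session processing shape. Everything in it is illustrative: the function names and the truncation stand-in for the Gemini call are not the notebook's real API, whose actual logic lives in `scripts/gemini_summarizer.ipynb`.

```python
# Illustrative sketch of the pipeline's shape only; the real implementation
# is scripts/gemini_summarizer.ipynb.
import shutil
from pathlib import Path

LOGS = Path("claude_logs")       # where exported transcripts land
ARCHIVE = LOGS / "log_archive"   # where processed transcripts are moved

def summarize(text: str) -> str:
    # Stand-in for the Gemini summarization call.
    return text[:500]

def process(transcript: Path) -> None:
    raw = transcript.read_text(encoding="utf-8")
    summary = summarize(raw)
    ARCHIVE.mkdir(parents=True, exist_ok=True)
    shutil.move(str(transcript), str(ARCHIVE / transcript.name))  # archive step
    print(summary)  # the real pipeline appends to the five Google Docs instead

for path in sorted(LOGS.glob("*.md")):
    process(path)
```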
See `docs/system-overview.md` for full architecture details.
## Repository layout

```
claude-persistent-memory/
├── README.md                     ← this file; setup guide
├── docs/
│   ├── system-overview.md        ← full architecture reference
│   ├── directive-system.md       ← %%DIRECTIVE: syntax and conventions
│   └── claude-project-readme.md  ← the README that goes into your Claude Project store
└── scripts/
    ├── gemini_summarizer.ipynb   ← Colab pipeline notebook
    └── notebooklm_refresh.py     ← local post-processing script
```
## Prerequisites

- A Claude account (free tier or subscription)
- A Google account with access to Google Drive, Google Colab, and NotebookLM
- The NotebookLM MCP server installed and connected in your Claude settings
- The AI Chat Exporter browser extension for exporting session transcripts
- Python 3.x with `notebooklm-py` installed locally (`pip install notebooklm-py`)
## Step 1: Create the Drive folder structure

In your Google Drive (My Drive), create the following folders:

```
My Drive/
├── claude_logs/
│   └── log_archive/
└── Colab Notebooks/
```
## Step 2: Create the five Google Docs

The pipeline writes session history to five Google Docs inside `claude_logs/`. Create them manually now as blank docs, using exactly these names, so they are ready to add as NotebookLM sources in Step 5:

- Claude Session Summaries
- Claude Raw Session Transcripts
- Claude Session Index
- Claude Latest Session
- Claude Session Notes

The pipeline will append to these docs each time it runs. You do not need to add any content to them now.
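If you would rather script this step than click through the Drive UI, the five docs can also be created from a Colab cell with the standard Drive v3 API. This is optional and not part of the repo's scripts; the sketch assumes the `claude_logs/` folder from Step 1 already exists.

```python
# Optional: create the five blank Google Docs programmatically (Drive v3 API).
from google.colab import auth
from googleapiclient.discovery import build

auth.authenticate_user()
drive = build("drive", "v3")

# Locate the claude_logs folder created in Step 1.
folder = drive.files().list(
    q="name='claude_logs' and mimeType='application/vnd.google-apps.folder' and trashed=false",
    fields="files(id)",
).execute()["files"][0]

DOC_NAMES = [
    "Claude Session Summaries",
    "Claude Raw Session Transcripts",
    "Claude Session Index",
    "Claude Latest Session",
    "Claude Session Notes",
]
for name in DOC_NAMES:
    # An empty file with the Google Docs MIME type is a blank doc.
    drive.files().create(body={
        "name": name,
        "mimeType": "application/vnd.google-apps.document",
        "parents": [folder["id"]],
    }).execute()
```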
## Step 3: Upload the Colab notebook

Upload `scripts/gemini_summarizer.ipynb` to your `Colab Notebooks/` folder in Drive, or open it directly in Colab from the repository and save a copy to Drive.
## Step 4: Create the NotebookLM notebooks

Go to notebooklm.google.com and create two notebooks:

- Session Summaries — this is the primary notebook Claude queries at every session start
- Raw Session Transcripts — used only for deep queries when summaries are insufficient

Note the ID of each notebook from its URL:

```
https://notebooklm.google.com/notebook/<NOTEBOOK_ID>
```
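The ID is just the final path segment of that URL, which is handy if you script anything around it:

```python
# Pull the notebook ID out of a NotebookLM URL (the last path segment).
from urllib.parse import urlparse

def notebook_id(url: str) -> str:
    return urlparse(url).path.rstrip("/").split("/")[-1]

print(notebook_id("https://notebooklm.google.com/notebook/abc123"))  # abc123 (hypothetical ID)
```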
## Step 5: Add the Google Docs as notebook sources

Add the Google Docs you created in Step 2 as live sources in each notebook:
Session Summaries notebook — add four docs:
- Claude Session Summaries
- Claude Session Index
- Claude Latest Session
- Claude Session Notes
Raw Session Transcripts notebook — add one doc:
- Claude Raw Session Transcripts
To add a Google Doc as a source: open the notebook → Add source → Google Drive → select the doc.
## Step 6: Configure the notebook IDs

In `scripts/notebooklm_refresh.py`, replace the two placeholder notebook IDs in the `NOTEBOOKS` config block with your actual IDs from Step 4:

```python
NOTEBOOKS = [
    {"name": "Session Summaries", "id": "YOUR_SESSION_SUMMARIES_NOTEBOOK_ID"},
    {"name": "Raw Session Transcripts", "id": "YOUR_RAW_TRANSCRIPTS_NOTEBOOK_ID"},
]
```

In `docs/claude-project-readme.md`, replace the same two placeholder IDs:

- **Notebook ID:** `YOUR_SESSION_SUMMARIES_NOTEBOOK_ID`
- **Notebook ID:** `YOUR_RAW_TRANSCRIPTS_NOTEBOOK_ID`
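Before you run the refresh script later, a few lines of plain Python (mirroring the config block above, not part of the repo's scripts) can confirm neither placeholder was left behind:

```python
# Fail fast if either placeholder notebook ID is still in the config.
NOTEBOOKS = [
    {"name": "Session Summaries", "id": "YOUR_SESSION_SUMMARIES_NOTEBOOK_ID"},
    {"name": "Raw Session Transcripts", "id": "YOUR_RAW_TRANSCRIPTS_NOTEBOOK_ID"},
]

for nb in NOTEBOOKS:
    if nb["id"].startswith("YOUR_"):
        raise SystemExit(f"Replace the placeholder ID for {nb['name']!r} first.")
```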
## Step 7: Set up the Claude Project

- In Claude, create a new Project.

- In the Project's system prompt, paste the following instruction (replacing the placeholder with your actual Session Summaries notebook ID):

  ```
  Before producing any response in this project, you must first query the
  Session Summaries NotebookLM notebook (ID: YOUR_SESSION_SUMMARIES_NOTEBOOK_ID)
  for the most recent session context. Use the query: "What does the Claude
  Latest Session document say? Return the filename, timestamp, tags, and
  summary." Do not respond to the user's opening statement until this query has
  completed and its result is in your context. If the MCP tool is unavailable,
  stop and inform the user rather than proceeding without context. This applies
  to every session, including sessions where the opening statement appears
  self-contained or simple.
  ```

- Upload the following files from this repository to the Project's file store. The filenames shown here are for reference — Claude identifies documents by content, not filename, so the names used when uploading don't need to match exactly:

  - `docs/claude-project-readme.md`
  - `docs/system-overview.md`
  - `docs/directive-system.md`
## Step 8: Authenticate notebooklm-py

Run the one-time authentication step for `notebooklm-py`:

```
notebooklm login
```

This stores credentials locally. Subsequent runs of `notebooklm_refresh.py` will reuse them.
## Session workflow

Once set up, the workflow for each session is:

- Start a session in your Claude Project. Claude will query NotebookLM for prior context automatically.
- Work normally. Claude can emit `%%DIRECTIVE:` lines to persist notes and preferences across sessions (see `docs/directive-system.md`, and the sketch after this list).
- Export the transcript using the AI Chat Exporter browser extension. Save the `.md` file to `claude_logs/` in Drive.
- Run the Colab pipeline (`gemini_summarizer.ipynb`). It will archive the transcript, summarize it, and write to all five Google Docs.
- Run the refresh script locally:

  ```
  python notebooklm_refresh.py
  ```

  This syncs NotebookLM with the updated Google Docs. The next session will pick up the new context.
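To illustrate the directive mechanism mentioned in the workflow above, here is a minimal sketch of how `%%DIRECTIVE:` lines could be extracted from a transcript. The directive shown is hypothetical; the canonical syntax and conventions live in `docs/directive-system.md`.

```python
# Extract %%DIRECTIVE: lines from a transcript so they can be persisted.
# The sample directive below is hypothetical, not canonical syntax; see
# docs/directive-system.md for the real conventions.
import re

transcript = """\
Claude: Noted. I'll keep that preference.
%%DIRECTIVE: remember that the user prefers summaries under 200 words
"""

directives = re.findall(r"^%%DIRECTIVE:\s*(.+)$", transcript, flags=re.MULTILINE)
print(directives)  # ['remember that the user prefers summaries under 200 words']
```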
## License

MIT