pjmattingly/Claude-persistent-memory

Claude Persistent Memory

A lightweight system for giving Claude persistent memory across conversations, built entirely from free-tier Google tools and Claude's MCP integrations.

Claude cannot natively retain context between sessions. This architecture compensates by encoding session history into an external queryable store (NotebookLM) and instructing Claude to retrieve it at the start of every session. No custom infrastructure, API keys, or paid services are required beyond what is already available.


How it works

Four layers work together:

  1. Claude Project Store — static reference documents that Claude reads automatically at session start via RAG.
  2. Google Drive — the primary writable store: session transcripts are dropped here, and five Google Docs accumulate session history.
  3. NotebookLM — two notebooks index the Google Docs and give Claude a semantic query interface over session history via MCP.
  4. Google Colab + Gemini — a pipeline script that processes raw transcripts: archives them, summarizes via Gemini, extracts topic tags, parses directives, and writes to the Google Docs.

After each session, you export the transcript, run the Colab pipeline, and then run a local refresh script to sync NotebookLM. In the next session, Claude picks up where you left off.

See docs/system-overview.md for full architecture details.


Repository contents

claude-persistent-memory/
├── README.md                        ← this file; setup guide
├── docs/
│   ├── system-overview.md           ← full architecture reference
│   ├── directive-system.md          ← %%DIRECTIVE: syntax and conventions
│   └── claude-project-readme.md     ← the README that goes into your Claude Project store
└── scripts/
    ├── gemini_summarizer.ipynb      ← Colab pipeline notebook
    └── notebooklm_refresh.py        ← local post-processing script

Prerequisites

  • A Claude account (free tier or subscription)
  • A Google account with access to Google Drive, Google Colab, and NotebookLM
  • The NotebookLM MCP server installed and connected in your Claude settings
  • The AI Chat Exporter browser extension for exporting session transcripts
  • Python 3.x with notebooklm-py installed locally (pip install notebooklm-py)

Setup

1. Create the Google Drive folder structure

In your Google Drive (My Drive), create the following:

My Drive/
├── claude_logs/
│   └── log_archive/
└── Colab Notebooks/
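If you prefer to script this step, the same structure can be created from a Colab cell after mounting Drive. A minimal sketch; the `BASE` path below is a placeholder, and the `/content/drive/MyDrive` mount point is Colab's default:

```python
from pathlib import Path

# In Colab, mount Drive first:
#   from google.colab import drive
#   drive.mount('/content/drive')
# after which "My Drive" appears at /content/drive/MyDrive.
BASE = Path("MyDrive")  # placeholder; use Path("/content/drive/MyDrive") in Colab

# Create the folders the pipeline expects; safe to re-run.
for folder in ("claude_logs/log_archive", "Colab Notebooks"):
    (BASE / folder).mkdir(parents=True, exist_ok=True)
```

Creating the folders manually in the Drive UI works just as well; the script is only a convenience for re-running setup.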

2. Create the five Google Docs

The pipeline writes session history to five Google Docs inside claude_logs/. Create them manually now so they are ready to add as NotebookLM sources in Step 5.

In Google Drive, create the following blank Google Docs inside the claude_logs/ folder, using exactly these names:

  • Claude Session Summaries
  • Claude Raw Session Transcripts
  • Claude Session Index
  • Claude Latest Session
  • Claude Session Notes

The pipeline will append to these docs each time it runs. You do not need to add any content to them now.
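Since the setup depends on these exact names, it can help to keep them in one constant if you write any helper scripts of your own. A hypothetical sketch (not part of the repository's scripts):

```python
# The five Google Docs the pipeline writes to; spelling must match exactly.
SESSION_DOCS = [
    "Claude Session Summaries",
    "Claude Raw Session Transcripts",
    "Claude Session Index",
    "Claude Latest Session",
    "Claude Session Notes",
]

# Sanity check: five distinct names.
assert len(set(SESSION_DOCS)) == 5
```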

3. Add the Colab notebook to Drive

Upload scripts/gemini_summarizer.ipynb to your Colab Notebooks/ folder in Drive, or open it directly in Colab from the repository and save a copy to Drive.

4. Create two NotebookLM notebooks

Go to notebooklm.google.com and create two notebooks:

  • Session Summaries — this is the primary notebook Claude queries at every session start
  • Raw Session Transcripts — used only for deep queries when summaries are insufficient

Note the ID of each notebook from its URL: https://notebooklm.google.com/notebook/<NOTEBOOK_ID>
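If you script your configuration, the ID can be pulled out of the URL with a small helper. A sketch; the assumption that IDs contain only word characters and hyphens is mine, not documented by NotebookLM:

```python
import re

def notebook_id(url: str) -> str:
    """Extract <NOTEBOOK_ID> from a NotebookLM notebook URL."""
    m = re.search(r"notebooklm\.google\.com/notebook/([\w-]+)", url)
    if m is None:
        raise ValueError(f"not a NotebookLM notebook URL: {url!r}")
    return m.group(1)

print(notebook_id("https://notebooklm.google.com/notebook/abc123-XYZ"))  # abc123-XYZ
```

Copying the ID by hand from the address bar is equally fine; you only need it twice (Step 6).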

5. Add Google Docs as sources in NotebookLM

Add the Google Docs you created in Step 2 as live sources in each notebook:

Session Summaries notebook — add four docs:

  • Claude Session Summaries
  • Claude Session Index
  • Claude Latest Session
  • Claude Session Notes

Raw Session Transcripts notebook — add one doc:

  • Claude Raw Session Transcripts

To add a Google Doc as a source: open the notebook → Add source → Google Drive → select the doc.

6. Configure the scripts

scripts/notebooklm_refresh.py — replace the two placeholder notebook IDs in the NOTEBOOKS config block with your actual IDs from Step 4:

NOTEBOOKS = [
    {"name": "Session Summaries",       "id": "YOUR_SESSION_SUMMARIES_NOTEBOOK_ID"},
    {"name": "Raw Session Transcripts", "id": "YOUR_RAW_TRANSCRIPTS_NOTEBOOK_ID"},
]
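A quick guard against forgetting this step is to fail fast while a placeholder is still present. This check is not in the shipped script; it is a sketch you could add near the top of `notebooklm_refresh.py`:

```python
NOTEBOOKS = [
    {"name": "Session Summaries",       "id": "YOUR_SESSION_SUMMARIES_NOTEBOOK_ID"},
    {"name": "Raw Session Transcripts", "id": "YOUR_RAW_TRANSCRIPTS_NOTEBOOK_ID"},
]

def check_ids(notebooks):
    """Return the names of notebooks whose IDs still look like placeholders."""
    return [nb["name"] for nb in notebooks if nb["id"].startswith("YOUR_")]

unconfigured = check_ids(NOTEBOOKS)
if unconfigured:
    print("Replace placeholder notebook IDs for:", ", ".join(unconfigured))
```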

docs/claude-project-readme.md — replace the two placeholder notebook IDs:

- **Notebook ID:** `YOUR_SESSION_SUMMARIES_NOTEBOOK_ID`
- **Notebook ID:** `YOUR_RAW_TRANSCRIPTS_NOTEBOOK_ID`

7. Create a Claude Project and populate the project store

  1. In Claude, create a new Project.

  2. In the Project's system prompt, paste the following instruction (replacing the placeholder with your actual Session Summaries notebook ID):

    Before producing any response in this project, you must first query the Session Summaries NotebookLM notebook (ID: YOUR_SESSION_SUMMARIES_NOTEBOOK_ID) for the most recent session context. Use the query: "What does the Claude Latest Session document say? Return the filename, timestamp, tags, and summary." Do not respond to the user's opening statement until this query has completed and its result is in your context. If the MCP tool is unavailable, stop and inform the user rather than proceeding without context. This applies to every session, including sessions where the opening statement appears self-contained or simple.

  3. Upload the following files from this repository to the Project's file store. The filenames shown here are for reference: Claude identifies documents by content, not filename, so the names used when uploading need not match exactly:

    • docs/claude-project-readme.md
    • docs/system-overview.md
    • docs/directive-system.md

8. Authenticate the refresh script

Run the one-time authentication step for notebooklm-py:

notebooklm login

This stores credentials locally. Subsequent runs of notebooklm_refresh.py will reuse them.


Session workflow

Once set up, the workflow for each session is:

  1. Start a session in your Claude Project. Claude will query NotebookLM for prior context automatically.
  2. Work normally. Claude can emit %%DIRECTIVE: lines to persist notes and preferences across sessions (see docs/directive-system.md).
  3. Export the transcript using the AI Chat Exporter browser extension. Save the .md file to claude_logs/ in Drive.
  4. Run the Colab pipeline (gemini_summarizer.ipynb). It will archive the transcript, summarize it, and write to all five Google Docs.
  5. Run the refresh script locally:
    python scripts/notebooklm_refresh.py
    This syncs NotebookLM with the updated Google Docs. The next session will pick up the new context.

License

MIT
