NeuroSync

Real-time EEG + AI Brain Prediction Platform

NeuroSync combines Meta's TRIBE v2 fMRI brain encoding model with a Neurosity consumer EEG headband to create a neural content analysis tool.

TRIBE v2 predicts what the brain should do in response to any video/audio content. Neurosity measures what the brain actually does in real time. NeuroSync correlates both to score content engagement at the neural level.

How It Works

Video/Audio Content
    |
    v
TRIBE v2 (offline)                    Neurosity EEG (real-time)
Predicts brain activation             Measures brain activity
across 180 brain regions              8 channels, 256Hz
per second of content                 calm, focus, band power
    |                                     |
    v                                     v
    +------ NeuroSync Dashboard ----------+
            Per-second correlation
            Predicted vs Measured engagement
            CSV export for analysis

Architecture

TribeV2/
├── server/          # Python FastAPI backend
│   ├── app.py       # Server + video streaming
│   ├── tribe_worker.py  # Background TRIBE v2 processing
│   ├── routers/     # API endpoints (videos, sessions, export)
│   └── db.py        # SQLite storage
├── BCI/             # React frontend
│   └── src/
│       ├── pages/
│       │   ├── VideoLibrary.jsx    # Browse ready videos
│       │   ├── Processing.jsx      # Upload/queue management
│       │   ├── DataCollection.jsx  # Watch video + record EEG
│       │   └── Dashboard.jsx       # Correlation analysis
│       └── hooks/
│           ├── useNeurosityRecorder.js  # Multi-stream EEG capture
│           └── useBandPower.js          # FFT band power from raw EEG
└── tribev2/         # Meta's TRIBE v2 model (submodule)
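
The band-power computation in useBandPower.js can be sketched in Python (the hook itself is JavaScript; the window length and band edges below are illustrative assumptions, not the hook's actual constants):

```python
import numpy as np

# Illustrative band edges in Hz; check useBandPower.js for the real cutoffs.
BANDS = {
    "delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
    "beta": (13, 30), "gamma": (30, 50),
}

def band_powers(raw: np.ndarray, fs: int = 256) -> dict:
    """Mean power per EEG band from one channel of raw samples."""
    freqs = np.fft.rfftfreq(len(raw), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(raw)) ** 2          # power spectrum
    return {
        name: float(psd[(freqs >= lo) & (freqs < hi)].mean())
        for name, (lo, hi) in BANDS.items()
    }

# Example: one second of a 10 Hz sine should dominate the alpha band.
fs = 256
t = np.arange(fs) / fs
powers = band_powers(np.sin(2 * np.pi * 10 * t), fs)
assert max(powers, key=powers.get) == "alpha"
```

With 256 samples at 256 Hz, each FFT bin is 1 Hz wide, so the 10 Hz tone lands squarely inside the 8-13 Hz alpha range.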

Features

  • Video Processing Pipeline: Upload videos or paste YouTube URLs. TRIBE v2 predicts per-second brain activation across 180 HCP brain regions.
  • Real-time EEG Recording: Stream calm, focus, and raw EEG from Neurosity Crown/Notion. Band powers (delta/theta/alpha/beta/gamma) computed via FFT from raw data.
  • Live Visualization: Raw EEG waveforms, circular calm/focus gauges, band power bars, and TRIBE engagement timeline -- all updating in real-time during video playback.
  • Correlation Dashboard: Overlay TRIBE predicted engagement with measured EEG focus/calm. Pearson correlation computed automatically. Per-second data table with video sync.
  • Processing Queue: Batch-add YouTube URLs, track progress with percentage bars, process in background.
  • CSV Export: Download per-second data joining TRIBE predictions with EEG measurements, with configurable hemodynamic delay offset.
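
The last two features can be sketched together: shift the EEG series by a fixed hemodynamic delay before aligning it with per-second predictions, then compute the Pearson correlation (the function name and the 5 s delay are illustrative assumptions, not the server's actual API):

```python
import numpy as np

def correlate_with_delay(predicted, measured, delay_s=5, fs=1):
    """Pearson r between per-second TRIBE predictions and EEG,
    after shifting the EEG back by an assumed hemodynamic delay."""
    shift = int(delay_s * fs)
    p = np.asarray(predicted)[: len(measured) - shift]
    m = np.asarray(measured)[shift:][: len(p)]
    return float(np.corrcoef(p, m)[0, 1])

# EEG that lags the prediction by 5 s should correlate well once aligned.
pred = np.sin(np.linspace(0, 6 * np.pi, 120))
eeg = np.roll(pred, 5)                      # 5-second lag at 1 Hz
assert correlate_with_delay(pred, eeg, delay_s=5) > 0.99
```

Making the delay a parameter (as the CSV export does) lets you sweep offsets and pick the one that maximizes correlation for a given session.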

Prerequisites

  • Python 3.11+
  • Node.js 18+
  • Neurosity Crown or Notion EEG headband with account
  • HuggingFace account with access to LLaMA 3.2 (required for text feature extraction)
  • ~10GB disk space for model weights and caches

Setup

1. Clone

git clone https://github.com/ScottLL/NeuroSync.git
cd NeuroSync

2. Python Environment

cd tribev2
python -m venv .venv
source .venv/bin/activate
pip install -e ".[plotting]"
pip install fastapi uvicorn python-multipart aiofiles yt-dlp

Set your HuggingFace token:

export HF_TOKEN="your_token_here"

3. React Frontend

cd BCI
npm install

4. Run

Start both services:

# Terminal 1: API server
cd server && ../tribev2/.venv/bin/python -m uvicorn app:app --port 8000

# Terminal 2: React dev server
cd BCI && npm run dev

Open http://localhost:5173 and log in with your Neurosity device credentials.

Usage

  1. Processing page: Paste YouTube URLs or upload video files. Videos are processed through TRIBE v2 in the background (takes 30-60 min per video on CPU).
  2. Video Library: Shows ready videos. Click "Collect EEG Data" to start a recording session.
  3. Data Collection: Watch the video while wearing the Neurosity headband. EEG data is recorded with video-time alignment.
  4. Dashboard: View the correlation between TRIBE predicted brain engagement and your measured EEG response. Export CSV for further analysis.

Science

TRIBE v2 predicts fMRI BOLD activation (spatial, 20K cortical vertices). Neurosity EEG measures scalp electrical activity (temporal, 8 channels). These are complementary signals:

TRIBE v2 Prediction               Expected EEG Correlate        Evidence
High visual cortex activation     Alpha desync at PO3/PO4       Strong (most replicated EEG-fMRI finding)
High language network activation  Frontal theta increase        Moderate
Low activation (boring segment)   Alpha increase, focus drops   Strong
High overall engagement           Beta up, alpha down           Strong

The correlation is at the metric level (engagement scores), not the voxel level. An 8-channel consumer EEG cannot validate individual brain regions, but it can reliably measure overall engagement/attention states.
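
Consistent with the last table row, a common metric-level proxy is the beta/alpha power ratio. This is a simplified illustration, not necessarily how Neurosity computes its focus score:

```python
def engagement_ratio(band_power: dict) -> float:
    """Beta/alpha ratio: rises when beta power goes up and alpha goes down."""
    return band_power["beta"] / band_power["alpha"]

# Hypothetical band powers for an engaged vs. a bored segment.
engaged = {"alpha": 2.0, "beta": 6.0}
bored = {"alpha": 6.0, "beta": 2.0}
assert engagement_ratio(engaged) > engagement_ratio(bored)
```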

Data Storage

All data persists locally:

  • server/neurosync.db -- SQLite database (videos, predictions, EEG sessions)
  • server/uploads/ -- Downloaded/uploaded video files
  • tribev2/cache/ -- Cached feature extractions (reused across runs)

Tech Stack

Backend: Python, FastAPI, PyTorch, TRIBE v2 (1B param transformer), SQLite

Frontend: React 19, Vite, Neurosity SDK, Canvas API

Models: V-JEPA2 (video), LLaMA 3.2 (text), Wav2Vec-BERT (audio)

License

  • NeuroSync server and frontend code: MIT
  • TRIBE v2 model: See tribev2/LICENSE
