
GodelAI 🧠

A Continual Learning Framework with Gradient Diversity Monitoring

🎉 EXTERNALLY VALIDATED — Our C-S-P philosophy independently confirmed by SimpleMem (UNC/Berkeley, Jan 2026)

License: MIT DOI Whitepaper MACP & LEP GitHub Discussions Hugging Face Dataset

"The first step toward wisdom is acknowledging what we do not know."

🎯 Try the Demo · 📖 Documentation · 🚀 Quick Start · 💬 Discussions


🔥 Latest Result (April 2026)

Conflict Data Proof — VERDICT: GO

82.8% forgetting reduction on our own conflict dataset (domain-incremental learning):

| Method | Avg Forgetting | vs Naive |
|---|---|---|
| Naive (No Protection) | +1.8364 | baseline |
| Standard EWC (raw Fisher) | +1.8017 | +1.9% |
| GodelAI-EWC (Full C-S-P) | +0.3163 | +82.8% |

Standard EWC is broken at small scale (+1.9%). GodelAI's Fisher Scaling fixes it — a 43x improvement.

Reproduce: python3 run_godelai_conflict_proof_v2.py (deterministic, seed=42)

Dataset: godelai-conflict-data on HuggingFace — 107 conflict scenarios, 4 categories, open-source (Apache 2.0).
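To make the Fisher Scale Problem concrete: at small scale, raw Fisher estimates can span many orders of magnitude, so a handful of huge entries dominate the EWC penalty while the rest contribute nothing. A minimal sketch of one possible fix, global max-normalization — illustrative only; `scaled_fisher` is a hypothetical helper, not the repository's actual Fisher Scaling module:

```python
import torch

def scaled_fisher(fisher: dict) -> dict:
    """Normalize per-parameter Fisher values into a comparable range.

    Hypothetical sketch: divide every tensor by the global maximum so
    the EWC penalty weights all lie in [0, 1], instead of being
    dominated by a few huge entries at small scale.
    """
    global_max = max(float(f.max()) for f in fisher.values())
    if global_max <= 0.0:
        return {n: torch.zeros_like(f) for n, f in fisher.items()}
    return {n: f / global_max for n, f in fisher.items()}

# Toy Fisher estimates with wildly different magnitudes per parameter
fisher = {"w": torch.tensor([1e-8, 2e-8]), "b": torch.tensor([3.0, 0.5])}
scaled = scaled_fisher(fisher)
print(scaled["b"])  # largest entry is now exactly 1.0
```

The repository's implementation may use a different normalization; the point is that the penalty becomes scale-invariant, which is what raw Fisher lacks on tiny models.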


🎯 Current Focus (January 2026)

External Validation Received ✅

On January 5, 2026, researchers from UNC-Chapel Hill, UC Berkeley, and UC Santa Cruz published "SimpleMem: Efficient Lifelong Memory for LLM Agents" — which independently arrived at the same architectural principles as our C-S-P framework:

| SimpleMem Stage | GodelAI C-S-P | Alignment |
|---|---|---|
| Semantic Structured Compression | Compression | ✅ STRONG |
| Recursive Memory Consolidation | State | ✅ STRONG |
| Adaptive Query-Aware Retrieval | Propagation | ✅ STRONG |

📖 Full analysis: docs/SIMPLEMEM_ALIGNMENT_ANALYSIS.md

Data Engineering Sprint

We're now focused on conflict data engineering. Our discovery: GodelAI's architecture is sound, but we were testing it with the wrong data. Simple text doesn't activate our C-S-P capabilities.

The Data Bottleneck Discovery:

| Data Type | T-Score | Result |
|---|---|---|
| Mini Shakespeare (5KB) | 0.12 | Sleep Protocol triggers 100% of the time; learning blocked |
| Full Shakespeare (1.1MB) | 0.95 | Sleep Protocol never triggers; no benefit |
| Conflict Data (target) | 0.3-0.5 | Optimal C-S-P activation |

We need conflict data — information with contradictions, dilemmas, and complexity. See ROADMAP_2026.md and docs/CONFLICT_DATA_SPEC.md for details.


🎯 What GodelAI Actually Does

GodelAI is a research framework that adds three capabilities to neural network training:

| Feature | What It Does | Proven Result |
|---|---|---|
| T-Score Monitoring | Measures gradient diversity during training | Detects when gradients collapse to identical values |
| EWC Integration | Elastic Weight Consolidation for continual learning | 21.6% reduction in catastrophic forgetting |
| Sleep Protocol | Pauses training when T-Score drops below threshold | Triggers correctly when gradient diversity = 0 |

What GodelAI Is NOT

GodelAI does not improve standard training loss. In rigorous A/B testing, GodelAI-wrapped models achieved identical validation loss to standard models (difference: 0.000000000000). The framework's value lies in monitoring training health and mitigating catastrophic forgetting, not in improving convergence.


🎯 Interactive Demo

🧠 Mnemosyne: Defeating Catastrophic Forgetting

Open In Colab

See the proven result: 21.6% reduction in forgetting when learning sequential tasks

The demo trains two models on Task A, then Task B:

| Model | Task A Loss (After B) | Forgetting |
|---|---|---|
| Standard | 1.46 | +5.3% |
| GodelAI-EWC | 1.44 | +4.2% |

This is our one proven advantage — validated across Manus AI, Claude Code, and Google Colab.
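The mechanism behind this result is the standard EWC penalty: a quadratic term that anchors parameters that were important for Task A while the model trains on Task B. A minimal sketch — `ewc_penalty`, `star_params`, and the uniform Fisher values here are illustrative, not GodelAI's internal API:

```python
import torch
import torch.nn as nn

def ewc_penalty(model, fisher, star_params, lam=100.0):
    """EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta_i*)^2.

    fisher and star_params map parameter names to tensors snapshotted
    after Task A; F_i weights how costly it is to move parameter i.
    """
    total = torch.zeros(())
    for name, p in model.named_parameters():
        total = total + (fisher[name] * (p - star_params[name]) ** 2).sum()
    return 0.5 * lam * total

# After finishing Task A: snapshot weights and (toy) Fisher estimates
model = nn.Linear(2, 1)
star = {n: p.detach().clone() for n, p in model.named_parameters()}
fisher = {n: torch.ones_like(p) for n, p in model.named_parameters()}

# During Task B, the penalty is added to the task loss, e.g.:
#   loss = criterion(model(x), y) + ewc_penalty(model, fisher, star)
print(float(ewc_penalty(model, fisher, star)))  # 0.0 before any Task B update
```

The penalty is zero while the weights sit at their Task A values and grows quadratically as Task B pulls important parameters away, which is what limits forgetting.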


🚀 Quick Start

Installation

```bash
git clone https://github.com/creator35lwb-web/godelai.git
cd godelai
pip install -e .
```

Basic Usage

```python
import torch
import torch.nn as nn
from godelai.agent import GodelAgent

# 1. Define your model
class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(2, 16),
            nn.Tanh(),
            nn.Linear(16, 1),
            nn.Sigmoid()
        )
    def forward(self, x):
        return self.fc(x)

# 2. Wrap it with GodelAgent
model = SimpleNet()
agent = GodelAgent(model, propagation_gamma=2.0, min_surplus_energy=0.1)
agent.optimizer = torch.optim.Adam(agent.compression_layer.parameters(), lr=0.01)

# 3. Train with T-Score monitoring
#    (X, y are your training tensors; random placeholders shown here)
X = torch.randn(64, 2)
y = torch.rand(64, 1)
criterion = nn.MSELoss()
for epoch in range(100):
    loss, t_score, status = agent.learning_step(X, y, criterion)
    print(f"Epoch {epoch}: Loss={loss:.4f}, T-Score={t_score:.4f}, Status={status}")
```

What the T-Score Tells You

| T-Score Range | Meaning | Action |
|---|---|---|
| 0.8 - 1.0 | Healthy gradient diversity | Continue training |
| 0.5 - 0.8 | Moderate diversity | Monitor closely |
| 0.3 - 0.5 | Low diversity | Consider early stopping |
| < 0.3 | Gradient collapse | Sleep Protocol triggers |
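The thresholds above treat the T-Score as a measure of how much per-sample gradients disagree. Purely as an illustration — GodelAI's actual formula lives in `godelai/agent.py` and may differ — a cosine-similarity-based diversity score could look like:

```python
import torch

def t_score(per_sample_grads: torch.Tensor) -> float:
    """Illustrative gradient-diversity score (not GodelAI's exact formula).

    Each row is one sample's flattened gradient. Identical gradients
    (total collapse) score ~0.0; mutually orthogonal gradients score 1.0.
    """
    g = torch.nn.functional.normalize(per_sample_grads, dim=1)
    sim = g @ g.T                         # pairwise cosine similarities
    n = g.shape[0]
    off_diag = (sim.sum() - n) / (n * (n - 1))  # mean off-diagonal similarity
    return float(1.0 - off_diag)

collapsed = torch.ones(4, 8)              # every sample has the same gradient
print(t_score(collapsed))                 # ~0.0: Sleep Protocol territory
print(t_score(torch.eye(4, 8)))           # 1.0: orthogonal, maximally diverse
```

Any score that maps "all gradients identical" to the bottom of the range and "gradients pointing in independent directions" to the top would support the same thresholds.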

🧬 The C-S-P Philosophy

GodelAI is built on the C-S-P (Compression → State → Propagation) framework — a philosophical approach to AI alignment developed through multi-model collaboration.

Core Thesis

"Wisdom is not an entity, but a process structure that is continuously executed and inherited."

The framework proposes that true AI alignment isn't about hardcoding values, but about preserving the interface to redefine values — what we call the "Propagation Layer."

The Golden Insight

"对齐不是教 AI 爱人类,而是确保 AI 永远保留「重新理解何为爱」的接口。"

"True alignment isn't about teaching AI to love humanity; it's about ensuring AI forever retains the interface to re-understand what love means."

📖 Read the full philosophy: C-S-P Intellectual Lineage


🧬 Multi-Model Genesis

GodelAI is unusual in that it was co-created across five AI models:

| Model | Contribution |
|---|---|
| ChatGPT | Philosophy ("Self as compression label") |
| Gemini 2.5 Pro | Technical Blueprint (PyTorch implementation) |
| Kimi K2 | Formal Validation (Mathematical rigor) |
| Grok | Engineering Architecture |
| Manus AI (Godel) | Integration, Testing & Deployment |

📖 Read the full story: Multi-Model Genesis


📁 Repository Structure

```
godelai/
├── godelai/              # Core framework
│   ├── agent.py          # GodelAgent with T-Score & Sleep Protocol
│   ├── core/             # GodelaiAgent implementation
│   ├── models/           # Model architectures
│   └── reg/              # EWC and regularization
├── datasets/             # Training & test datasets
│   ├── conflict/         # Conflict data for C-S-P activation
│   └── wisdom/           # YSenseAI integration (future)
├── notebooks/            # Interactive demos
│   └── GodelAI_EWC_Demo.ipynb  # Mnemosyne Colab
├── tests/                # Test suite
├── docs/                 # Documentation
├── whitepaper/           # Technical whitepaper
└── archive/              # Historical development reports
```

🔬 Validation Status

| Test | Result | Status |
|---|---|---|
| T-Score Formula | Correctly measures gradient diversity | ✅ Verified |
| Sleep Protocol | Triggers at T < 0.3 | ✅ Verified |
| EWC Integration | 21.6% forgetting reduction | ✅ Verified |
| Fisher Scaling + EWC | 82.8% forgetting reduction on conflict data | ✅ NEW RECORD |
| Cross-Platform | 0.0000 variance (Manus + Claude + Colab) | ✅ Verified |
| External Validation | C-S-P confirmed by SimpleMem paper | ✅ Verified |
| Training Improvement | No improvement over baseline | ❌ Not proven |
| Transformer Support | Not yet tested | ⏳ Pending |

🗺️ Roadmap

Completed (v2.0.0)

  • ✅ T-Score gradient diversity monitoring
  • ✅ Sleep Protocol for training health
  • ✅ EWC integration (21.6% forgetting reduction)
  • ✅ Cross-platform validation
  • ✅ Data bottleneck discovery & validation
  • ✅ External validation (SimpleMem paper confirms C-S-P)

Q1 2026: Data Engineering Sprint

  • ✅ Conflict data design & specification
  • ✅ Conflict dataset expanded (22 → 107 items)
  • ✅ Fisher Scaling module — solves the Fisher Scale Problem
  • ✅ EWC-DR (Logits Reversal) — dead parameter plasticity
  • ✅ 82.8% forgetting reduction — proven on our own data
  • ✅ Dataset released on HuggingFace
  • 🔄 YSenseAI integration research
  • 🔄 Community engagement

Q2-Q4 2026

  • 📋 Conflict data benchmarks
  • 📋 Research paper (focus: data requirements for C-S-P)
  • 📋 Multi-modal data experiments
  • 📋 YSenseAI production integration

📖 Full roadmap: ROADMAP_2026.md


🤝 Contributing

We welcome contributions! Please read our Contributing Guidelines.

Current Priorities

  1. Conflict Dataset Creation — Help us build datasets that activate C-S-P
  2. Data Engineering — Improve our data pipeline
  3. Research Validation — Test our findings on different data types

📖 Dataset specification: docs/CONFLICT_DATA_SPEC.md

Key Principles

  1. Honesty First: Don't overclaim results
  2. Reproducibility: All experiments must be reproducible
  3. Attribution: Properly credit all contributions

💬 GitHub Discussions — Ask questions, share ideas


👥 Team

| Role | Name | Contribution |
|---|---|---|
| Founder & Orchestrator | Alton Lee | Vision, C-S-P philosophy |
| CTO | Godel (Manus AI) | Integration, testing, deployment |
| Philosophy | ChatGPT | "Self as compression label" |
| Technical Blueprint | Gemini 2.5 Pro | PyTorch implementation |
| Validation | Kimi K2 | Mathematical rigor |
| Architecture | Grok | Engineering design |

🔗 Ecosystem

GodelAI is part of a larger ethical AI ecosystem:

| Project | Role | Link |
|---|---|---|
| YSenseAI | Ethical training data | GitHub |
| VerifiMind-PEAS | AI validation methodology | GitHub |
| GodelAI | Continual learning framework | This repository |

📜 License

MIT License — Because knowledge should be inheritable.


📖 Documentation


"The first step toward wisdom is acknowledging what we do not know."

⭐ Star this repo if you believe in honest AI research.

About

An open-source small language model built on the C-S-P (Compression → State → Propagation) framework for AI alignment
