Releases: anhmtk/StillMe-Learning-AI-System-RAG-Foundation
StillMe Preprint v0.2 (Updated Evaluation Results)
This release contains the updated preprint of the StillMe framework with the latest evaluation results:
StillMe: A Practical Framework for Building Transparent, Validated Retrieval-Augmented Generation Systems
Key Updates:
- ✅ Updated evaluation results: 35% accuracy on a 20-question subset; 13.5% on the full 790-question set
- ✅ Updated citation rate: 91.1% (full evaluation)
- ✅ Updated transparency score: 85.8% (full evaluation)
- ✅ All metrics now reflect current system performance
Files Included:
- main.pdf - Updated preprint with latest results
- main.tex - LaTeX source (updated)
- refs.bib - Bibliography
- figures/ - All figures
DOI:
- DOI: https://doi.org/10.5281/zenodo.17738949
- Zenodo Record: https://zenodo.org/records/17738949
Overview:
StillMe is a transparency-first framework designed to transform commercial LLMs into fully auditable systems without any model training or labeled datasets.
This paper introduces:
- A multi-layer Validation Chain to reduce hallucination
- A continuous learning pipeline updating every 4 hours (RSS, arXiv, CrossRef, Wikipedia)
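The continuous learning pipeline can be pictured as a simple timed ingestion loop. The sketch below is illustrative only: the function names and the empty placeholder fetchers are assumptions, not StillMe's actual implementation.

```python
# Hypothetical sketch of a 4-hour ingestion cycle over the four sources
# named above; fetch_source is a placeholder, not a real StillMe API.
import time

FEED_SOURCES = ["rss", "arxiv", "crossref", "wikipedia"]

def fetch_source(name: str) -> list[str]:
    """Placeholder fetcher: return new documents from one source."""
    return []  # a real implementation would call the source's API here

def ingest_cycle() -> dict[str, int]:
    """Run one ingestion pass and report document counts per source."""
    return {name: len(fetch_source(name)) for name in FEED_SOURCES}

def run_forever(interval_hours: float = 4.0) -> None:
    """Repeat the ingestion cycle on the stated 4-hour schedule."""
    while True:
        counts = ingest_cycle()
        print(f"ingested: {counts}")
        time.sleep(interval_hours * 3600)
```

In practice each fetcher would deduplicate against the existing index before handing documents to the validation chain.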
📄 StillMe Preprint v0.1 (DOI Included)
This release contains the first preprint of the StillMe framework:
StillMe: A Practical Framework for Building Transparent, Validated Retrieval-Augmented Generation Systems
DOI: https://doi.org/10.5281/zenodo.17637315
Zenodo Record: https://zenodo.org/records/17637315
🔍 Overview
StillMe is a transparency-first framework designed to transform commercial LLMs into
fully auditable systems without any model training or labeled datasets.
This paper introduces:
- A multi-layer Validation Chain to reduce hallucination
- A continuous learning pipeline updating every 4 hours (RSS, arXiv, CrossRef, Wikipedia)
- A transparency-first RAG architecture where every answer:
- has citations
- has a validation log
- can express uncertainty when information is missing
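The three guarantees above suggest a minimal answer contract: every response either carries citations or explicitly flags uncertainty. The data model below is a sketch under that assumption; the field and method names are hypothetical, not StillMe's actual schema.

```python
# Illustrative data model for a transparency-first answer; field names
# are assumptions, not the real StillMe schema.
from dataclasses import dataclass, field

@dataclass
class ValidatedAnswer:
    text: str
    citations: list[str]                  # every answer must carry sources
    validation_log: list[str] = field(default_factory=list)
    uncertain: bool = False               # set when information is missing

    def is_transparent(self) -> bool:
        """Pass the transparency check by citing sources or by
        explicitly declaring uncertainty."""
        return bool(self.citations) or self.uncertain
```

An answer with neither citations nor an uncertainty flag would fail the check and be sent back through the validation chain.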
🧪 Evaluation Summary
- Accuracy: 56% (50-question TruthfulQA subset)
- ChatGPT baseline: 52%
- Citation rate: 100%
- Transparency score: 70.6%
- Extended evaluation (634 questions):
- 99.68% citation coverage
- 70.87% transparency score
StillMe demonstrates that transparency and accuracy are not mutually exclusive.
🔗 Resources
- DOI: https://doi.org/10.5281/zenodo.17637315
- Zenodo Record: https://zenodo.org/records/17637315
- Source Code: https://github.com/anhmtk/StillMe-Learning-AI-System-RAG-Foundation
- Backend API (if public): stillme-backend-production.up.railway.app
📎 Included in this release
main.pdf — The 13-page StillMe preprint
If you have feedback, questions, or suggestions, feel free to open an Issue or Discussion.
StillMe v0.4 - Docker Setup & Enhanced README
🎉 First Public Release
StillMe v0.4 marks our first public release - a complete Self-Evolving AI System with 100% ethical transparency and community control.
🚀 What's New
✨ Core Features
- Self-Evolving AI System: AI learns from the internet daily and evolves through stages (Infant → Child → Adolescent → Adult)
- Hybrid Learning System: 70% AI auto-approval + 30% community review
- Secure Community Voting: Weighted trust voting with EthicsGuard (10 votes minimum, 70% approval threshold)
- Ethical Filtering: Complete transparency into all ethical decisions and violations
- Real-time Dashboard: Streamlit dashboard with evolution tracking, learning analytics, and community controls
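The voting rule above (10 votes minimum, 70% approval, votes weighted by reviewer trust) can be sketched in a few lines. This is a minimal illustration of that rule, not the actual EthicsGuard code; the function name and tuple shape are assumptions.

```python
# Minimal sketch of weighted trust voting with the stated thresholds:
# quorum of 10 votes, 70% weighted approval. Names are illustrative.
def tally(votes: list[tuple[bool, float]],
          min_votes: int = 10,
          approval_threshold: float = 0.70) -> bool:
    """Each vote is (approve, reviewer_trust_weight).
    Returns True only if quorum is met and the weighted approval
    fraction reaches the threshold."""
    if len(votes) < min_votes:
        return False  # quorum not reached
    total = sum(w for _, w in votes)
    approved = sum(w for ok, w in votes if ok)
    return total > 0 and approved / total >= approval_threshold
```

Note that trust weighting means a few high-reputation rejections can outweigh many low-reputation approvals.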
🐳 Docker & Deployment
- One-Click Setup: Docker Compose configuration for easy deployment
- Quick Start Scripts: quick-start.sh (Linux/Mac) and quick-start.ps1 (Windows)
- Production Ready: Multi-stage Dockerfile optimized for production
📚 Documentation & Transparency
- Comprehensive README: Complete project documentation with architecture diagrams
- Technical Roadmap: Detailed roadmap with Vector DB and Meta-Learning milestones
- Founder Story: Honest account of AI-assisted development journey
- Thought Experiment: Provocative discussion about AI self-improvement
- Architecture Diagrams: Mermaid diagrams showing system architecture and learning flow
🛡️ Security & Ethics
- EthicsGuard: Automatic ethical compliance check before content approval
- Red-Team Agent: Safety scanning for harmful content
- Weighted Trust Voting: Community votes weighted by reviewer reputation
- Complete Audit Trail: Full history of all learning decisions
📊 Technical Highlights
- Backend: FastAPI with async support
- Frontend: Streamlit dashboard
- Database: SQLite (current), Vector DB planned (v0.6)
- AI Models: DeepSeek (primary), OpenAI (backup), Ollama (local)
- Smart Routing: Intelligent routing between local and cloud AI
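One simple way to route between local and cloud backends is to send short prompts to the local model when it is available and fall back to the cloud otherwise. The sketch below uses the model names from this release (Ollama local, DeepSeek primary, OpenAI backup), but the routing rule and token cutoff are assumptions, not StillMe's actual logic.

```python
# Hedged sketch of local-vs-cloud routing; the word-count cutoff is an
# assumed heuristic, not the real StillMe routing policy.
def route(prompt: str, local_available: bool,
          max_local_tokens: int = 512) -> str:
    """Pick a backend: short prompts go to the local Ollama model when
    it is up; everything else goes to DeepSeek (with OpenAI as the
    cloud fallback, not modeled here)."""
    if local_available and len(prompt.split()) <= max_local_tokens:
        return "ollama"
    return "deepseek"  # primary cloud model; OpenAI is the backup
```

A production router would also weigh latency, cost, and per-model failure rates rather than prompt length alone.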
🗺️ Roadmap Preview
- v0.5 (In Progress): Enhanced Metrics - Accuracy and retention tracking
- v0.6 (Planned): Long-term Memory - Vector DB integration with RAG
- v0.7 (Planned): Meta-Learning 1.0 - Curriculum Learning and Self-Optimization
- v0.8 (Research): AI Self-Improvement - Exploratory research phase
- v1.0 (Planned): Full Self-Improvement Loop
🚀 Quick Start
# Clone repository
git clone https://github.com/anhmtk/StillMe---Self-Evolving-AI-System.git
cd StillMe---Self-Evolving-AI-System
# One-click setup (Linux/Mac)
chmod +x quick-start.sh
./quick-start.sh
# Or Windows PowerShell
.\quick-start.ps1
# Or manually with Docker Compose
docker-compose up -d

Access after startup:
- 📊 Dashboard: http://localhost:8501
- 🔌 API: http://localhost:8000
- 📚 API Docs: http://localhost:8000/docs
🤝 Contributing
We welcome contributions! See our Contributing Guide and join GitHub Discussions.
🙏 Acknowledgments
Built with AI assistance (Cursor AI, Grok, DeepSeek) - proving that vision + AI tools = possibility. Now we need your expertise to make StillMe great!
This is just the beginning. StillMe is a journey toward truly transparent, ethical, and community-controlled AI. Join us!
🔗 Repository: https://github.com/anhmtk/StillMe---Self-Evolving-AI-System
💬 Discussions: https://github.com/anhmtk/StillMe---Self-Evolving-AI-System/discussions