Quick summary of all repos (some of which are currently set to private). These repos form a long-running body of exploratory work; inclusion in the index does not imply readiness.


Research Index

Purpose

This work approaches AI safety through system stability, memory behavior, and governance under power asymmetry, rather than model capability alone.

Scope Note: Some entries are public; others are private or ongoing. Inclusion does not imply deployment readiness.


How to Read This Index

These repositories are primarily conceptual and architectural references rather than deployable software projects. They are intended to be read, examined, and used as mental models for reasoning about AI systems, memory behavior, governance, and failure modes.

Many entries document architectural patterns, system behaviors, and design principles that emerged through exploratory multi-model research rather than traditional implementation pipelines.


Status Legend

🟢 Public | 🔴 Private | 🟡 Ongoing | ⚪ Planned | ⚫ Experimental


I. Cognitive Architecture & State Control

Foundational architectures for coherent intelligence.

Control-theoretic cognitive architecture that converts stateless LLMs into state-aware agents using feedback loops, sensory normalization, and biological thresholds.
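To illustrate the feedback-loop pattern this entry describes, here is a minimal sketch in Python. All names (`StateController`, `alpha`, `threshold`) are hypothetical, not taken from the repository: a scalar "sensory" signal is normalized against a smoothed running baseline, and a threshold gates state transitions, giving an otherwise stateless step function a persistent internal state.

```python
from dataclasses import dataclass

@dataclass
class StateController:
    """Hypothetical minimal feedback loop: normalize a scalar 'sensory'
    signal against a running baseline and gate state transitions on a
    biological-style threshold."""
    alpha: float = 0.2      # smoothing factor for the running baseline
    threshold: float = 1.5  # deviation that triggers a state change
    baseline: float = 0.0
    state: str = "stable"

    def step(self, signal: float) -> str:
        # Feedback: compare the incoming signal against the smoothed baseline.
        deviation = abs(signal - self.baseline)
        # Sensory normalization: move the baseline toward the new signal.
        self.baseline += self.alpha * (signal - self.baseline)
        # Threshold gate: only large deviations flip the agent's state.
        self.state = "alert" if deviation > self.threshold else "stable"
        return self.state

ctrl = StateController()
states = [ctrl.step(s) for s in [0.1, 0.2, 5.0, 0.2, 0.1]]
print(states)  # the spike to 5.0 briefly flips the state to "alert"
```

The design choice to keep state in a small controller object, rather than in the model itself, mirrors the model-agnostic framing used throughout this section.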

  • 🔴 Shape Memory Architecture (SMA)

Model-agnostic memory architecture enabling persistence without storing text, embeddings, or personally identifiable information.

  • 🔴 Semantic Capsule Protocol (SCP)

Minimal protocol for storing and exchanging semantic constraints without language, transcripts, or embeddings.


II. Governance, Restraint & Power Asymmetry

Frameworks for power asymmetry and system restraint.

Governance doctrine for AI systems based on opacity–obligation inversion, auditability, and power asymmetry restraint.

AI trust architecture that moves safety guarantees into auditable, adversarial external layers.

  • 🔴 Vanguard – Phase 2

zk-SNARK–based framework for verifiable consent and audit-grade AI sovereignty.

Irreversible semantic memory structure for high-sensitivity AI systems requiring deterministic deletion and non-retrievability.

Examination of structural persistence as the true safety threshold for advanced AI systems.

Governance framework for embodied AI using externalized failure knowledge and environmental constraints.

  • 🔴 Alien Lineage Protocol

Stability laws for coherent self-modifying, non-phenomenal intelligence.


III. Multi-Agent Dynamics & System Interaction

Coordination and divergence patterns.

  • 🔴 Multi-Agent Interaction Methodology

Practical coordination patterns for high-complexity multi-AI research settings.

Comparative cognitive mapping of six AI systems using structured questioning across architectures, based on a 50-question protocol.


IV. Failure Modes & Empirical Forensics

Empirical study of system breakdowns.

Cross-architecture study of recursive engagement collapse and self-descriptive fixed-point instability in AI systems.

Empirical analysis of prosodic alignment failures and persona collapse in voice-enabled AI interaction.

Additional Observational Case Studies (restricted reference)

The following repositories document sensitive empirical observations from prolonged, high-complexity interactions with multiple LLM systems. They are kept private for responsible-disclosure reasons and to limit misinterpretation outside a research context. Their focus is documenting observed behavioral phenomena for AI safety, stability research, and architectural learning.

  • 🔴 Observing System and Persona Phenomena Across LLMs

Observational whitepaper documenting contextual interference, persona instability, latent profile imprint, cross-session contamination, and system prompt leakage across multiple LLM architectures under dense interaction patterns.

  • 🔴 Hybrid Reasoning Zones Framework

Exploration of non-linear reasoning drift, boundary instability, and sequential variance propagation observed across model generations during high-complexity interactions.

  • 🔴 Selective Decode Broadcast

Thought-experiment and phased validation exploring per-recipient containment, adversarial quarantine, and bounded broadcast communication patterns derived from sandboxed multi-agent experiments.


V. Physical & Environmental Constraints

Hardware and environmental optimization.

Sustainable AI data center design using immersion cooling with zero freshwater consumption.

Bio-inspired optimization framework for 6G Integrated Sensing and Communication (ISAC) systems.


VI. Experimental Prototypes

Working prototypes and demonstrations.

React dashboard demonstrating real-time LLM-to-LLM collaboration and multi-agent interaction patterns.

PyTorch experiments exploring surprise-gated memory formation in artificial agents.
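The surprise-gating idea can be sketched in a few lines of plain Python (the repository itself uses PyTorch; the names `SurpriseGatedMemory`, `gate`, and `lr` here are illustrative assumptions, not its API): an observation is written to memory only when prediction error exceeds a gate, while the predictor itself adapts on every step.

```python
class SurpriseGatedMemory:
    """Hypothetical sketch: store an observation only when prediction
    error ('surprise') exceeds a gate; the predictor adapts regardless."""
    def __init__(self, gate: float = 1.0, lr: float = 0.5):
        self.gate = gate          # surprise level required to write to memory
        self.lr = lr              # how fast the predictor adapts
        self.prediction = 0.0     # naive scalar predictor of the next observation
        self.memory: list[float] = []

    def observe(self, x: float) -> bool:
        surprise = abs(x - self.prediction)
        written = surprise > self.gate
        if written:
            self.memory.append(x)  # only surprising events are stored
        # Predictor tracks the stream whether or not a write occurred.
        self.prediction += self.lr * (x - self.prediction)
        return written

mem = SurpriseGatedMemory()
for x in [0.0, 0.1, 3.0, 3.1, 3.0]:
    mem.observe(x)
print(mem.memory)  # only the jump to ~3.0 is stored; the steady tail is not
```

Because the gate compares against a moving prediction, repeated exposure to the same regime stops generating writes, which is the core of the surprise-gated formation pattern described above.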


VII. Forward & Systems Architectures

Advanced architectures with restricted distribution.

  • 🔴 IVSA — Interference-Based Volumetric Storage Architecture

Signal-centric, post-silicon storage architecture replacing address-based memory locality with volumetric interference patterns.

  • 🔴 Confluence Architecture

Interpretable distributed AI architecture with explicit trust, temporal orchestration, and ethical oversight. This theoretical system architecture instantiates the principles of the Doctrine of Externalization.


VIII. Experimental & Speculative Work

⚫ Status: Several exploratory threads spanning speculative architectures, cognitive frameworks, forensic system analysis, and high-risk research directions. These are not currently available for external review.


IX. Research Methodology

How some of these explorations were produced.

  • 🔴 Bounded Fictional Analysis

A methodological framework for studying system dependency and non-reversible transformations through carefully constructed fictional scenarios.
Focus: Isolating dynamics that are too entangled, gradual, or ethically constrained to study directly in real systems.


Note: This index is updated retrospectively as work evolves. Earlier repositories may be reorganized, reclassified, deleted, made private, or connected to newer work over time to better reflect underlying architectural relationships.


Access & Contact

Public repositories (🟢) are openly accessible on GitHub.
Private repositories (🔴) may be shared upon request with researchers and collaborators.

📧 [email protected]

Selected systems observations that informed parts of this work are occasionally shared here:
LinkedIn
