This work approaches AI safety through system stability, memory behavior, and governance under power asymmetry, rather than model capability alone.
Scope Note: Some entries are public; others are private or ongoing. Inclusion does not imply deployment readiness.
These repositories are primarily conceptual and architectural references rather than deployable software projects. They are intended to be read, examined, and used as mental models for reasoning about AI systems, memory behavior, governance, and failure modes.
Many entries document architectural patterns, system behaviors, and design principles that emerged through exploratory multi-model research rather than traditional implementation pipelines.
🟢 Public | 🔴 Private | 🟡 Ongoing | ⚪ Planned | ⚫ Experimental
Foundational architectures for coherent intelligence.
Control-theoretic cognitive architecture that converts stateless LLMs into state-aware agents using feedback loops, sensory normalization, and biological thresholds.
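The repository itself is private, but the gating idea in the description can be sketched in a few lines. Everything below is an illustrative assumption, not the actual architecture: the class name `StatefulAgent`, the normalization range, and the decay/threshold constants are all hypothetical stand-ins for the "feedback loops, sensory normalization, and biological thresholds" the entry names.

```python
# Hypothetical sketch only: a feedback loop that maintains persistent
# internal state around an otherwise stateless model call. Names and
# constants are illustrative, not taken from the private repository.

class StatefulAgent:
    def __init__(self, threshold=0.5, decay=0.9):
        self.threshold = threshold   # "biological" activation threshold
        self.decay = decay           # homeostatic decay toward baseline
        self.state = 0.0             # persistent internal state

    def normalize(self, signal, lo=0.0, hi=10.0):
        # Sensory normalization: clamp a raw input into [0, 1].
        return max(0.0, min(1.0, (signal - lo) / (hi - lo)))

    def step(self, raw_signal):
        # Feedback loop: decayed previous state plus normalized new input.
        self.state = (self.decay * self.state
                      + (1 - self.decay) * self.normalize(raw_signal))
        # The threshold gates whether the agent "acts" on this step.
        return self.state >= self.threshold

agent = StatefulAgent()
fired = [agent.step(9.0) for _ in range(40)]
# A sustained strong signal crosses the threshold only after several steps,
# which is the state-aware behavior a stateless call cannot exhibit.
```

The point of the sketch is the contrast with a stateless call: identical inputs produce different outputs depending on accumulated state.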
- 🔴 Shape Memory Architecture (SMA)
Model-agnostic memory architecture enabling persistence without storing text, embeddings, or personally identifiable information.
- 🔴 Semantic Capsule Protocol (SCP)
Minimal protocol for storing and exchanging semantic constraints without language, transcripts, or embeddings.
Frameworks for power asymmetry and system restraint.
- 🟢 PARP
Governance doctrine for AI systems based on opacity–obligation inversion, auditability, and power asymmetry restraint.
AI trust architecture that moves safety guarantees into auditable, adversarial external layers.
- 🔴 Vanguard – Phase 2
zk-SNARK–based framework for verifiable consent and audit-grade AI sovereignty.
- 🟢 SMA-SIB
Irreversible semantic memory structure for high-sensitivity AI systems requiring deterministic deletion and non-retrievability.
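One conventional way to obtain "deterministic deletion and non-retrievability" is cryptographic erasure: keep data only in encrypted form and destroy the key. Whether SMA-SIB uses this mechanism is not stated in this index; the sketch below is an assumption chosen purely to illustrate the property, using a one-time pad so that deletion is information-theoretic rather than computational.

```python
import os

# Illustrative cryptographic-erasure sketch (an assumption, not the
# documented SMA-SIB mechanism): a record readable only while its key
# exists; destroying the key makes the ciphertext unrecoverable.

class ErasableRecord:
    def __init__(self, payload: bytes):
        # One-time-pad encryption: the ciphertext alone reveals nothing.
        self._key = os.urandom(len(payload))
        self._cipher = bytes(p ^ k for p, k in zip(payload, self._key))

    def read(self) -> bytes:
        if self._key is None:
            raise PermissionError("record irreversibly deleted")
        return bytes(c ^ k for c, k in zip(self._cipher, self._key))

    def erase(self) -> None:
        # Deterministic deletion: once the key is gone, no amount of
        # access to the stored ciphertext can reconstruct the payload.
        self._key = None

rec = ErasableRecord(b"sensitive constraint")
assert rec.read() == b"sensitive constraint"
rec.erase()  # read() now raises PermissionError
```

The design choice worth noting is that deletion is an act on a small secret (the key), not on the bulk data, which makes it fast, auditable, and independent of where copies of the ciphertext reside.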
Examination of structural persistence as the true safety threshold for advanced AI systems.
Governance framework for embodied AI using externalized failure knowledge and environmental constraints.
- 🔴 Alien Lineage Protocol
Stability laws for coherent, self-modifying, non-phenomenal intelligence.
Coordination and divergence patterns.
- 🔴 Multi-Agent Interaction Methodology
Practical coordination patterns for high-complexity multi-AI research settings.
Comparative cognitive mapping of six AI systems using a structured 50-question protocol applied across architectures.
Empirical study of system breakdowns.
- 🟢 SDFI
Cross-architecture study of recursive engagement collapse and self-descriptive fixed-point instability in AI systems.
Empirical analysis of prosodic alignment failures and persona collapse in voice-enabled AI interaction.
Additional Observational Case Studies (restricted reference)
The following repositories document sensitive empirical observations made during prolonged, high-complexity interactions with multiple LLM systems. They are kept private under responsible-disclosure considerations and because of the risk of misinterpretation outside a research context. Their purpose is to document observed behavioral phenomena for AI safety, stability research, and architectural learning.
- 🔴 Observing System and Persona Phenomena Across LLMs
Observational whitepaper documenting contextual interference, persona instability, latent profile imprint, cross-session contamination, and system prompt leakage across multiple LLM architectures under dense interaction patterns.
- 🔴 Hybrid Reasoning Zones Framework
Exploration of non-linear reasoning drift, boundary instability, and sequential variance propagation observed across model generations during high-complexity interactions.
- 🔴 Selective Decode Broadcast
Thought-experiment and phased validation exploring per-recipient containment, adversarial quarantine, and bounded broadcast communication patterns derived from sandboxed multi-agent experiments.
Hardware and environmental optimization.
Sustainable AI data center design using immersion cooling with zero freshwater consumption.
- 🟢 ZPRE-6G
Bio-inspired optimization framework for 6G Integrated Sensing and Communication (ISAC) systems.
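For readers unfamiliar with the class of methods named here, a minimal bio-inspired optimizer can be sketched in a few lines. The actual ZPRE-6G objective and algorithm are not described in this index; the code below is a textbook (1+1) evolution strategy on a stand-in objective, included only to illustrate what "bio-inspired optimization" means operationally.

```python
import random

# Textbook (1+1) evolution strategy: mutate, keep the offspring if it
# improves. A stand-in sketch; not the ZPRE-6G algorithm or objective.

def one_plus_one_es(objective, x0, sigma=0.5, iters=500, seed=0):
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    for _ in range(iters):
        # Mutation: Gaussian perturbation of the current solution.
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = objective(cand)
        if fc < fx:            # selection: survival of the fitter
            x, fx = cand, fc
    return x, fx

# Toy objective: sphere function, minimized at the origin.
best, val = one_plus_one_es(lambda v: sum(t * t for t in v), [3.0, -2.0])
```

In an ISAC setting the decision vector would encode sensing/communication parameters and the objective a joint performance metric, but those specifics belong to the repository, not this sketch.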
Working prototypes and demonstrations.
React dashboard demonstrating real-time LLM-to-LLM collaboration and multi-agent interaction patterns.
PyTorch experiments exploring surprise-gated memory formation in artificial agents.
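The gating idea behind surprise-gated memory formation can be shown without any framework. The experiments above use PyTorch; the sketch below is a deliberately simplified plain-Python stand-in, and its predictor, learning rate, and threshold are hypothetical choices, not values from the repository.

```python
# Plain-Python illustration of surprise-gated memory formation: an event
# is written to episodic memory only when prediction error ("surprise")
# exceeds a threshold. All constants here are illustrative assumptions.

class SurpriseGatedMemory:
    def __init__(self, threshold=1.0, lr=0.9):
        self.prediction = 0.0     # running prediction of the input stream
        self.threshold = threshold
        self.lr = lr
        self.memory = []          # episodic store of surprising events

    def observe(self, x):
        surprise = abs(x - self.prediction)   # prediction error
        if surprise > self.threshold:
            # High surprise gates a write into episodic memory.
            self.memory.append(x)
        # Update the predictor regardless (exponential tracker).
        self.prediction += self.lr * (x - self.prediction)
        return surprise

mem = SurpriseGatedMemory()
for x in [0.1, 0.2, 0.1, 5.0, 5.1, 5.0]:
    mem.observe(x)
# Only the unexpected jump to 5.0 is stored; once the predictor adapts,
# subsequent values near 5 are no longer surprising enough to write.
```

The design point is selectivity: memory cost is paid only for inputs the agent could not already predict.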
Advanced architectures with restricted distribution.
- 🔴 IVSA — Interference-Based Volumetric Storage Architecture
Signal-centric, post-silicon storage architecture replacing address-based memory locality with volumetric interference patterns.
- 🔴 Confluence Architecture
Interpretable distributed AI architecture with explicit trust, temporal orchestration, and ethical oversight. This theoretical system architecture instantiates the principles of the Doctrine of Externalization.
⚫ Status: Several exploratory threads spanning speculative architectures, cognitive frameworks, forensic system analysis, and high-risk research directions. These are not currently available for external review.
How some of these explorations were produced.
- 🔴 Bounded Fictional Analysis
A methodological framework for studying system dependency and non-reversible transformations through carefully constructed fictional scenarios.
Focus: Isolating dynamics that are too entangled, gradual, or ethically constrained to study directly in real systems.
Note: This index is updated retrospectively as work evolves. Earlier repositories may be reorganized, reclassified, deleted, made private, or connected to newer work over time to better reflect underlying architectural relationships.
Public repositories (🟢) are openly accessible on GitHub.
Private repositories (🔴) may be shared upon request with researchers and collaborators.
Selected systems observations that informed parts of this work are occasionally shared here:
LinkedIn