@hermes-labs-ai

Hermes Labs

Independent AI research. Studying epistemic failure modes in LLMs — null-result bias, hermeneutic drift, sycophancy. Open-source AI safety tools.

Popular repositories

  1. zer0dex (Public)

     Local dual-layer memory for AI agents using a compressed index plus vector retrieval

     Python · 50 stars · 3 forks

  2. lintlang (Public)

     Static linter for AI agent configs, tool descriptions, and system prompts with zero-LLM CI gating

     Python · 30 stars · 1 fork

  3. fidelis (Public)

     Agent memory without the retrieval tax. Fidelity-preserving memory for Claude Code and AI agents — local-first, fast, and with no LLM in the default retrieval path. 83.2% R@1 on LongMemEval-S, $0/q…

     Python · 15 stars

  4. little-canary (Public)

     Prompt injection detection for LLM apps using sacrificial canary-model probes and structural preflight checks

     Python · 13 stars · 2 forks

  5. hermes-blind (Public)

     Context-compensation scaffold for LLM evaluation prompts — disclose, gate on evidence, hedge on thin

     Python · 3 stars

  6. quick-gate-js (Public)

     JavaScript and TypeScript quality gate CLI with bounded auto-repair and escalation artifacts

     JavaScript · 2 stars

Repositories

28 public repositories in total.

People

This organization has no public members.
