Skip to content

fred-ai-security/fred-ai-security

Folders and files

NameName
Last commit message
Last commit date

Latest commit

Β 

History

2 Commits
Β 
Β 

Repository files navigation

πŸ‘‹ Hi, I’m Frederick Baffour
AI Security Assurance Engineer | LLM Red Teaming | Model Supply-Chain Security

I specialize in AI Security Assurance, focusing on how AI models are evaluated, tested, and documented before use in real environments. My work covers the full lifecycleβ€”from model intake and supply-chain verification to adversarial testing and structured reporting.

My background is in enterprise security engineering, and I apply the same discipline to AI systems: clear methodology, reproducible testing, and evidence-based conclusions.

πŸ” What I Work On

  • AI Security Assurance engineering
  • LLM red teaming (Garak, Promptfoo, manual testing)
  • Jailbreak, prompt-injection, and refusal-bypass evaluation
  • Model supply-chain integrity (hashing, SBOMs, static analysis)
  • Secure model execution and misuse analysis

🧰 Core Tools

  • Garak, Promptfoo
  • YARA, ClamAV, Sigcheck
  • Syft / Grype
  • Ollama, HuggingFace CLI

πŸ“˜ Featured Work

πŸ” AI Security Assurance Labs
End-to-end portfolio demonstrating:

  • Model intake & supply-chain verification
  • Hashing, YARA, ClamAV, SBOM workflows
  • LLM red teaming & behavioral evaluation
  • Clear, reviewer-friendly documentation

πŸ‘‰ https://github.com/fred-ai-security/ai-security-assurance-labs

🀝 Open to Roles

  • AI Security Engineer
  • LLM Red Team Engineer
  • Model Evaluation & Assurance
  • AI Systems Security

πŸ“¬ Contact
Email: fbaffour@gmail.com
LinkedIn: https://www.linkedin.com/in/frederick-baffour

About

No description, website, or topics provided.

Resources

Stars

Watchers

Forks

Releases

No releases published

Packages

 
 
 

Contributors