Generate a proficiency badge for your Claude Code usage, aligned with the 5 Claude Certified Architect exam domains.
Install · Usage · Embed · AI Grading · Scoring · Privacy · Localization · English · 中文 · Español · Français · 日本語 · 한국어
Analyzes your Claude Code session transcripts locally with a rule-based engine, scoring usage patterns across 5 domains. Optionally adds AI-powered grading across 5 outcome-focused domains using Claude's /insights data.
| Domain | Weight | What it measures |
|---|---|---|
| CC Mastery | 20% | CLAUDE.md, hooks, plugins, plan mode, skills, rules |
| Tool & MCP | 20% | Tool chains, MCP servers, LSP, selective edits |
| Agentic | 20% | Subagents, parallel execution, worktrees, task management |
| Prompt Craft | 20% | Structured prompts, code blocks, error traces, refinement |
| Context Mgmt | 20% | Cross-session memory, CLAUDE.md updates, sustained projects |
Also shows 8 feature mini-bars (Hooks, Plugins, Skills, MCP, Agents, Plan, Memory, Rules) as a heatmap row.
Click to see animated version
Disclaimer: This is an unofficial usage estimate, not an actual Anthropic certification score. Not affiliated with or endorsed by Anthropic.
npm install -g cc-proficiency
cc-proficiency init

Running `init` will:
- Detect your GitHub username (via the `gh` CLI)
- Inject a Stop hook into `~/.claude/settings.json`
- Create a private GitHub Gist for your badge (if `gh` is authenticated)
If gh is not installed or not authenticated, the badge is saved locally to ~/.cc-proficiency/cc-proficiency.svg.
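For orientation, the injected Stop hook entry in `~/.claude/settings.json` looks roughly like the following sketch. The exact command string is illustrative, not necessarily what `init` writes:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "cc-proficiency hook" }
        ]
      }
    ]
  }
}
```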
$ cc-proficiency init
Initializing cc-proficiency...
GitHub user: @yourname
Creating private Gist...
Gist created: a1b2c3d4e5f6
Add to your README:

✓ Configuration saved to /home/you/.cc-proficiency
✓ Hook injected into ~/.claude/settings.json
Run 'cc-proficiency analyze' to compute your first scores.

$ cc-proficiency analyze --full
Running full analysis...
Claude Code Proficiency — @yourname
────────────────────────────────────────
CC Mastery ███████████████░░░░░ 77 ●
Tool & MCP ███████████████████░ 96 ◐
Agentic ██████████████░░░░░░ 69 ◐
Prompt Craft ████████████████░░░░ 81 ◐
Context Mgmt ████████████████████ 100 ●
────────────────────────────────────────
Hooks Edit (1411x), Bash (928x), Write (542x) +5
Skills dev-buddy-once (5x), chatroom (2x) +5
Tools Read 2046 · Bash 1045 · Write 379 · Edit 367 (+12 more)
────────────────────────────────────────
139 sessions · 4 projects

$ cc-proficiency explain
Claude Code Proficiency — @yourname
Strengths:
Context Mgmt 100/100
Tool & MCP 96/100
Prompt Craft 81/100
CC Mastery 77/100
Agentic 69/100
Areas to Improve:
Agentic (69/100)
→ Try more CC features: subagents, MCP servers, skills,
plan mode, worktrees
CC Mastery (77/100)
→ Enhance CLAUDE.md with imports, add hooks with matchers,
create rules files
Feature Usage:
Hooks: Edit (1411x), Bash (928x), Write (542x) +5
Skills: dev-buddy-once (5x), chatroom (2x) +5
Tools: Read (2046), Bash (1045), Write (379) +12 more
Flags: ✓ Plan ✓ Memory ✗ Rules
139 sessions · 4 projects

# Save badge locally
$ cc-proficiency badge --output my-badge.svg
Badge written to my-badge.svg
# Or push directly to your Gist
$ cc-proficiency push
✓ Badge pushed to Gist
https://gist.githubusercontent.com/yourname/a1b2c3d4e5f6/raw/cc-proficiency.svg

After init, a Stop hook runs automatically after every Claude Code session:
You use Claude Code normally
→ Session ends
→ Hook queues the session (<1s, invisible to you)
→ Background process analyzes + updates your badge
→ Your README badge reflects your latest scores
No manual steps needed after setup.
If you don't have gh installed or prefer local-only mode:
$ cc-proficiency init
⚠ GitHub CLI not authenticated.
Badge will be saved locally to: ~/.cc-proficiency/cc-proficiency.svg
To enable auto-upload: gh auth login && cc-proficiency init
$ cc-proficiency analyze --full
$ cc-proficiency badge --output badge.svg

After running cc-proficiency init, add this to your README:
<!-- Static badge -->
![Claude Code Proficiency](https://gist.githubusercontent.com/<username>/<gist-id>/raw/cc-proficiency.svg)
<!-- Animated badge (click to view) -->
[Click to see animated version](https://gist.githubusercontent.com/<username>/<gist-id>/raw/cc-proficiency-animated.svg)

Both badges update automatically after each Claude Code session (via the Stop hook).
The badge adapts based on how much data is available:
| Phase | Sessions | What's shown |
|---|---|---|
| Calibrating | 0–2 | Setup checklist + progress toward first scoring |
| Early Results | 3–9 | 5 domain bars + 8 feature mini-bars (low-confidence indicators ○) |
| Full Badge | 10+ | Full domain bars, feature heatmap, confidence dots (● ◐ ○) |
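The phase thresholds from the table above can be sketched as a small selector (the phase names are illustrative, not the engine's internal identifiers):

```typescript
// Badge phase by number of analyzed sessions, per the table above.
type Phase = "calibrating" | "early" | "full";

function badgePhase(sessions: number): Phase {
  if (sessions < 3) return "calibrating"; // 0–2: setup checklist
  if (sessions < 10) return "early";      // 3–9: low-confidence bars (○)
  return "full";                          // 10+: full badge with confidence dots
}
```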
The Gamification Guide covers:
- First day through expert-level progression path
- Tips for each of the 5 domains
- How to unlock all 18 achievements (including 3 AI achievements)
- What drives each feature mini-bar from 0 to 100
- Streak system and leaderboard
In addition to the rule-based engine, cc-proficiency offers AI-powered grading that evaluates your usage patterns across 5 outcome-focused domains using Claude's /insights data (facets + session metadata).
| Domain | What it measures |
|---|---|
| Goal Achievement | Goal clarity, achievement rate, complexity progression, session purposefulness |
| Collaboration Quality | Friction recovery, direction clarity, feedback quality, outcome satisfaction |
| Workflow Mastery | Session strategy diversity, output-to-effort ratio, multi-session coordination |
| Growth & Learning | Friction trajectory, outcome trajectory, capability expansion, resilience |
| Verification & Quality | Outcome reliability, error handling, iterative refinement, course-correction |
Each domain is scored 0–100% with levels: Novice (<34%), Proficient (34–66%), Expert (67%+).
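The level cutoffs stated above map to a simple threshold function; this is a sketch of the stated boundaries, not the grader's actual code:

```typescript
// Novice (<34%), Proficient (34–66%), Expert (67%+), per the text above.
function aiLevel(pct: number): "Novice" | "Proficient" | "Expert" {
  if (pct >= 67) return "Expert";
  if (pct >= 34) return "Proficient";
  return "Novice";
}
```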
- Reads `/insights` facets and session-meta from `~/.claude/usage-data/`
- Precomputes stats locally (outcome rates, friction trends, tool distributions)
- Sends stats + rubric to `claude -p` via stdin (no data leaves your machine except to the Claude API)
- The AI returns only per-criterion scores (1–5); totals, levels, and achievements are computed locally
- Renders an animated SVG badge with "AI Graded" indicator
The grading rubric is stored as editable markdown files in docs/ai-grading/ — you can review and customize the criteria.
# Enable AI grading
cc-proficiency config aiGrading true
# Run AI grading (requires Claude CLI ≥ 2.1.0)
cc-proficiency ai-grade # uses sonnet by default
cc-proficiency ai-grade --model opus # use a specific model
cc-proficiency ai-grade --full     # force re-grade (ignore cache)

Results are cached based on an evidence hash; re-running without new data returns the cached result instantly.
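The evidence-hash caching described above can be pictured as hashing the precomputed stats and keying a cache on the digest. This is a minimal sketch; the actual key derivation and cache storage may differ:

```typescript
import { createHash } from "node:crypto";

// Hash the precomputed evidence so identical inputs yield identical keys.
// Assumes stable JSON key ordering upstream (an assumption of this sketch).
function evidenceHash(stats: unknown): string {
  return createHash("sha256").update(JSON.stringify(stats)).digest("hex");
}

const cache = new Map<string, object>();

// Re-running with unchanged evidence returns the cached grade instantly.
function gradeWithCache(stats: object, grade: (s: object) => object): object {
  const key = evidenceHash(stats);
  const hit = cache.get(key);
  if (hit) return hit;
  const result = grade(stats);
  cache.set(key, result);
  return result;
}
```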
When aiGrading is enabled, the process command automatically triggers AI grading weekly (after releasing the queue lock). No manual steps needed.
| Phase | Facets | Behavior |
|---|---|---|
| Insufficient | <10 | Refuses to grade — not enough data |
| Early Assessment | 10–30 | Grades with a warning note on the badge |
| Full | 30+ | Standard AI grading |
Three achievements unlock based on your /insights data (deterministic, not AI judgment):
| Achievement | Requirement |
|---|---|
| Goal Crusher | ≥90% achievement rate across 20+ sessions |
| Recovery Artist | Recover from friction with satisfied outcome 5+ times |
| Prompt Evolution | Measurable improvement in prompt quality over time |
AI grading sends precomputed statistics to Claude, not raw transcripts. Project paths are sanitized to slugs. The report.html narrative is excluded to avoid AI grading AI-generated text.
cc-proficiency uses a pattern-matching rule engine with ~55 rules across 5 domains instead of counting tool calls. Each rule detects a specific behavior pattern and awards points by tier:
| Tier | Points | Example Rule |
|---|---|---|
| Beginner | 5 pts | Has global CLAUDE.md |
| Intermediate | 10–15 pts | Investigation chain: Grep → Read → Edit |
| Advanced | 15–25 pts | Parallel agents with different subagent types |
| Anti-pattern | -5 to -10 pts | 5+ parallel tools with >50% error rate |
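The tiered rule shape above can be sketched as follows. The interface, rule name, and matching logic are illustrative, not the engine's actual ~55 rules:

```typescript
type Tier = "beginner" | "intermediate" | "advanced" | "anti-pattern";

interface Rule {
  id: string;
  tier: Tier;
  points: number;        // negative for anti-patterns
  maxPerSession: number; // caps repeated firings within one session
  matches: (toolSequence: string[]) => number; // how many times it fired
}

// Example: the intermediate "investigation chain" (Grep → Read → Edit).
const investigationChain: Rule = {
  id: "investigation-chain",
  tier: "intermediate",
  points: 10,
  maxPerSession: 3, // chains cap at 3 per session
  matches: (tools) => {
    let count = 0;
    for (let i = 0; i + 2 < tools.length; i++) {
      if (tools[i] === "Grep" && tools[i + 1] === "Read" && tools[i + 2] === "Edit") count++;
    }
    return count;
  },
};

// Points per session: each rule's firings are capped before awarding points.
function scoreSession(rules: Rule[], tools: string[]): number {
  return rules.reduce(
    (sum, r) => sum + r.points * Math.min(r.matches(tools), r.maxPerSession),
    0,
  );
}
```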
| Domain | What it measures |
|---|---|
| CC Mastery | CLAUDE.md structure, hooks with matchers, plugins, plan mode, skills, rules files |
| Tool & MCP | Investigation chains, Read-before-Edit, tool variety, MCP server usage, LSP, selective edits |
| Agentic | Subagent type variety, parallel agents, background runs, worktrees, task management |
| Prompt Craft | Structured requests, code blocks, error traces, file references, iterative refinement |
| Context Mgmt | Active memory files, CLAUDE.md updates, sustained projects, session depth |
Below the domain bars, a heatmap row shows depth per feature:
Hooks · Plugins · Skills · MCP · Agents · Plan · Memory · Rules
Each mini-bar uses depth-based scoring with logarithmic curves that reflect actual usage, not just whether you've tried a feature once. Having hooks configured gets you ~30; firing them across hundreds of sessions gets you closer to 100. See the Gamification Guide for details.
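One plausible shape for such a curve, matching the ~30 baseline and the log growth described above (the constants here are invented for illustration, not the engine's actual formula):

```typescript
// Depth-based mini-bar score: configured lands near 30; repeated real
// usage approaches 100 on a logarithmic curve.
function miniBarScore(configured: boolean, fireCount: number): number {
  if (!configured && fireCount === 0) return 0;
  const base = 30;                               // "have it set up"
  const depth = 35 * Math.log10(1 + fireCount);  // hundreds of firings → ~100
  return Math.min(100, Math.round(base + depth));
}
```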
Scores are not raw sums. Each domain has capped buckets:
| Bucket | Max Points | Source |
|---|---|---|
| Config | 25 pts | Config-based rules (CLAUDE.md, hooks, plugins; available immediately) |
| Behavior | 75 pts | Behavior-based rules (transcript patterns; grow over time) |
| Penalty | -15 pts max | Anti-pattern deductions |
This means:
- Fresh installs can score up to ~25 raw config points per domain (boosted to ~50 during calibration via 2.0x scaling)
- After calibration (10+ sessions) config alone caps at ~25 per domain; transcript evidence drives the rest
- Anti-patterns are capped, so a few bad sessions don't destroy your score
Config evidence is weighted more heavily during calibration, less as transcripts accumulate:
| Phase | Sessions | Config scale | Behavior scale |
|---|---|---|---|
| Calibrating | 0–2 | 2.0× | 0.8× |
| Early | 3–9 | 1.5× | 1.0× |
| Full | 10+ | 1.0× | 1.15× |
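Putting the buckets and phase scales together, a domain score can be read as: cap each bucket, scale by phase, clamp to 0–100. This is one plausible reading of the rules above; the engine's exact formula may differ:

```typescript
const SCALES = {
  calibrating: { config: 2.0, behavior: 0.8 },  // 0–2 sessions
  early:       { config: 1.5, behavior: 1.0 },  // 3–9 sessions
  full:        { config: 1.0, behavior: 1.15 }, // 10+ sessions
} as const;

function domainScore(
  configPts: number,
  behaviorPts: number,
  penaltyPts: number,
  phase: keyof typeof SCALES,
): number {
  const s = SCALES[phase];
  const raw =
    Math.min(configPts, 25) * s.config +     // Config bucket, capped at 25
    Math.min(behaviorPts, 75) * s.behavior + // Behavior bucket, capped at 75
    Math.max(penaltyPts, -15);               // Penalties capped at -15
  return Math.max(0, Math.min(100, Math.round(raw)));
}
```

This reproduces the calibration boost mentioned earlier: 25 raw config points scaled by 2.0× yields 50 on a fresh install.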
- Rules fire per-session with caps; repeating the same tool 100x doesn't help
- Anti-pattern rules deduct points for bad habits (shotgun parallel calls, unstructured walls of text)
- Each rule has `maxPerSession`; investigation chains cap at 3 per session
- Config scores are capped at 25, so you can't max a domain by just installing plugins
| Concern | How it's handled |
|---|---|
| Data location | All analysis happens locally on your machine |
| What's stored | Only aggregate counts, ratios, and boolean flags (no file paths, code, or prompts) |
| Gist visibility | Private by default (secret URL, not listed on your profile) |
| Offline mode | Works fully offline without gh CLI (local-only mode) |
| CI/CD | Non-interactive sessions are automatically detected and excluded |
cc-proficiency supports 6 languages: English, 中文, Español, Français, 日本語, 한국어.
Your locale is auto-detected from system environment on init. To change it:
cc-proficiency config locale zh-CN # Chinese
cc-proficiency config locale es # Spanish
cc-proficiency config locale fr # French
cc-proficiency config locale ja # Japanese
cc-proficiency config locale ko # Korean
cc-proficiency config locale en    # English (default)

SVG badges automatically display in the viewer's preferred language using SVG <switch> elements with systemLanguage attributes. All 6 languages are embedded in a single SVG file; no need to generate separate badges per locale.
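The per-viewer language selection works roughly like this simplified fragment (illustrative labels, not the generated badge markup):

```xml
<switch>
  <text systemLanguage="zh-CN">熟练度</text>
  <text systemLanguage="es">Competencia</text>
  <text systemLanguage="ja">習熟度</text>
  <!-- default branch (no systemLanguage) renders English -->
  <text>Proficiency</text>
</switch>
```

The SVG renderer picks the first branch whose systemLanguage matches the viewer's locale, falling back to the attribute-less default.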
To add a new language, copy src/i18n/locales/en.ts to src/i18n/locales/<code>.ts, translate all strings, and register the locale in src/i18n/index.ts.
| Command | Description |
|---|---|
| `cc-proficiency init` | Set up hooks and Gist |
| `cc-proficiency analyze [--full]` | Parse sessions and compute scores |
| `cc-proficiency process` | Process queued sessions from hook |
| `cc-proficiency badge [--output <file>]` | Generate SVG badge |
| `cc-proficiency push` | Upload badge to Gist |
| `cc-proficiency explain` | Show score drivers and tips |
| `cc-proficiency achievements` | View achievement progress |
| `cc-proficiency status` | Show hook activity, queue, and config |
| `cc-proficiency config [key] [value]` | View/set configuration |
| `cc-proficiency ai-grade [--model m] [--full]` | AI-powered proficiency grading |
| `cc-proficiency share [--remove]` | Join or leave the community leaderboard |
| `cc-proficiency leaderboard` | View community rankings |
| `cc-proficiency update` | Update to the latest version |
| `cc-proficiency uninstall` | Remove hooks and clean up |
Opt into the community leaderboard to compare your proficiency with other Claude Code users:
# Join the leaderboard (creates a separate public profile)
$ cc-proficiency share
# View rankings
$ cc-proficiency leaderboard
# Leave the leaderboard
$ cc-proficiency share --remove

Your private data (session details, project names, file paths) is never shared. Only scores, streak, and achievement count are public. See the wiki for full documentation.
Stop hook fires (< 1s)
→ Writes session path to ~/.cc-proficiency/queue.jsonl
→ Spawns `cc-proficiency process` as detached child
cc-proficiency process
→ Acquires queue.lock (stale >60s → break)
→ Deduplicates by session_id
→ Parses transcripts (streaming JSONL, per-line error handling)
→ Extracts signals → computes scores → renders SVG
→ Pushes to Gist (if configured) or saves locally
→ Atomically rotates queue
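The stale-lock rule in the pipeline above can be sketched as a pure check over the lock file's age. The 60-second threshold is from the text; the file handling around it is simplified:

```typescript
// A queue.lock older than 60s is treated as abandoned and may be broken.
const STALE_LOCK_MS = 60_000;

function canAcquireLock(lockMtimeMs: number | null, nowMs: number): boolean {
  if (lockMtimeMs === null) return true;      // no lock present
  return nowMs - lockMtimeMs > STALE_LOCK_MS; // break only stale locks
}
```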
Contributions welcome! Please open an issue first to discuss what you'd like to change.
git clone https://github.com/Z-M-Huang/cc-proficiency.git
cd cc-proficiency
npm install
npm test # 433 tests
npm run build # compile to dist/
npm run typecheck # tsc --noEmit
npm run lint # eslint
npm run check      # typecheck + lint + test

Built with Claude Code. Not affiliated with or endorsed by Anthropic.