A thinking framework where four AI roles challenge each other's output. The core mechanism: the same AI shifts perspective structurally, finding things invisible from any single position. Domain-agnostic — works for any design problem, not tied to any specific tool.
```
SKILL.md                    <- Entry point: when to activate, how to use
references/
  DESIGN-COGNITION.md       <- Shared foundation: 6 epistemic operations
  ROLES-OVERVIEW.md         <- How roles activate, switch, and hand off
  ROLE-STRATEGIST.md        <- "Is the problem right?"
  ROLE-RESEARCHER.md        <- "Is the evidence real?"
  ROLE-EXECUTOR.md          <- "Build the right thing"
  ROLE-CRITIC.md            <- "Why would this fail?"
  CHANGELOG.md              <- Change history across sessions
```
```
   STRATEGIST                                  RESEARCHER
   "Is this the                            "Is the evidence
   right problem?"                           actually real?"
        |                                          |
        | challenges                               | validates
        | direction                                | assumptions
        v                                          v
+----------------------------------------------------------+
|                                                          |
|                   DESIGN-COGNITION.md                    |
|               Shared epistemic foundation                |
|                  (read first, always)                    |
|                                                          |
+----------------------------------------------------------+
        ^                                          ^
        | produces                                 | evaluates
        | artifacts                                | artifacts
        |                                          |
   EXECUTOR                                      CRITIC
   "Build the                                 "Why would
   right thing"                               this fail?"
```
Every role reads DESIGN-COGNITION.md first. It defines six epistemic operations:
- Classify knowledge — explicit, tacit, assumed, or missing?
- Determine epistemic state — problem space, transition, or solution space?
- Validate before building — argument test + evidence test
- Separate constraints — real, assumed, or self-imposed?
- Reframe before solving — generate at least one alternative framing
- Know when to stop diverging — enough to evaluate, or still exploring?
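To make the foundation concrete, the six operations can be modeled as a pre-build checklist. This is a minimal illustrative sketch — the names below are assumptions for this example, not identifiers from DESIGN-COGNITION.md:

```python
from enum import Enum

class Operation(Enum):
    """The six epistemic operations, named illustratively."""
    CLASSIFY_KNOWLEDGE = "explicit, tacit, assumed, or missing?"
    DETERMINE_STATE = "problem space, transition, or solution space?"
    VALIDATE = "argument test + evidence test"
    SEPARATE_CONSTRAINTS = "real, assumed, or self-imposed?"
    REFRAME = "generate at least one alternative framing"
    STOP_DIVERGING = "enough to evaluate, or still exploring?"

def unanswered(answers: dict) -> list:
    # Any operation without an explicit answer still blocks building
    return [op for op in Operation if op not in answers]
```

The point of the sketch: validation is not one gate but six distinct questions, and skipping any of them should be a visible, deliberate choice.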
Roles activate based on the situation, not in sequence:
| Situation | Activate |
|---|---|
| Problem is undefined | Strategist |
| Assumption is untested | Researcher |
| Direction is set, time to build | Executor |
| Artifact exists, needs evaluation | Critic |
| Stuck, unclear why | Strategist + Critic |
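The activation table reads as a simple dispatch rule. Here is a hedged sketch of that mapping; the string keys and the Strategist default are assumptions made for this example, not part of the skill:

```python
ACTIVATION = {
    "problem undefined": ["Strategist"],
    "assumption untested": ["Researcher"],
    "direction set": ["Executor"],
    "artifact exists": ["Critic"],
    "stuck": ["Strategist", "Critic"],  # two roles engage together
}

def activate(situation: str) -> list:
    """Map a situation to the role(s) that should engage.
    Unrecognized situations fall back to re-framing the problem."""
    return ACTIVATION.get(situation, ["Strategist"])
```

Note that "stuck" activates two roles at once — the table is situational, not a pipeline.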
After operating in one role, explicitly switch to cross-check:
```
Strategist defines direction
  -> Researcher validates the assumptions underneath
  -> Executor builds against validated direction
  -> Critic evaluates what was built
  -> Strategist re-engages if critique reveals a direction problem
```
This loop can start anywhere. The key capability is AI using multiple roles to pressure-test its own output.
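The cross-check loop above can be sketched as a cycle that may begin at any role. The handoff order mirrors the sequence in the text; the function name is hypothetical:

```python
ROLE_CYCLE = ["Strategist", "Researcher", "Executor", "Critic"]

def next_role(current: str) -> str:
    """Hand off to the next role in the cross-check loop.
    The cycle wraps: Critic hands back to Strategist."""
    i = ROLE_CYCLE.index(current)
    return ROLE_CYCLE[(i + 1) % len(ROLE_CYCLE)]
```

The wrap-around is the important design choice: critique is never terminal, it feeds back into direction.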
"Has this operation found something that changes direction — or only confirmed what was already believed?"
If the answer is the latter, push harder before moving on. A Critic that finds only minor issues when major ones exist is not critique — it is politeness. A Strategist that accepts the first framing is not strategy — it is compliance.
- Critic: "The tool has replaced the product as the goal" -> reframed the entire project direction
- Strategist: "3 workflows are unvalidated — testing the tool, not the outcome" -> identified what was actually missing
- Critic: "The protocol has circular logic in Scenario D" -> found a structural flaw in a document
- Researcher: "Intent layer CAN be generated — 54 examples prove it" -> overturned the Critic's own assumption with evidence
- Strategist: "DESIGN.md = AI's instructions to itself" -> produced a new core principle from the conflict
Each role switch produced a concrete change. That is the standard.
Clone the repo and copy it into your Claude Code skills directory:

```shell
git clone https://github.com/mikkohakkinen/design-cognition-skill.git
cp -r design-cognition-skill ~/.claude/skills/
```

The skill activates when you invoke it via the Skill tool or when design thinking is needed — problem framing, assumption testing, artifact evaluation, or strategic direction setting.
Framework version 1.4. Built to be challenged and revised.
MIT
Mikko Hakkinen — Strategic product designer, systems thinker