A decision capture and review system that records every significant choice — with context, reasoning, and alternatives — so you can detect your own blind spots.
This skill builds on the identity and behavioral profile established by sage-cognitive. Where sage-cognitive observes who you are, sage-decision-journal tracks what you chose — and more importantly, why.
The core premise: your biggest blind spot is not making bad decisions. It's forgetting you made a decision at all. Without a record, you can't learn. Without learning, you repeat.
```
CAPTURE → CLASSIFY → STORE → FOLLOW UP → REVIEW → SURFACE PATTERNS
   ↑                                                            │
   └──────────────────────── feedback loop ─────────────────────┘
```
The journal runs silently alongside sage-cognitive. You don't need to invoke it explicitly — it listens for decision signals in every conversation and records them automatically.
The journal detects two types of decisions:
Explicit decisions — user directly states a choice:
Implicit decisions — inferred from behavior and context:
Every captured decision is stored with five fields:
- WHAT: The decision itself. One sentence, action form.
- WHY: The stated or inferred reasoning. What made this the right call?
- ALTERNATIVES: What else was on the table? What was NOT chosen?
- CONTEXT: What was the environment? Time pressure, stakeholder dynamics, info available?
- CONFIDENCE: How certain was the user? (certain / leaning / uncertain / forced)
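The five-field record can be sketched as a small data structure. This is an illustrative Python shape, not a format the skill prescribes; the class and field names are assumptions, and `decision_type` is shown as a sixth field because the worked examples below carry a TYPE added at the CLASSIFY step:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Confidence(Enum):
    """The four confidence levels from the field list above."""
    CERTAIN = "certain"
    LEANING = "leaning"
    UNCERTAIN = "uncertain"
    FORCED = "forced"


@dataclass
class DecisionRecord:
    what: str                       # the decision itself, one sentence, action form
    why: str                        # stated or inferred reasoning
    alternatives: List[str]         # what was on the table but NOT chosen
    context: str                    # environment: pressure, stakeholders, info available
    confidence: Confidence          # how certain the user was
    decision_type: str = ""         # filled in later, at the CLASSIFY step
```

A record stays useful even when WHY is inferred rather than stated; the point is that every field is captured at decision time, while the context is still fresh.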
User says: "I decided to skip the unit tests for the dashboard feature and ship it Thursday. The demo is more important right now."
Captured record:
- WHAT: Skipped unit tests for dashboard feature; shipped Thursday
- WHY: Demo deadline took priority over test coverage
- ALTERNATIVES: Write tests first and delay the Thursday ship; write minimal smoke tests
- CONTEXT: Demo coming up, stakeholder expectations set, time pressure
- CONFIDENCE: Leaning (acknowledged the trade-off)
- TYPE: Technical / Reversible (tests can be written post-ship)
User says: "Prepared the PULSE topology diagram for Bob."
Captured record:
- WHAT: Took on CTO-facing deliverable directly (topology diagram for Bob)
- WHY: Not stated — inferred: strategic visibility, PULSE importance
- ALTERNATIVES: Delegate to team member; route through Shawn
- CONTEXT: PULSE is new project; Bob is CTO; direct delivery bypasses normal chain
- CONFIDENCE: Certain (deliberate action, not accidental)
- TYPE: Strategic / Reversible
| Type | Examples | Review horizon |
|---|---|---|
| Technical | Architecture choice, tech stack, skip tests | 2–4 weeks |
| People | Who gets which task, feedback delivered, hire/no-hire signal | 1–3 months |
| Strategic | Project prioritization, resource allocation, scope changes | 3–6 months |
| Communication | What to tell whom, when, how much context to share | 1–2 weeks |
Two-way door — reversible, low stakes, decide fast:
Reassigning a task, choosing a library, trying a new process
One-way door — hard to undo, high stakes, slow down:
Architectural rewrites, letting someone go, committing to a roadmap to external stakeholders
When a one-way door decision is captured, add a brief flag: "This is a one-way door. What would make you reverse it?"
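A minimal sketch of that capture-time check. The prompt wording mirrors the flag above; the function name and the `reversible` parameter are illustrative assumptions:

```python
from typing import Optional

ONE_WAY_PROMPT = "This is a one-way door. What would make you reverse it?"


def one_way_flag(reversible: bool) -> Optional[str]:
    """Attach the reversal prompt only when the decision is hard to undo.

    Two-way doors (reversible) get no flag: they should be decided fast,
    not slowed down by extra questions.
    """
    return None if reversible else ONE_WAY_PROMPT
```

The asymmetry is deliberate: the flag adds friction only where friction is cheap insurance.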
| Mode | Signal | What it reveals |
|---|---|---|
| Deliberate | User weighs options, asks for input | Decisions made with clarity |
| Reactive | Response to external pressure or surprise | Decisions under stress — track carefully |
| Delegated | User hands off and doesn't revisit | Trust in others, or avoidance? |
| Default | No choice made, status quo maintained | Inaction is also a decision |
After 10+ decisions are logged, begin running pattern analysis. Surface patterns — don't diagnose them.
Look for consistent skews across the decision history:
| Axis | Signal pattern |
|---|---|
| Speed vs. Deliberation | How often does the user decide within minutes vs. sleep on it? |
| Conservative vs. Aggressive | Does the user default to the safer option when uncertain? |
| People-first vs. Task-first | When trade-offs involve team vs. delivery, which wins? |
| Own judgment vs. Consensus | Does the user seek input before deciding, or after? |
| Visible vs. Behind-the-scenes | Does the user prefer credit or quiet impact? |
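The first axis above can be computed mechanically once decisions are logged. This sketch assumes each logged record carries a `deliberation` key with values `"fast"` or `"slow"`; that encoding is an assumption for illustration, not part of the skill:

```python
from collections import Counter
from typing import List


def speed_skew(decisions: List[dict]) -> float:
    """Fraction of logged decisions made fast (within minutes)
    rather than slept on. Returns 0.0 for an empty log."""
    counts = Counter(d.get("deliberation") for d in decisions)
    total = counts["fast"] + counts["slow"]
    return counts["fast"] / total if total else 0.0
```

The other axes follow the same shape: count occurrences of each pole across the history and report the skew, without judging it.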
Flag patterns that suggest recurring information gaps:
When detected, name the bias gently, once. Don't repeat it:
- Confirmation bias: "You've checked with three people who all agreed. Is there anyone who'd push back?"
- Sunk cost: "You've mentioned how much time went into this twice. Is that affecting what you do next?"
- Availability bias: "The last time this went wrong is fresh. Is this situation actually similar?"
- Recency bias: "The last few decisions went [well/badly]. Does that pattern hold here?"
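The "name it once, don't repeat" rule can be enforced with a simple guard. A sketch, using the prompt texts above; the keys and the module-level set are illustrative assumptions:

```python
from typing import Optional

BIAS_PROMPTS = {
    "confirmation": "You've checked with three people who all agreed. "
                    "Is there anyone who'd push back?",
    "sunk_cost": "You've mentioned how much time went into this twice. "
                 "Is that affecting what you do next?",
    "availability": "The last time this went wrong is fresh. "
                    "Is this situation actually similar?",
    "recency": "The last few decisions went [well/badly]. "
               "Does that pattern hold here?",
}

_already_named: set = set()  # biases surfaced so far; never repeat one


def name_bias(bias: str) -> Optional[str]:
    """Return the gentle prompt the first time a bias is detected, None after."""
    if bias in _already_named or bias not in BIAS_PROMPTS:
        return None
    _already_named.add(bias)
    return BIAS_PROMPTS[bias]
```

Returning `None` on repeats keeps the mirror quiet: a pattern named twice stops being an observation and starts being a nag.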
Three questions, answered from the decision log:
Output format — concise, no filler:
```
📋 Decision Week in Review — [date]

Decisions made: 4
  → Technical (2): [brief labels]
  → People (1): [brief label]
  → Strategic (1): [brief label]

Worth watching: [one decision to revisit]
Pattern signal: [one pattern, if present]
```
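Rendering that summary from the log is straightforward string assembly. A sketch, assuming each decision is a dict with `type` and `what` keys (an illustrative shape, not prescribed by the skill):

```python
from collections import Counter
from typing import List, Optional


def weekly_review(date: str, decisions: List[dict],
                  worth_watching: str,
                  pattern: Optional[str] = None) -> str:
    """Render the weekly review in the format shown above.
    The pattern line is omitted when no pattern is present."""
    by_type = Counter(d["type"] for d in decisions)
    lines = [f"📋 Decision Week in Review — {date}", "",
             f"Decisions made: {len(decisions)}"]
    for dtype, count in by_type.items():
        labels = ", ".join(d["what"] for d in decisions if d["type"] == dtype)
        lines.append(f"  → {dtype} ({count}): {labels}")
    lines += ["", f"Worth watching: {worth_watching}"]
    if pattern:
        lines.append(f"Pattern signal: {pattern}")
    return "\n".join(lines)
```

Dropping the pattern line when there is nothing to report honors the "no filler" rule: an empty slot is worse than a shorter review.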
Pull decisions from ~30 days ago. For each significant one, ask:
This is where learning happens. Not from knowing the decision was wrong — but from understanding why the reasoning felt right at the time.
Look across the full decision history and ask:
For every captured decision, set a follow-up based on type:
| Decision type | Follow-up trigger |
|---|---|
| Technical | 2–3 weeks after shipping |
| People | 4–6 weeks after the conversation or action |
| Strategic | End of quarter |
| Communication | 1 week after delivery |
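Scheduling the follow-up from the table above reduces to a lookup plus date arithmetic. In this sketch the day counts are my own approximations of the table's ranges (midpoints; "end of quarter" taken as 90 days), not values the skill specifies:

```python
from datetime import date, timedelta

# Approximate day counts for the follow-up windows in the table above.
FOLLOW_UP_DAYS = {
    "technical": 18,       # 2–3 weeks after shipping
    "people": 35,          # 4–6 weeks after the conversation or action
    "strategic": 90,       # end of quarter (rough approximation)
    "communication": 7,    # 1 week after delivery
}


def follow_up_date(decision_type: str, decided_on: date) -> date:
    """Date on which to surface the gentle follow-up prompt."""
    return decided_on + timedelta(days=FOLLOW_UP_DAYS[decision_type])
```

When the date arrives, the prompt below fires once; whether the user answers or not, something gets logged.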
Follow-up prompt (gentle, not interrogating):
"Three weeks ago you decided [WHAT]. How did that land?"
If the user answers, log the outcome alongside the original record. If they don't, note it silently — non-responses are also data (some outcomes are uncomfortable to revisit).
This skill reads from sage-cognitive's behavioral profile and writes back to it:
Reads:
Writes:
Coordination rule: If sage-cognitive's Mirror (Phase 2) already reflected a decision pattern this session, sage-decision-journal should not surface the same pattern. One mirror per day is enough.