
findings

The curated synthesis layer at ~/garmin-warehouse/findings/. Casey's hand-edited markdown distilling research-corpus signal + personal training data into recommendations and protocols. Layer 8 of the warehouse architecture (the human-readable cap).

Quick orientation

  • Location: ~/garmin-warehouse/findings/
  • Format: One .md file per topic (e.g. bloodwork_baseline.md, threshold_productivity.md, daily_nutrition.md)
  • Index: findings/README.md
  • Workstream state: findings/_active.md (read by training-data-coach subagent on every session start)
  • Personal todo: findings/_todo.md (surfaced in UI at /active)
  • Archive: findings/_archive/YYYY-MM.md (closed work moved here per-month, ~600-line cap)
  • Auto-generated cross-refs: findings/_followups_summary.md (PubMed cited-by graph over the warehouse's primary sources)
  • Count (2026-05-04): 19 findings, mostly running-physiology + fueling + bloodwork-protocol topics

What goes in findings/ vs the kb

| findings/*.md | kb/kb.duckdb |
| --- | --- |
| Curated synthesis Casey edits in his editor | Auto-extracted claims + studies from podcasts |
| Hand-written narratives, recommendations, protocols | Structured rows: claims, episodes, studies |
| 19 files | 5,555 claims, 472 studies |
| Source of truth for "what does Casey do about iron" | Source of truth for "what did the SWAP guys say about creatine" |
| Layer 8 (synthesis) | Layer 5 (kb) |

The kb is input to findings: triage walks watches.yaml plus the kb embeddings to surface candidate claims for each finding; Casey applies or dismisses each candidate, and the relevant ones get woven into the markdown.

Reading order for a new finding

When investigating a topic Casey hasn't written about:

  1. findings/README.md — does an existing finding cover this?
  2. kb/query.py semantic "<topic>" — what's in the corpus?
  3. kb/query.py contradictions — any debated claims?
  4. kb/query.py topic <name> — full per-topic deep-dive
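
Concretely, a first pass might look like the walkthrough below. The topic string and topic name are hypothetical placeholders, not existing findings:

```
# after checking findings/README.md for an existing finding
kb/query.py semantic "iron absorption timing"   # what's in the corpus?
kb/query.py contradictions                      # any debated claims?
kb/query.py topic iron                          # per-topic deep-dive
```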

If there's enough signal to write something:

  1. Create findings/<topic>.md (lowercase + underscores)
  2. Add to findings/README.md index
  3. Add a watches.yaml entry with the topic + match patterns so future kb claims surface automatically in triage (see the sketch after this list)
  4. Update findings/_active.md to track the workstream
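
For step 3, a watches.yaml entry might look roughly like the sketch below. The field names are assumptions for illustration only; the existing entries in kb/watches.yaml are the real schema:

```yaml
# hypothetical shape — check kb/watches.yaml for the actual fields
- finding: iron_timing          # findings/iron_timing.md (placeholder name)
  patterns:
    - iron absorption
    - ferritin timing
```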

Conventions

See reference/findings-conventions.md for the full per-file format. Highlights (put together in the sketch after this list):

  • First H1 = title (used in graph view, kb findings.title column, morning summary's recent_findings cache)
  • Topic chips: <!-- topics: iron, ferritin, fatigue --> near the top of the file
  • Cross-refs — markdown links to other findings (e.g. [bloodwork_baseline](bloodwork_baseline.md)) get picked up by kb/load.py into the xref table
  • Cited studies — link to PMID URLs: [42041237](https://pubmed.ncbi.nlm.nih.gov/42041237/) — the followups-summary script picks these up too
  • Honest caveats section — every finding has one: "what would break this conclusion, what's contested, what hasn't been tested on me"
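
A minimal finding following these conventions might look like the sketch below, reusing the example values from the bullets above. The title and the heading level of the caveats section are placeholders; reference/findings-conventions.md is authoritative:

```markdown
# Iron and ferritin

<!-- topics: iron, ferritin, fatigue -->

Narrative, recommendations, and protocol, cross-referencing other findings
like [bloodwork_baseline](bloodwork_baseline.md) and citing studies by PMID,
e.g. [42041237](https://pubmed.ncbi.nlm.nih.gov/42041237/).

## Honest caveats

- What would break this conclusion
- What's contested
- What hasn't been tested on me
```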

What this directory is NOT

  • ❌ Not a notebook for raw observations — that's findings/_active.md "open questions" or _todo.md "things to investigate"
  • ❌ Not a journal — daily training observations go through the morning-summary corrections flow (see ADR 003)
  • ❌ Not auto-generated — _followups_summary.md is the only auto file; everything else is hand-edited
  • ❌ Not the kb — claims and studies live in kb/kb.duckdb

Auto-archive convention

_active.md grows over time as workstreams ship. Trim closed work to _archive/YYYY-MM.md once per month, keeping _active.md under the ~600-line cap. See feedback memory.

Triage state files

Adjacent to findings (live in kb/, not findings/):

  • kb/watches.yaml — declarative: "for finding X, watch for kb claims matching pattern Y"
  • kb/applied.yaml — append-only: "claim Z was applied to finding X on date D"
  • kb/dismissed.yaml — append-only: "claim Z is not relevant to finding X"

kb/triage.py (TUI) and kb/review.py (queue builder) read these. Both stay rebuildable across kb regenerations because the YAMLs are durable.
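
As a rough sketch of the append-only shape (field names and claim ids are assumptions for illustration; whatever kb/triage.py actually writes is authoritative):

```yaml
# applied.yaml — hypothetical record: claim Z applied to finding X on date D
- claim: 18423                  # kb claim id (assumed field name)
  finding: bloodwork_baseline
  date: 2026-05-01

# dismissed.yaml — hypothetical record: claim Z not relevant to finding X
- claim: 18424
  finding: bloodwork_baseline
```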