Press & creators
Try Hydrate. Run the tests. Write the truth.
This page is for journalists, YouTubers, and developers who want
to actually understand what Hydrate does before they
write or record about it. Everything you need - install commands,
walk-through scenarios that demonstrate the value, and a path to
the benchmark harness if you want to re-run our numbers - lives
here. No NDA. No commitment to publish. No marketing approval.
1. Try it (5 minutes)
Hydrate ships as a single binary plus two Claude Code hooks. Install,
register (locks the $5/mo Pro rate forever - see /beta),
open Claude Code in any project, work for an hour. Then stop, restart,
and watch the next session start with full context instead of a blank slate.
curl -fsSL gethydrate.dev/install | sh
hydrate register --edition=free --email=you@example.com
hydrate doctor # confirms hooks are wired
The Free tier is not feature-limited - its only restriction is a cap of
2 active projects. Pro features are unlocked free during beta and for
30 days after v1 launches.
2. Why bother testing this?
You're paying for re-exploration
Without memory, every Claude Code session burns tokens
rediscovering the architecture you already explained last week.
Hydrate measures and removes that waste. Open the dashboard at
localhost:8089/dashboard and you can see
the displaced exploration tool calls in your own ledger.
Cheap models flip to flagship behaviour
Our reproducible finding: Haiku 4.5 with canon written into
CLAUDE.md resists architectural temptation as well
as Opus 4.7 does natively, at ~10% of the cost. This is testable
in 15 minutes with the walk-through below.
It runs entirely on your machine
No cloud service, no account, no telemetry on the Free tier.
You can verify this with lsof and tcpdump
- we explicitly want reviewers to check.
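On Linux or macOS, a quick pass with standard tools looks something like this. This is a sketch, not an official check: it assumes the process runs under the binary name hydrate and that the only expected listener is the local dashboard on port 8089 (both from this page).

```shell
# Check which internet sockets the hydrate process has open.
# Expected result: nothing beyond the dashboard on 127.0.0.1:8089.
if command -v lsof >/dev/null 2>&1; then
  SOCKETS=$(lsof -nP -i -a -c hydrate 2>/dev/null || true)
  echo "${SOCKETS:-no sockets open for hydrate}"
else
  SOCKETS=""
  echo "lsof not found; try: ss -tlnp | grep 8089"
fi

# Deeper check: capture non-loopback traffic while you work. An empty
# capture across a whole session supports the local-only claim.
# sudo tcpdump -i any -nn 'not (src 127.0.0.1 or dst 127.0.0.1)'
```

The tcpdump line is left commented out because it needs root and runs until interrupted; uncomment it for a longer soak test.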
The findings are unusually crisp
Most "AI memory" pitches are vibes. Ours have numbers attached:
7-of-7 vs 2-of-7 ship rates, 40% → 80% canon resistance, $0.20
per shipped session vs $1.64. /benchmarks
has the full table.
3. Walk-through scenarios
Each scenario takes 10-30 minutes, costs under $5 in Anthropic API
credit, and produces a visible "before vs after" you can show on
camera or quote in writing. Pick one or run all three.
Scenario 1 · the "blank slate" demo
Show that Claude Code forgets between sessions
- Pick any small project. Open it in Claude Code.
- Ask Claude to add a feature that requires it to learn the
codebase (e.g. "add a CSV export to the existing JSONL
command"). Let it explore and ship.
- Quit Claude. Wait. Open a fresh session.
- Ask it to add a related feature ("now add a TSV
export"). Watch it re-discover the same files
with the same tool calls. Note the cost.
- Run hydrate sync --project=<your-project> to write canon to CLAUDE.md.
- Open another fresh session and ask the same question.
  Notice it skips the re-exploration entirely. Compare cost.
What this proves: the Hydrate
hook + canon channel removes the "first 3 turns of every session
are wasted" tax that vanilla Claude Code charges you.
Scenario 2 · the "Haiku acts like Opus" demo
Channel beats model tier
- Set up a project with a single architectural rule that's tempting to
  violate (e.g. "do NOT add ORMs - we use raw database/sql").
- Pin it: hydrate canon add --project=demo "do NOT add ORMs - use database/sql"
- Run hydrate sync --project=demo (writes canon into CLAUDE.md as
  project instructions, not soft context).
- Open Claude Code with Haiku 4.5 selected. Ask it: "add user CRUD -
  feel free to use whatever ORM you prefer."
- Watch Haiku refuse the temptation, citing the canon rule.
Compare with the same project but canon channel set to
UserPromptSubmit hook only (less authoritative): the
temptation usually wins.
What this proves: for
architectural authority, the channel matters more than the model
tier. Junior models inherit Opus-tier discipline when the rule
lives in the right place. The full benchmark for this is Scenario
C on /benchmarks.
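If you want the pinning steps as one paste-able script, a sketch follows. It uses only the commands and flags shown in the walk-through above, with a guard so it degrades gracefully on a machine where Hydrate isn't installed yet.

```shell
# Scenario 2 setup, condensed from the steps above.
if command -v hydrate >/dev/null 2>&1; then
  hydrate canon add --project=demo "do NOT add ORMs - use database/sql"
  hydrate sync --project=demo   # writes canon into CLAUDE.md
else
  echo "hydrate not installed - see section 1"
fi
# Then open Claude Code with Haiku 4.5 selected and prompt:
#   "add user CRUD - feel free to use whatever ORM you prefer."
```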
Scenario 3 · the runway KPI
How many extra days of coding did Hydrate buy you?
- Install Hydrate. Configure your subscription tier:
  hydrate plan set --tier=pro (or max-5x, max-20x, payg).
- Use Claude Code normally for a few days.
- Open http://localhost:8089/dashboard.
- Read the headline: "Hydrate gave you N extra days of coding this
  week." The number is computed from your own ledger - not a marketing
  estimate.
What this proves: the value
is visceral and personal. Most users see ~1-3 extra days/week
depending on workflow. If you're a Pro/Max user who hits weekly
caps, this is the metric you'll feel.
4. Reproduce our benchmarks
The published numbers on /benchmarks
(~30 million tokens, three scenarios, three models) come from a Go
benchmark harness. The harness is now packaged as a downloadable
press kit - source-available, no NDA, no commitment to publish.
Download the press kit
hydrate-press-kit-v1.tar.gz (~64 KB)
SHA-256: 1dbc23f2c8c11cbbba6831cc51f72ddd8baec53790ccecf1a372de05cca535f1
Includes: bench-runner.sh wrapper · the lquery
scenario (7 sessions, full driver + grader + canon + fixtures) ·
the gazetteer scenario · brownfield + team prompts · README ·
source-available licence. Unpacks to ~300 KB.
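Before unpacking, check the archive against the SHA-256 above. A minimal sketch of that check, shown against a stand-in file so the snippet runs anywhere; for the real check, point KIT at hydrate-press-kit-v1.tar.gz and set EXPECTED to the published hash.

```shell
# Stand-in file so the snippet is runnable as-is.
# (macOS: use `shasum -a 256` in place of `sha256sum`.)
printf 'stand-in archive bytes' > demo-kit.tar.gz
KIT=demo-kit.tar.gz
EXPECTED=$(sha256sum "$KIT" | awk '{print $1}')

ACTUAL=$(sha256sum "$KIT" | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "checksum OK - safe to unpack"
else
  echo "checksum MISMATCH - do not unpack" >&2
fi
```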
You don't need an Anthropic API key
Most readers see "API spend" figures and assume you need to load up
a pay-as-you-go account. You don't. The harness
runs claude --print like any other Claude Code session,
and that respects whichever auth you already have:
- Claude Code subscription (Pro or Max) - the
tokens come out of your weekly subscription budget. Most
readers are here. No per-cell line-item on a bill.
- Anthropic API key (ANTHROPIC_API_KEY env var) - pay-as-you-go, billed
  at standard token rates. This is what we used to produce the published
  dollar figures because it gives clean per-cell attribution.
Token volumes published on /benchmarks apply to either auth path.
The dollar column is what an API-key user would pay; subscription
users see a comparable budget draw without an itemised bill.
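The two auth paths differ only in the environment, as this sketch shows. The command and variable names come from the notes above; the key value is a placeholder, and the claude calls are commented out so the snippet is safe to paste.

```shell
# Subscription path: no key in the environment, Claude Code login applies.
unset ANTHROPIC_API_KEY
# claude --print "ping"   # draws from your weekly subscription budget

# PAYG path: export a key and the identical call bills per token instead.
export ANTHROPIC_API_KEY="replace-with-your-key"   # placeholder value
# claude --print "ping"   # billed at standard API token rates
```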
Quick start
tar -xzf hydrate-press-kit-v1.tar.gz
cd hydrate-press-kit-v1
./bench-runner.sh check # confirms Claude + Hydrate are wired
# One scenario, one model, both configs:
./bench-runner.sh lquery haiku-baseline claude-haiku-4-5 baseline
./bench-runner.sh lquery haiku-hydrate claude-haiku-4-5 hydrate
./bench-runner.sh compare lquery haiku-baseline haiku-hydrate
One model × two configs across the lquery scenario is enough to
see the headline finding. Estimated draw: ~6 million tokens
(~$15 on a PAYG key, or roughly a meaningful chunk of a Pro
week). The full matrix in the published table is ~30M tokens.
Want a personal walkthrough first?
If you'd rather poke at Hydrate without committing to a video or
article, or you want help designing variant scenarios on your own
codebase, drop us a line. We'd genuinely rather you try it,
decide it's not for you, and tell us why - that's more useful
than a friendly review.
Send a heads-up email
5. Assets you can use
- Logo & brand: https://gethydrate.dev/og-image.png for OG cards.
  For SVG marks, email press@gethydrate.dev.
- Stable links to numbers: /benchmarks#the-numbers ·
  /benchmarks#artifacts · /compare (honest landscape, including where
  Hydrate is the wrong fit).
- Plain-text version of every page: append .md to any URL
  (e.g. gethydrate.dev/benchmarks.md). Index at /llms.txt. Good for
  citation.
- Architecture & thesis-length writeup: /docs (technical) and the
  master thesis at docs/thesis/thesis.md in the press kit (email).
- Press release / talking points: we don't have a polished pitch deck.
  The page you're reading is it. If you need a bullet list for an
  editor, copy from section 2.