Compare · 2026-04-19
How Hydrate compares to everything else.
The "memory for AI" category is crowded, and most of the tools
you will see aren't actually solving the same problem. This page
names the real competitors, the adjacent tools, and the cases
where a different product would serve you better. No marketing
hedging - specific yes/no on each axis.
One-sentence read
Hydrate is the only tool in this landscape that is
local-first, writes
authoritative canon into CLAUDE.md,
syncs across teams via git alone (no cloud infra),
and publishes reproducible benchmarks showing
cheaper models reaching flagship-model behaviour
on architecture-sensitive work. If any two of those matter to you,
we're the right fit. If none of them do, the table below names
the tool you probably want instead.
The comparison matrix
Every row is a real product. Every column is a question a buyer
asks. "-" means the product doesn't serve that axis, not that it's
bad at it.
| Product | Category | Hosting | IDE / assistant integration | Architectural canon to CLAUDE.md | Team sync | Pricing | Self-host |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Hydrate | local memory layer | local | hooks (Claude Code) + MCP | yes - hydrate sync → CLAUDE.md | git (zero infra) | $0 / $9 / $29 / custom | yes |
| Pieces | OS-level developer memory | local (optional cloud) | MCP to Claude, Cursor, Copilot, Goose | no | cloud only | freemium, not shown | no |
| Supermemory | memory-as-a-service | cloud SaaS | plugin - Claude Code, Cursor, OpenCode, OpenClaw (Pro+) | no | cloud | $0 / $19 / $399 / custom | enterprise only |
| Mem0 | memory infra for agent builders | cloud + self-host (K8s, air-gap) | - | no | enterprise only | not shown | yes |
| Zep | graph memory for AI agents | cloud (Graphiti is open) | - | no | cloud | free tier + paid (undisclosed) | partial |
| Letta (MemGPT) | agent-building framework | local-first | - | no | no | free + managed (undisclosed) | yes |
| Windsurf | full IDE with Cascade agent | cloud | is the IDE | no | via the IDE | freemium + paid | no |
| Sourcegraph Cody | codebase-search assistant | cloud (on-prem enterprise) | VS Code, JetBrains, web | no | enterprise search index | free + enterprise | enterprise only |
| Claude Projects | native Claude.ai project knowledge | cloud (Anthropic) | Claude desktop/web only | no (soft project instructions) | workspace seats | part of Pro / Team | no |
Per-tool honest read
Pieces · the closest competitor
Pieces is the one tool on this page that shares Hydrate's core
positioning: local-first memory for developers using LLM
coding assistants. Their "LTM-2" engine claims nine
months of on-device searchable memory, and they integrate via
MCP with Claude, Cursor, GitHub Copilot, and Goose. They capture
context OS-wide - IDE, browser, docs, chat - not just coding
sessions.
Where they win over us: breadth. If you want memory that spans
your whole computer, not just Claude Code, Pieces is the
product. Their capture surface is wider.
Where Hydrate wins: architectural canon via
CLAUDE.md (measured to flip Haiku from
silent compliance to Opus-class pushback on adversarial prompts;
see our benchmarks), git-based
team sync with zero infrastructure (Pieces team sync
goes through their cloud), and published
benchmarks you can reproduce yourself with an API key
and about $85 of credit.
Pricing comparison is impossible: Pieces doesn't publish tier
figures. Our Free tier at $0 is a known quantity.
Supermemory · horizontal memory SaaS
Supermemory ships memory as a cloud service. Free
tier at 1M tokens/mo, Pro at $19/mo for 3M, Scale at $399 for
80M, Enterprise custom. Supports multi-modal content (PDFs,
audio, emails, web crawlers) and lists Claude Code, Cursor,
OpenCode, and OpenClaw as Pro-tier plugins.
Where they win over us: content breadth. Supermemory is
designed to be the memory of your professional life - emails,
reports, documents, browser history - not just coding sessions.
If your primary memory need is across productivity surfaces,
they're closer to that than we are.
Where Hydrate wins: data custody (Supermemory
holds your data; we don't - nothing leaves your laptop),
per-seat pricing (their token-metered model is
$19/mo for 3M tokens; ours is $9/mo with no token meter at all),
no double-billing of your Claude Max subscription
(Supermemory charges per token you send; we charge per seat
regardless of Claude usage), and the CLAUDE.md canon
channel.
Honest read: if you need memory across emails/PDFs/web *and*
Claude Code, Supermemory covers more ground. If you need memory
only for coding and care about keeping your data off cloud
infrastructure, we're the better fit.
Mem0 · memory infra for agent builders
Mem0 solves a different problem. They're aimed at developers
building AI agents - chatbots, customer-service bots,
assistants - and give those agents a persistent memory via
Python/JavaScript SDK. They claim 80% token reduction through
memory compression, cite 100,000+ developers, and support
enterprise self-hosting on Kubernetes or air-gapped servers.
Not a real competitor for the same buyer. If you're building
an agent, Mem0 is a sensible choice. If you're a developer
using Claude Code and want your own assistant to remember what
you built yesterday, Mem0 doesn't address your use case -
there's no IDE or coding-assistant integration published.
Zep · graph memory for AI agents
Zep builds a temporal knowledge graph of an agent's context -
entities, relationships, fact invalidation. Their open-source
core (Graphiti) is available; the managed service is cloud.
They target teams building agents, not developers using coding
assistants.
Same positioning as Mem0: adjacent market, not direct
competition. Worth knowing about if your engineering work
includes building agents in addition to using LLM coding
assistants - the two use cases can be served by different
tools without much overlap.
Letta (formerly MemGPT) · agent framework
Letta is the heir to the 2023 MemGPT paper - a local-first
framework for building personalised agents with persistent
memory. Open source. CLI and desktop app. Very solid technical
foundation.
Again, adjacent: Letta is for building agents, not for using
coding assistants. If you want to build a custom agent that
remembers things and pays no per-seat fee, Letta is the right
tool. It won't help your Claude Code session remember
yesterday's refactor.
Windsurf · full IDE with baked-in memory
Windsurf is a complete editor (formerly Codeium) with an agent
called Cascade, autonomous terminal execution ("Turbo Mode"),
and a "Memories" feature that remembers things about your
codebase and workflow. They compete by owning the editor.
Where they win: if you want an integrated everything-in-one
experience and you're happy to switch editors, Windsurf is a
complete offering. The memory is one feature inside a bigger
product.
Where Hydrate wins: editor independence. Our
users stay on Claude Code, Cursor, Zed, Cline, or Gemini CLI -
whichever editor they already have. Hydrate is a memory layer,
not an editor replacement. You can adopt Hydrate without giving
up anything. Adopting Windsurf means switching editor.
Sourcegraph Cody · codebase-search assistant
Cody's differentiator is deep integration with Sourcegraph's
search over massive codebases. It's the right choice for
enterprise engineering teams working on millions of lines of
code across many repositories, where "find the right code" is
the bottleneck.
Not really a memory tool. Cody fetches relevant context from
your codebase on demand; it doesn't remember what you decided
yesterday or maintain architectural canon. Different axis.
Claude Projects · Anthropic's native feature
Claude.ai has a "Projects" feature where you can upload
documents and set custom instructions per project. For a
consumer or light-use case on the Claude web/desktop product,
it is genuinely useful and already paid for by your Pro or Team
subscription.
Where it falls short for Hydrate's target user: Claude
Projects doesn't work with Claude Code. The CLI tool
developers use for real coding work sits outside the Projects
surface; whatever you've uploaded into your Claude.ai project
is not visible to claude --print. Hydrate's whole
reason for existing is the gap between "the Claude product"
and "the Claude Code CLI". Projects doesn't bridge it.
What developers actually complain about - and what we do about it
A recent thread in r/Rag
captured the state of the market: one CTO said they use Supermemory only because "there are
no other good alternatives" and the customer experience "is really bad". The OP named three
specific failures of current memory products. We address each of them directly, with shipped code:
"Memory junk - memory getting filled with the same repetitive information"
This is the complaint levelled most often at Mem0: the extractor re-emits near-duplicate
facts every session and the store bloats until retrieval produces the same mush back at
the model.
Hydrate's counter: content-hash dedup at insert time (identical text can
never re-enter), Jaccard-0.80 near-duplicate dedup at injection time
(facts that paraphrase a previously-injected fact are skipped), pinned canon
for architectural rules that must not be overwritten by later extractions, and
an extractor hallucination filter that drops language/toolchain claims
the session transcript never mentioned. The last one is a direct response to extractor
failure modes we found in our own Scenario C benchmark.
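The two dedup gates described above can be sketched in a few lines. This is a toy illustration, not Hydrate's implementation: the `FactStore` class, SHA-256 hashing, and whitespace tokenisation are stand-ins we chose for clarity; only the 0.80 Jaccard threshold comes from the description above.

```python
import hashlib

def jaccard(a: str, b: str) -> float:
    """Similarity of two facts treated as word sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

class FactStore:
    """Toy store showing the insert-time and injection-time gates."""
    def __init__(self):
        self.hashes = set()
        self.facts = []

    def insert(self, fact: str) -> bool:
        """Insert-time gate: identical text can never re-enter."""
        h = hashlib.sha256(fact.encode()).hexdigest()
        if h in self.hashes:
            return False
        self.hashes.add(h)
        self.facts.append(fact)
        return True

    def select_for_injection(self, candidates, threshold=0.80):
        """Injection-time gate: skip facts that paraphrase one
        already chosen for this prompt."""
        chosen = []
        for fact in candidates:
            if all(jaccard(fact, c) < threshold for c in chosen):
                chosen.append(fact)
        return chosen
```

The point of two separate gates is that exact duplicates are cheap to reject at write time, while paraphrase detection only needs to run over the small set of facts competing for one prompt's injection slot.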
"Agents lose context as the thread grows"
Within a long session, Claude Code's own context window fills and earlier decisions get
compacted out. Memory layers that only inject at session start don't help.
Hydrate's counter: the UserPromptSubmit hook fires on every
prompt, not just session start, so relevant facts are re-injected after
compaction would have dropped them. Session summaries are stored at three compression
tiers (16 / 32 / 64 tokens) so future sessions pull the compact version, not the raw
transcript. And canon in CLAUDE.md sits in project instructions - above
the conversation - so it survives every compaction automatically.
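The tier mechanic can be illustrated with a toy selector. The function name and the budget logic are hypothetical; only the 16/32/64-token tiers come from the text above.

```python
TIERS = (64, 32, 16)  # summary sizes, richest first

def pick_summary(summaries: dict, budget_tokens: int) -> str:
    """Return the richest stored summary that fits the remaining
    context budget; fall back to the most compact tier otherwise.

    `summaries` maps tier size (tokens) -> summary text.
    """
    for tier in TIERS:
        if tier <= budget_tokens and tier in summaries:
            return summaries[tier]
    return summaries[min(summaries)]
```

Storing all three tiers up front means tier selection at injection time is a dictionary lookup, not a re-summarisation call.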
"Not providing the right context at the right time when the corpus is changing"
Memory layers that treat facts as write-once serve stale state when the ground truth
moves.
Hydrate's counter: Ebbinghaus-inspired decay with spacing effect
(frequently-accessed facts stay strong, ignored facts fade), pinning protects
canon from being overwritten by new extractions, embedding-based
relevance ranking at every injection (not cached), and hydrate sync
is idempotent - when you edit canon, the next sync rewrites CLAUDE.md
instantly with no manual cleanup.
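The decay-with-spacing idea reduces to one formula: retention falls exponentially with time since last access, and each past access stretches the half-life. The constants and the log-shaped spacing boost below are illustrative choices, not Hydrate's actual tuning.

```python
import math

def retention(days_since_access: float, access_count: int,
              base_halflife_days: float = 7.0) -> float:
    """Ebbinghaus-style exponential decay with a spacing-effect boost.

    Frequently accessed facts get a longer half-life and stay strong;
    ignored facts fade toward zero and drop out of ranking.
    """
    halflife = base_halflife_days * (1 + math.log1p(access_count))
    return math.exp(-math.log(2) * days_since_access / halflife)
```

With these illustrative constants, an untouched fact halves in strength every seven days, while one accessed ten times takes roughly three times as long to fade by the same amount.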
"Mem0 wasn't plug-and-play - format wrangling, extra LLM call for retrieval, latency too slow"
A commenter on the same thread (score 2) described building a custom internal memory
layer because Mem0 required schema generation + query rephrasing on every retrieval -
an extra LLM call in the hot path.
Hydrate's counter: install is one curl command, 60 seconds end-to-end.
Retrieval at prompt time is one local server call - vector cosine
plus rank, no LLM. Typical latency under 100 ms. No format wrangling:
facts are strings with a category tag. The thread commenter is describing the custom
in-house build they shipped in place of Mem0. We're shipping that as a product.
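"One local server call, vector cosine plus rank, no LLM" is small enough to show in full. The function names and the plain-list embeddings are illustrative; the point is that nothing in the hot path calls a model.

```python
import math

def cosine(a, b) -> float:
    """Cosine similarity of two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_facts(prompt_vec, facts, top_k=5):
    """facts: list of (text, embedding) pairs.

    Pure local arithmetic - no schema generation, no query
    rephrasing, no extra LLM call before retrieval.
    """
    scored = sorted(facts, key=lambda f: cosine(prompt_vec, f[1]),
                    reverse=True)
    return [text for text, _ in scored[:top_k]]
```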
What Hydrate uniquely does
The matrix above shows overlap at the surface. Below are the four
genuinely novel capabilities that none of the other products in the
landscape ship:
- Authoritative canon via
CLAUDE.md.
The hydrate sync command writes pinned architectural
rules into your project's CLAUDE.md, which Claude
Code reads as project instructions - the same authority tier as
on-disk code. Our Scenario C benchmark (2026-04-19) measured
Haiku 4.5 refusing canon-violating adversarial prompts at the
same rate as Opus 4.7, for roughly one tenth of the Opus cost.
Competitors deliver memory only as soft context; none write
into the authoritative channel.
- Zero-infrastructure team sync via git.
The Team tier uses your existing private git repository to
share canon. A senior developer pins canon on their laptop,
runs
hydrate sync, commits the resulting
CLAUDE.md block, pushes. Every teammate's
git pull brings the new canon to their machine;
Claude Code reads it on next session. No Hydrate cloud service
is involved. This is uniquely cheap (we charge per seat, not
per API call into a central store) and uniquely private (your
team's canon never traverses a third party's servers).
- Measured tier substitution.
We publish reproducible benchmarks showing that
cheaper or older models, given the right memory and canon,
match the behaviour of the flagship tier on
architecture-sensitive work. Nobody else in the memory-layer
landscape has done this measurement publicly. See the full
results on the benchmarks page.
- The Runway KPI.
Hydrate translates the tokens it saves into
"extra days of coding this week" - a headline KPI calibrated to
your Anthropic Max plan's capacity. You see the saving as
additional productive time, not as a dollar figure on a page
you have to interpret. It's a small design choice that
materially changes how developers perceive the product's value.
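The zero-infrastructure sync in the list above is plain git. Here is a runnable sketch of the flow; the canon block markers and commit message are invented for illustration, and the `hydrate sync` step is simulated with a heredoc so the git mechanics work without the Hydrate CLI.

```shell
# Stand up a throwaway repo in place of the team's private remote.
repo=$(mktemp -d)
cd "$repo"
git init -q .

# 1. Senior dev pins canon locally; `hydrate sync` writes it into
#    CLAUDE.md (simulated below with illustrative markers).
cat > CLAUDE.md <<'EOF'
<!-- hydrate canon (illustrative markers) -->
- All services talk to Postgres through the repository layer.
<!-- end hydrate canon -->
EOF

# 2. Commit and push; every teammate's `git pull` delivers the
#    canon, and Claude Code reads CLAUDE.md on their next session.
git add CLAUDE.md
git -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "canon: repository-layer rule"
git log --oneline -1
```

Because the canon travels as an ordinary tracked file, it inherits everything git already gives the team: history, review via pull requests, and rollback with a revert.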
When Hydrate isn't the right pick
We're not the answer to every memory question. Straight honesty:
- If your memory need spans emails, PDFs, audio, web
history, and calendar, not just coding sessions, then
Pieces (local) or Supermemory (cloud) will give you more.
- If you're building an agent rather than using
a coding assistant, use Letta, Mem0, or Zep. They're built for
that job.
- If you already use an IDE like Windsurf or Cursor and are
happy to take their in-product memory, and you do not need
canon resistance or local-first data custody, use the memory
feature that's bundled with your IDE.
- If you work exclusively in Claude.ai (web / desktop) and not
Claude Code (CLI), use Claude Projects. It's already paid for
and sufficient.
- If your primary need is codebase search across millions
of lines, use Sourcegraph Cody. Memory isn't your
problem; find-the-right-code is.
Where we fit
Hydrate is for the developer who uses Claude Code (or Cursor /
Zed / Cline via MCP) on a laptop and wants that tool to
remember decisions across sessions, carry architectural canon
that resists contradiction, sync across a small team with no
SaaS dependency, and not cost anything on the Free tier. If
that is you, install Hydrate.
Everything on the Free tier is free forever.