Hydrate memory is now on Gemini CLI
We shipped hydrate-mcp today. One Go binary, speaks MCP over
stdio, every MCP-capable agentic tool gets Hydrate memory. Proven
end-to-end on Gemini CLI in under an hour.
The real proof
No hypotheticals, no diagrams of what would happen. We registered
hydrate-mcp with Gemini CLI 0.26 using the one-command install,
then asked Gemini a single question scoped to a real project
(civichub-c) in the user's enterprise Hydrate. Gemini decided to
call hydrate_recall, got rendered facts back, and answered from
them.
$ gemini mcp add hydrate hydrate-mcp
$ gemini --prompt "Call hydrate_recall with query='database config' for project civichub-c"
[tool_use: hydrate_recall]
[tool_result]
The database name is 'postgres', and the schema name is 'civichub'.
The application uses a dedicated admin user 'civichub_admin' for DDL operations.
Migrations are tracked in the 'schema_migrations' table.
...

That's a real fact, pulled from a real Hydrate store, through a real MCP handshake, delivered to a real Gemini session. No mocks. That specific line — "The database name is 'postgres', and the schema name is 'civichub'" — is a fact the user told Claude Code weeks ago, and today Gemini used it without a human ever pasting it in.
Why MCP, not a bespoke Gemini plugin
Every agentic coding tool has its own plugin surface. Claude Code has hooks. Cursor has its composer extensions. Cline has custom instructions. Writing one integration per tool is a tax you pay forever — each release cycle you're chasing six moving APIs.
Model Context Protocol skips that. It's Anthropic's open protocol, already adopted by Gemini CLI, Cursor, Cline, Continue, Windsurf, Zed, and Claude Desktop. You write the server once, speak MCP over stdio, and every MCP-capable client can consume your tools. One binary, N clients, one set of docs per tool for the one-line registration command.
hydrate-mcp is ~500 lines of Go. It exposes three tools
(hydrate_recall, hydrate_save_fact,
hydrate_list_projects) and reads its config from env vars at
call time — so the same binary serves Free-tier users on localhost and
Enterprise users against siteengine_ai.
What's next
The same binary, the same three tools, the same config pattern will light up on the rest of the MCP-capable tools. Expected this month:
- Cursor — MCP config via Settings → Features → Model Context Protocol.
- Cline — via its cline_mcp_settings.json.
- Continue.dev — via its config.json MCP server entry.
- Windsurf — via its MCP tool registry.
- Zed — via the context_servers key in settings.
- Claude Desktop — via claude_desktop_config.json.
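For a sense of how small those snippets are, this is the general shape of an MCP server entry using Claude Desktop's claude_desktop_config.json as the example, assuming hydrate-mcp is already on the PATH (check the docs page for the exact snippet per tool):

```json
{
  "mcpServers": {
    "hydrate": {
      "command": "hydrate-mcp"
    }
  }
}
```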
Each one gets a short page under /docs/integrations with the exact config snippet. No new binary, no new protocol, no new maintenance burden per tool.
The honest caveat
MCP is tool-use: the model has to decide to call Hydrate. Claude Code's native hooks are auto-inject: they fire on every turn whether the model wants to or not. These are different patterns, and both are useful.
Auto-inject is better when you want zero friction and unconditional
continuity — it's why Claude Code is still our flagship integration.
Tool-use is better when you want the model to stay in control and only pull
memory when it needs it, which keeps context windows leaner. You can nudge
Gemini toward the auto-inject end by adding a system instruction to
~/.gemini/settings.json — the
Gemini CLI doc has the snippet
— but it's still the model's choice.
That's fine. Different agents have different personalities. Hydrate's job is to be the fact store they all reach for.
Install, register, prompt
Three commands, nothing else.
curl -fsSL gethydrate.dev/install | sh
gemini mcp add hydrate hydrate-mcp
gemini --prompt "Call hydrate_recall with query='tech stack' and summarise in 3 bullets"