Agentmemory

Persistent memory for Claude Code, Codex & coding agents

Open Source
Developer Tools
Artificial Intelligence
GitHub

Hunted by fmerian

You can now give Hermes, Claude Code, and Codex infinite memory. Agentmemory is trending on GitHub with 5,000+ stars. At 240 observations, a CLAUDE.md dump puts 22,000+ tokens into context; Agentmemory uses 1,900 tokens for the same observations, 92% less. At 1,000 observations, 80% of your built-in memories become invisible, while Agentmemory keeps 100% of them searchable. Benchmarked on 240 real coding sessions: up to 95% fewer tokens per session, 200x more tool calls before hitting context limits, and 100% open source.

Top comment

Hey Product Hunt 👋

I built AgentMemory because coding agents still have one painful limitation: they forget between sessions.

  • You explain your architecture once.

  • You debug a production issue once.

  • You decide on a library or pattern once.

Then the next session starts from zero again.

AgentMemory gives AI coding agents persistent memory across sessions, so they can actually build on what they’ve already learned about your codebase. It automatically captures what your agent does, compresses it into structured memories, indexes them with hybrid search, and injects the right context back into future sessions.
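The launch post names hybrid search (BM25 + vector) as the retrieval layer but doesn't show the implementation, so here is a toy, self-contained sketch of how the two scores can be blended. Everything here is illustrative: the `embed` function is a bag-of-words stand-in for a real embedding model, and the function names, weights, and sample memories are assumptions, not AgentMemory's actual code.

```python
from collections import Counter
import math

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Toy BM25 over whitespace-tokenized documents."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    q_terms = query.lower().split()
    # Document frequency for each query term.
    df = {t: sum(1 for doc in tokenized if t in doc) for t in q_terms}
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        s = 0.0
        for t in q_terms:
            if df[t] == 0:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            denom = tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            s += idf * tf[t] * (k1 + 1) / denom
        scores.append(s)
    return scores

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def embed(text):
    """Stand-in embedding: term counts over a tiny fixed vocabulary.
    A real system would call a learned embedding model here."""
    vocab = ["auth", "token", "database", "cache", "refactor", "api"]
    words = text.lower().split()
    return [words.count(v) for v in vocab]

def hybrid_search(query, docs, alpha=0.5, top_k=2):
    """Rank docs by a blend of normalized BM25 and vector similarity."""
    bm25 = bm25_scores(query, docs)
    max_bm25 = max(bm25) or 1.0
    vec = [cosine(embed(query), embed(d)) for d in docs]
    blended = [alpha * (s / max_bm25) + (1 - alpha) * v
               for s, v in zip(bm25, vec)]
    ranked = sorted(range(len(docs)), key=lambda i: blended[i], reverse=True)
    return [docs[i] for i in ranked[:top_k]]

memories = [
    "decided to cache api responses for 5 minutes",
    "auth token refresh bug fixed in session middleware",
    "database schema uses soft deletes everywhere",
]
print(hybrid_search("auth token bug", memories, top_k=1))
```

The lexical score catches exact identifiers (function names, error strings), while the vector score catches paraphrases; blending the two is what lets a short query pull the right memory even when the wording has drifted.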

It works with Claude Code, Cursor, Codex CLI, Gemini CLI, Windsurf, Kilo Code, OpenCode, Cline, Roo, Goose, Aider, Hermes, OpenClaw, and basically any MCP or REST-capable agent.

From day one, I wanted it to be:

  • 100% open source

  • Free to run locally

  • No external database required

  • Works via MCP, REST, and simple hooks

  • Built for real coding workflows, not toy “chat history” memory

On benchmarks, AgentMemory gets 95.2% R@5 and 98.6% R@10 on the LongMemEval-S retrieval suite using BM25 + vector search, while cutting context usage by around 92%.
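For readers unfamiliar with the metric: R@k (recall at k) is the fraction of queries for which at least one gold-relevant item appears among the top-k retrieved results. A minimal illustration of the computation (not the LongMemEval harness itself; the query and memory ids are made up):

```python
def recall_at_k(retrieved, relevant, k):
    """Fraction of queries where any relevant item appears in the
    top-k entries of that query's retrieved ranking."""
    hits = sum(1 for ret, rel in zip(retrieved, relevant)
               if any(r in ret[:k] for r in rel))
    return hits / len(relevant)

# Three queries: each row is a ranked retrieval list, paired with
# the gold (relevant) memory ids for that query.
retrieved = [["m1", "m7", "m3"], ["m2", "m9", "m4"], ["m8", "m5", "m6"]]
relevant  = [["m7"], ["m4"], ["m1"]]

print(recall_at_k(retrieved, relevant, 1))  # no gold id ranked first
print(recall_at_k(retrieved, relevant, 3))  # queries 1 and 2 hit within top-3
```

A 98.6% R@10 therefore means that for almost every benchmark query, a correct memory landed somewhere in the first ten results, which is the property that matters when only a handful of memories can be injected back into context.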

Quick start:

Run: npx @agentmemory/agentmemory

If you live in your coding agents every day, this is for the moment you think: “Wait, I already explained this yesterday.”

Would love feedback from builders, heavy agent users, and open‑source maintainers.

GitHub: https://github.com/rohitg00/agentmemory

Comment highlights

The context compression angle is genuinely interesting — 22k tokens down to 1.9k is a meaningful difference. Curious how it handles prioritisation when observations span very different task types (e.g. a debugging session vs. greenfield architecture work). Does it keep those namespaced, or blend into one pool?

Hey! Love it. How well would it help with handling pivots and knowing how my seed-stage startup's narrative/pitch deck and product spec changes over time? I've got canonical documents set up in Cursor, but it still takes a LOT of tidying work and any new scratch brainstorming files ruin the source of truth...

My Claude Code desktop fails to connect after installing this. :-(

Cool project, how are you handling caching to ensure that it doesn't reprocess tokens unnecessarily in longer conversations?

Persistent memory for coding agents is a harder problem than it sounds. You're not just storing conversation history, you're storing codebase context, decisions made, patterns established. The benchmark claim is what I'd want to dig into. Memory that's fast to write is useless if retrieval is noisy. How does it handle context that's become stale after a refactor?

Congrats on the launch.

2 questions:

  1. Will this increase token usage, since the agent needs to look around and search across newer chats?

  2. Will the memory be persistent only in CLI agents, or also in desktop applications such as Codex, Claude, and Cursor?

Really interesting. Can it pick up past sessions, or does it only start once I integrate it? On a side note, is there a way to skip the agentic DB and use Postgres instead?

Great traction! I will give it a try on my current project and see if it brings down hallucination. I like the graph view, so you can easily see what's going on.

Just wondering, how long did this take to make? The database side is very interesting and I think it has a lot of potential for many other things. Good luck!

The cross-session forgetting problem is real. The deeper one you'll hit at scale: when an agent makes a wrong call in week 4 because it remembered a misleading decision from week 1, where does ownership of that mistake sit? Two questions worth thinking about:

  1. Can memory be exported in an open format so agents move with their user, not their runtime?

  2. Is there a way to mark a memory entry as disputed or superseded?

Without those, an agent's persistent memory becomes a liability dressed as a feature.

92% token reduction is huge if it holds on real codebases. Curious how agentmemory handles conflicting observations: when newer context contradicts older stored memory, does recency win automatically or is there a manual override?

Persistent memory across sessions is one of those things that sounds like a dev tool problem but actually changes how useful AI agents are in practice. Right now every session with Claude Code starts from scratch — re-explaining context, re-loading preferences. Curious how Agentmemory handles conflicts when the same context gets updated across sessions. Does it merge, overwrite, or flag it for review?

Well done team! How do you detect when a stored memory contradicts current code state or is pruning still manual?

Well done @rohit_ghumare! I'd love to know what business model you intend to pursue. It looks like everything is free and open source, so I'm just wondering whether you'll keep this as a hobby project, build it seriously, or something else?

Wonderful project. I've already used it locally with Claude Code and it provides an amazing developer experience. Absolutely love the underlying architecture powered by iii: very scalable, very efficient, and hands down the best memory solution out there.

About Agentmemory on Product Hunt

Persistent memory for Claude Code, Codex & coding agents

Agentmemory launched on Product Hunt on May 16th, 2026 and earned 212 upvotes and 29 comments, finishing as #2 Product of the Day.

Agentmemory was featured in Open Source (68.4k followers), Developer Tools (512.4k followers), Artificial Intelligence (468.5k followers) and GitHub (41.2k followers) on Product Hunt. Together, these topics include over 194.2k products, making this a competitive space to launch in.

Who hunted Agentmemory?

Agentmemory was hunted by fmerian. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.

Want to see how Agentmemory stacked up against nearby launches in real time? Check out the live launch dashboard for upvote speed charts, proximity comparisons, and more analytics.