Liminary turns everything you’ve saved into working memory for AI. Unlike chatbots, meeting tools, or project-based notebooks, it gives your knowledge one shared memory across writing, meetings, and research. It surfaces relevant context automatically as you work, helping expert knowledge workers reuse their best thinking, avoid starting from scratch, and produce source-grounded work with traceable citations.
Hey Product Hunt 👋 I'm Sarah, founder of Liminary.
I led ML engineering at Dropbox: semantic search, retrieval, and Dropbox's first generative AI integrations. I built Liminary out of personal frustration: storage is archival. I couldn't save articles, meeting notes, and useful AI conversations in one place, and even when I did, I'd never see any of it again. Lost in closed tabs, various note-taking apps, emails, and AI chats.
AI tool proliferation made it worse, not better. Every new model meant re-benchmarking, redoing workflows, re-feeding context. As a builder, I believe users should get the best model for the job, not chase whichever one shipped this week.
But there's a deeper problem beneath both of those: every AI tool you use is working from what the model thinks is relevant. Trained on the internet, guessing at your context. Not what you've decided matters. That's the gap.
Our team at Liminary is all ex-Dropbox and ex-Google. We built Liminary to close that gap: the memory layer for your AI work. You decide what goes in: files, web pages, YouTube videos, LLM transcripts, Gmail threads. Your AI works from that. Always.
Liminary lives across the surfaces where you work: a browser extension, a writing sidekick in Google Docs, a meetings layer, and a place where everything you save lives and connects.
Three things Liminary does that no other tool can:
Proactive recall. The right knowledge surfaces at the moment of work. You don't search. It finds you.
In-context fact-checking and gap detection. As you write in Google Docs, Liminary validates claims against your own library and flags what's missing from the research you've already done or the information your clients have already shared with you. Not the web, not training data.
Meeting recall, live. No bot in the room. When someone says "Project Atlas," your notes already read "Project Atlas with Alice and Bob [source]." Other meeting tools take notes. Liminary connects what's said to everything you already know.
Built for people who bill for their perspective: independent consultants, fractional leaders, VC analysts and strategists. In a world where everyone uses the same models, your edge is what those models are grounded in.
The work looks like this: you keep ambient context on a small set of clients, accounts, companies, or topics you think about repeatedly. You research them. You meet about them. You produce deliverables about them. Liminary connects all three, so the research, the meetings, and the writing all work from the same knowledge.
What's the one piece of context you wish your AI actually remembered?
The proactive recall idea feels genuinely useful: most knowledge tools still depend on users remembering what to search for, so surfacing relevant context automatically is a real step forward. I'm curious how Liminary handles situations where saved sources conflict with each other. Does it show both perspectives, or mainly prioritize the one it considers most relevant?
The interesting part here isn’t “AI memory” itself, it’s grounding everything in sources you actually chose to save.
Most AI tools still feel like they’re guessing your context half the time. Congrats on the launch guys!
Congratulations on the launch! I've been a beta user for months! What I like about Liminary is that it is not just a place to save links and forget them.
I use it throughout the day to save articles, emails, Substacks, and other sources I want to come back to. I can pull out notes as I go, organize things by theme, and then revisit them later in a way that actually helps me see connections.
The weekly summary is one of my favorite features. It helps me spot patterns, trends, and even contradictions I might have missed when I was reading things one by one.
Plus, the @Liminary team is amazing: super responsive and helpful!
I've been thinking about this exact problem. I built a persistent memory system for my AI agents — each one maintains its own JSON file tracking known issues, trends, and changelog — and the coordination between agents reading each other's memories was the hardest part to get right.
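For anyone curious what that per-agent memory looks like, here's a minimal sketch of the pattern I mean (field names and file layout are just my own conventions, not anything Liminary does):

```python
import json
from pathlib import Path
from datetime import datetime, timezone

class AgentMemory:
    """Minimal per-agent persistent memory backed by a JSON file."""

    def __init__(self, path):
        self.path = Path(path)
        if self.path.exists():
            self.state = json.loads(self.path.read_text())
        else:
            self.state = {"known_issues": [], "trends": [], "changelog": []}

    def record_issue(self, issue):
        # Deduplicate so repeated observations don't bloat the file.
        if issue not in self.state["known_issues"]:
            self.state["known_issues"].append(issue)
            self._log(f"new issue: {issue}")

    def _log(self, entry):
        self.state["changelog"].append(
            {"ts": datetime.now(timezone.utc).isoformat(), "entry": entry}
        )

    def save(self):
        # Fine for a single writer; the hard part is coordination when
        # other agents read this file while its owner is updating it.
        self.path.write_text(json.dumps(self.state, indent=2))
```

Each agent owns exactly one file; cross-agent reads just load another agent's path, which is where the coordination headaches start.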
The "source-grounded with traceable citations" angle is smart. Most AI knowledge tools lose the provenance chain and you end up not trusting the suggestions. Does Liminary handle conflicting information from different sources?
Finally something that actually works to bring together the context mess I've created across my digital universe!
If I type something into ChatGPT, will your service see or remember it? Or does it only work with documents from my computer?
How does your product compare to Anthropic's Dreams feature?
To be sure, some of those features should be LLM-provider-agnostic.
Congrats on the launch. Grounding AI in saved knowledge feels like the right direction, especially for work where the answer depends on private context rather than general internet knowledge.
The hard part I’d be curious about is conflict resolution. Once people save enough snippets, docs, examples, and notes, some of that context will be stale or contradictory. Does Liminary have a way to show which saved source influenced the answer, or to rank “this is current policy” above “this was a random note from six months ago”?
For me, trust in grounded AI comes less from having more context and more from knowing which context won.
Strong work on the extraction architecture. I'm curious how you handle data sovereignty for consultants with NDA'd client materials: is processing local, or do you have isolated tenant architectures? Consultants need strict boundaries between client A's data and client B's data, so engagement-level isolation would matter more than document-level permissions here.
The 'ground in saved knowledge' framing solves the part everyone hand-waves. I lose 20 minutes a day re-pasting the same context blocks into different chats. Curious how you avoid the typical RAG failure mode where the model picks the longest snippet over the most relevant one. Reranker step or pure embedding retrieval?
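To make that failure mode concrete, here's a toy sketch (both scoring functions are made up for illustration, not anything Liminary does): a naive score that rewards raw term matches favors the longer, noisier snippet, while a crude length-normalized rerank recovers the relevant one:

```python
def naive_score(query, snippet):
    # Stand-in for the length-bias failure mode: longer snippets
    # accumulate more raw term matches and win on total score.
    q = set(query.lower().split())
    return sum(1 for w in snippet.lower().split() if w in q)

def rerank_score(query, snippet):
    # Length-normalized overlap: a crude stand-in for a reranker
    # judging relevance density rather than total matches.
    words = snippet.lower().split()
    q = set(query.lower().split())
    return sum(1 for w in words if w in q) / len(words)

snippets = [
    "pricing policy update: enterprise pricing changed in march",
    "long meeting recap mentioning pricing once alongside roadmap, hiring, "
    "pricing of snacks, office pricing policy debates, and other pricing tangents",
]
query = "enterprise pricing policy"

naive_best = max(snippets, key=lambda s: naive_score(query, s))
reranked_best = max(snippets, key=lambda s: rerank_score(query, s))
```

Here the rambling recap wins the naive score on sheer repetition, and normalization flips the ranking back to the dense, on-topic snippet. Real systems would use a cross-encoder reranker rather than term overlap, but the shape of the problem is the same.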
Interesting idea, but I keep thinking about whether "always-on context" actually improves thinking or just adds more noise.
How does it handle conflicting versions of the same idea across different notes or time periods?
Feels like the hardest part here is not retrieval, but knowing what not to bring into the moment.
In real workflows, do people actually maintain structured "knowledge sets," or does it become messy over time?
Does the system ever surface too much context and slow down decision-making instead of helping it?
I wonder if users end up trusting the surfaced context too much, even when it's slightly off.
This feels powerful, but I'm curious how often it pulls "technically relevant" context that's actually not useful in practice.
Congratulations!
The real value, to me, is not saving knowledge, but making past thinking reusable at the exact moment it matters.
How do you handle memory hygiene over time, especially when old context becomes outdated or no longer reflects the user's current thinking?
About Liminary on Product Hunt
“Ground your AI in saved knowledge as you work”
Liminary launched on Product Hunt on May 13th, 2026 and earned 150 upvotes and 47 comments, placing #8 on the daily leaderboard.
Liminary was featured in Chrome Extensions (52.6k followers), Productivity (651.7k followers) and Artificial Intelligence (468.5k followers) on Product Hunt. Together, these topics include over 238.2k products, making this a competitive space to launch in.
Who hunted Liminary?
Liminary was hunted by Ben Lang. A "hunter" on Product Hunt is the community member who submits a product to the platform, uploading the images and the link and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community.
Early days. Honest feedback welcome: liminary.io
~ Sarah and the Liminary Team