Raindrop Workshop is the first local debugger for agents. It's free, local, and open source. Your local agent traces stream, token-by-token, instantly. Another agent like Claude Code can read them over MCP. Then Claude can write evals, replay traces, fix bugs... and do it all over again. This is the Self-Healing Agent loop. And it's only possible on Raindrop. Check it out and star on GitHub here: https://github.com/raindrop-ai/workshop
Raindrop Workshop launched on Product Hunt on May 14th, 2026 and earned 173 upvotes and 24 comments, placing #6 on the daily leaderboard.
On the analytics side, Raindrop Workshop competes within the Open Source, Developer Tools, Artificial Intelligence, and GitHub topics, which collectively have 1.1M followers on Product Hunt. The dashboard above tracks how Raindrop Workshop performed against the three products that launched closest to it on the same day.
Who hunted Raindrop Workshop?
Raindrop Workshop was hunted by Alexis Gauba. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images and link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.
Hey PH! This is Alexis, Co-Founder of Raindrop.
Your agent fails at 1am, its traces are in some SaaS dashboard, the harness is on your machine, the eval suite is in a third place, and Claude sees none of it.
We've been stuck in this loop. So we built our way out of it. Workshop is the first sane way to debug your agent locally.
It has two parts: a local UI and an MCP server.
Every span streams live to a local browser UI and you can replay any agent run with edited prompts, models, and tools.
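To make that streaming model concrete, here is a minimal sketch in plain Python. All names here (`Span`, `LocalCollector`, the callback methods) are made up for illustration and are not Workshop's actual API; the point is just that each step of an agent run is a span, and tokens arrive at the local collector as they are produced rather than after the run finishes:

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """One step of an agent run: an LLM call, a tool call, etc. (hypothetical)."""
    name: str
    tokens: list = field(default_factory=list)

class LocalCollector:
    """Stand-in for a local trace UI: receives spans as they stream in."""
    def __init__(self):
        self.spans = []

    def on_span_start(self, span):
        # A real UI would render the new span immediately.
        self.spans.append(span)

    def on_token(self, span, token):
        # Tokens arrive one at a time, while the model is still generating.
        span.tokens.append(token)

collector = LocalCollector()
span = Span(name="llm.completion")
collector.on_span_start(span)
for token in ["Hello", ",", " world"]:
    collector.on_token(span, token)

print("".join(collector.spans[0].tokens))  # prints "Hello, world"
```

The key design point is that the collector observes the run incrementally, so a failure is visible at the moment it happens, not after the fact.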
The MCP lets you create self-healing eval loops. Claude Code reads your traces, writes evals, and fixes what's broken.
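The read-traces → write-evals → fix cycle can be sketched in a few lines. This is a toy simulation, not Workshop's or MCP's real interface: `read_traces`, `write_evals`, and the two agent functions are invented names standing in for what a coding agent would do over MCP:

```python
# Hypothetical sketch of a self-healing eval loop: observed runs become
# regression evals, and a fix is validated by replaying those evals.

def read_traces():
    # Pretend these records came from the local trace store over MCP.
    return [
        {"input": "2+2", "output": "5", "expected": "4"},  # a buggy run
        {"input": "3+3", "output": "6", "expected": "6"},
    ]

def write_evals(traces):
    # Turn each observed run into a (prompt, expected answer) eval case.
    return [(t["input"], t["expected"]) for t in traces]

def buggy_agent(prompt):
    # Simulates the broken behavior captured in the first trace.
    if prompt == "2+2":
        return "5"
    return str(sum(int(x) for x in prompt.split("+")))

def fixed_agent(prompt):
    # The "fix" a coding agent would write after reading the traces.
    return str(sum(int(x) for x in prompt.split("+")))

def run_evals(agent, evals):
    return [(q, agent(q) == want) for q, want in evals]

evals = write_evals(read_traces())
before = run_evals(buggy_agent, evals)  # [('2+2', False), ('3+3', True)]
after = run_evals(fixed_agent, evals)   # [('2+2', True), ('3+3', True)]
```

The loop closes because the evals are derived from real traces: once the agent passes them, the exact failure that was observed is known to be fixed and stays covered.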
It's free, open source, and works with all the agent SDKs you already use.
One command to install: curl -fsSL https://raindrop.sh/install | bash
Excited to hear what you think!