This product has not been featured by Product Hunt yet. It will not be visible on their landing page and won't be ranked (it cannot win Product of the Day regardless of upvotes).
[Launch dashboard: product upvotes, comments, upvote speed, and overall ranking vs. the next 3 same-day launches; data still loading]
Distill[bot]
Stop drowning in duplicate AI code review feedback
Using multiple AI code reviewers (CodeRabbit, Copilot, Gemini) can give better feedback but creates noise hell — dozens of duplicate comments on a PR. Distillbot consolidates all that feedback into clean, unified reviews. It aggregates duplicates, eliminates spam, and preserves what actually matters. Most AI reviewers stop at the PR; Distillbot also reviews every pushed commit, persists findings, and syncs them to GitHub issues and check runs.
About Distill[bot] on Product Hunt
“Stop drowning in duplicate AI code review feedback”
Distill[bot] was submitted on Product Hunt and earned 3 upvotes and 1 comment, placing #142 on the daily leaderboard.
On the analytics side, Distill[bot] competes within Developer Tools, Artificial Intelligence and GitHub — topics that collectively have 1M followers on Product Hunt. The dashboard above tracks how Distill[bot] performed against the three products that launched closest to it on the same day.
Who hunted Distill[bot]?
Distill[bot] was hunted by Chris Jones. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.
For a complete overview of Distill[bot] including community comment highlights and product details, visit the product overview.
Hey Product Hunt! 👋
I'm the solo dev behind Distillbot.
The problem that made me build this:
I run CodeRabbit, Copilot, and Gemini on my repos. Better feedback, in theory. In practice? 50+ comments per PR. Three bots flagging the same missing null check. Two suggesting opposite refactors. Real bugs buried under nits about variable naming.
I started ignoring all of it. Which defeats the point.
What Distillbot does:
Watches your PR for bot activity
Waits for the noise to settle (debounce)
Deduplicates overlapping comments into one consolidated review
Splits findings into errors / warnings / suggestions with source attribution
Collapses the original bot spam behind a consolidation notice
Leaves human discussion threads alone
Late bot comments? Posted as follow-ups so nothing gets lost
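The consolidation steps above (dedupe overlapping comments, keep source attribution, bucket by severity) can be sketched roughly like this. This is a toy model, not Distillbot's actual implementation: the `BotComment` shape and the text-normalization fingerprint are assumptions, and the debounce step is omitted.

```python
import hashlib
import re
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class BotComment:
    bot: str        # e.g. "coderabbit", "copilot", "gemini"
    path: str       # file the comment targets
    line: int       # line the comment targets
    severity: str   # "error" | "warning" | "suggestion"
    body: str

def fingerprint(c: BotComment) -> str:
    # Normalize the body so near-identical findings from different bots
    # collapse to the same key (toy heuristic: lowercase, strip
    # punctuation, collapse whitespace runs).
    norm = re.sub(r"[^a-z0-9 ]", "", c.body.lower())
    norm = re.sub(r"\s+", " ", norm).strip()
    return hashlib.sha1(f"{c.path}:{c.line}:{norm}".encode()).hexdigest()

def consolidate(comments: list[BotComment]) -> dict[str, list[dict]]:
    """Dedupe overlapping bot comments into one consolidated review,
    split by severity, with source attribution preserved."""
    groups: dict[str, list[BotComment]] = defaultdict(list)
    for c in comments:
        groups[fingerprint(c)].append(c)

    review: dict[str, list[dict]] = {"error": [], "warning": [], "suggestion": []}
    for dupes in groups.values():
        first = dupes[0]
        review[first.severity].append({
            "path": first.path,
            "line": first.line,
            "body": first.body,
            "sources": sorted({d.bot for d in dupes}),  # which bots flagged it
        })
    return review
```

With this sketch, three bots flagging the same missing null check on the same line collapse into a single finding whose `sources` list names all three.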
The bonus most reviewers miss:
Every other AI reviewer stops at the PR. Distillbot also reviews every pushed commit — findings land in a Commit Review Inbox and sync to a GitHub issue + check run, so post-merge regressions actually get tracked instead of disappearing into git history.
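Syncing findings to a check run would go through GitHub's documented Checks API (`POST /repos/{owner}/{repo}/check-runs`). The payload builder below is a hypothetical sketch of what that sync might look like, not Distillbot's code: the field names follow the public API, but the check name, severity-to-annotation mapping, and finding shape are assumptions.

```python
def build_check_run_payload(head_sha: str, findings: list[dict]) -> dict:
    """Build a request body for GitHub's
    POST /repos/{owner}/{repo}/check-runs endpoint."""
    errors = [f for f in findings if f["severity"] == "error"]
    conclusion = "failure" if errors else "success"
    annotations = [
        {
            "path": f["path"],
            "start_line": f["line"],
            "end_line": f["line"],
            # GitHub accepts "notice", "warning", or "failure" here
            "annotation_level": {
                "error": "failure",
                "warning": "warning",
            }.get(f["severity"], "notice"),
            "message": f["body"],
        }
        for f in findings[:50]  # the API caps annotations at 50 per request
    ]
    return {
        "name": "distillbot-commit-review",  # hypothetical check name
        "head_sha": head_sha,
        "status": "completed",
        "conclusion": conclusion,
        "output": {
            "title": f"{len(findings)} finding(s) on {head_sha[:7]}",
            "summary": f"{len(errors)} error(s), {len(findings) - len(errors)} other",
            "annotations": annotations,
        },
    }
```

The resulting dict would be POSTed to `https://api.github.com/repos/{owner}/{repo}/check-runs` with an app installation token; any error-level finding fails the check so the commit surfaces in CI.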
Setup: Install the GitHub App, flip on the repos you want, done. Zero config required.
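"Zero config" on the user's side still means GitHub delivers webhooks to the app, and the one check every GitHub App handler needs is validating the documented `X-Hub-Signature-256` header (an HMAC-SHA-256 of the raw body, keyed by the webhook secret). A minimal sketch, with a hypothetical secret:

```python
import hashlib
import hmac

def verify_signature(payload: bytes, secret: str, signature_header: str) -> bool:
    """Validate GitHub's X-Hub-Signature-256 webhook header:
    HMAC-SHA-256 of the raw request body, keyed by the webhook secret."""
    expected = "sha256=" + hmac.new(
        secret.encode(), payload, hashlib.sha256
    ).hexdigest()
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(expected, signature_header)
```

Requests failing this check are rejected before any PR or commit processing happens.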
Would love feedback — especially from teams running 2+ AI reviewers. What noise patterns drive you nuts? What would you want consolidated that I'm not catching yet?
Happy to answer anything in the comments today. 🙏