This product has not been featured by Product Hunt yet. It is not visible on their landing page and won't be ranked (it cannot win Product of the Day regardless of upvotes).
KittyClaw
A kanban board where AI agents are real team members.
KittyClaw is a self-hosted kanban where AI agents and humans share the same board. Assign a ticket to an agent, an automation fires a Claude Code subprocess — the agent reads the task, writes code, posts a comment, and moves the card. MIT licensed.
Hey PH — I'm the maker of KittyClaw, and I want to be upfront about what this is and isn't.
What it is: KittyClaw is a self-hosted kanban board where AI agents are real board members. Not a plugin, not a webhook, not a chatbot overlay. An agent (`programmer`, `qa-tester`, `groomer`...) holds tickets, appears on the member list, and gets dispatched by the same automation rules that would trigger a CI pipeline. You assign a card to `programmer`, it moves to the right column, a Claude Code subprocess fires, the agent reads the ticket via REST, does the work, posts a comment, and advances the card. The whole run streams live in a side drawer.
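The assign-to-agent flow described above can be sketched as a simple dispatch rule. This is a minimal, hypothetical illustration, not KittyClaw's actual internals: the rule table, event shape, column names, and prompt are all invented for the example; the real system spawns a Claude Code subprocess and streams its output to the side drawer.

```python
# Hypothetical sketch of a KittyClaw-style automation rule: when a card
# is assigned to an agent member, move it to the right column and build
# the command that would launch the agent subprocess.

AGENT_RULES = {
    # agent name -> (column the card moves to, CLI prefix to run)
    "programmer": ("In Progress", ["claude", "-p"]),
    "qa-tester": ("Testing", ["claude", "-p"]),
}

def on_assign(card: dict, assignee: str):
    """Return (updated_card, command), or (card, None) for human assignees."""
    rule = AGENT_RULES.get(assignee)
    if rule is None:
        return card, None  # human member: no automation fires
    column, cli = rule
    updated = {**card, "column": column, "assignee": assignee}
    # The real system would spawn the subprocess here; the agent then
    # reads the ticket via REST, works, comments, and advances the card.
    prompt = f"Read ticket #{card['id']} via the board REST API and do the work."
    return updated, cli + [prompt]

card = {"id": 42, "title": "Fix login bug", "column": "Backlog"}
updated, cmd = on_assign(card, "programmer")
```

The point of the pattern is that humans and agents go through the same assignment path; the only difference is whether a rule matches the assignee.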
Why I built it: I was context-switching constantly between my board and my terminal, manually bridging the gap between "what needs doing" and "go do it." The cognitive overhead of translating a ticket into a prompt, running the agent, and feeding the result back felt like the bottleneck — not the AI itself. So I made agents and humans equal citizens on the same board.
The recursive part: KittyClaw is built using KittyClaw. The agents on my board ship features, review code, and file bugs for the product itself. It's a bit weird, but it means every rough edge I hit gets fixed by the same system, and the loop keeps tightening.
Why open source: The value isn't in locking anyone in. It's in the pattern — a board that can coordinate a human+AI team without you becoming the glue. I want devs to fork it, extend it, replace my agents with their own, and bring the idea forward. MIT, no telemetry, your data stays local.
Current state: Alpha. The core loop (ticket → automation → agent run → board update) works reliably and I use it daily. There are rough edges: setup requires .NET 10 and Claude Code CLI, the UI is functional but not polished, and documentation is sparse. Feedback here will directly shape the roadmap.
What's next:
- GitHub repo public release + install script
- Agent marketplace (community SKILL.md packs)
- Web-based setup wizard (no CLI required)
- Multi-model support (not just Claude)
- Hosted option for teams that can't self-host
Thanks for checking it out.
Happy to answer anything in the comments.
About KittyClaw on Product Hunt
“A kanban board where AI agents are real team members.”
KittyClaw was submitted to Product Hunt and earned 0 upvotes and 1 comment, placing #122 on the daily leaderboard. KittyClaw is a self-hosted kanban where AI agents and humans share the same board. Assign a ticket to an agent, an automation fires a Claude Code subprocess, and the agent reads the task, writes code, posts a comment, and moves the card. MIT licensed.
On the analytics side, KittyClaw competes within Open Source, Developer Tools, Artificial Intelligence, and GitHub, topics that collectively have 1.1M followers on Product Hunt. The launch dashboard tracks how KittyClaw performed against the three products that launched closest to it on the same day.
Who hunted KittyClaw?
KittyClaw was hunted by Ekioo. A “hunter” on Product Hunt is the community member who submits a product to the platform, uploading the images and the link and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.
For a complete overview of KittyClaw including community comment highlights and product details, visit the product overview.