This product has not been featured by Product Hunt yet.
It will not appear on their landing page and won't be ranked (it cannot win Product of the Day regardless of upvotes).

[Launch-day comparison dashboard — panels: Product upvotes vs the next 3 · Product comments vs the next 3 · Product upvote speed vs the next 3 · Product upvotes and comments · Product vs the next 3. Data not yet loaded.]

AgentShield

Prompt injection detection API for AI agents

AgentShield is a prompt injection classifier that sits between untrusted input and your AI agent. One API call classifies any text — user messages, RAG documents, tool outputs — and returns a verdict before it reaches the model. Think of it as a WAF for LLMs.

Why we built it: Johns Hopkins researchers hijacked Claude Code, Gemini CLI, and GitHub Copilot through prompt injection. The three biggest AI companies couldn't stop it. We built an external security layer that does.
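The "verdict before it reaches the model" control flow can be sketched in a few lines. Everything below is illustrative: the patterns, field names, and guard function are stand-ins for this sketch, not AgentShield's real API or detection model.

```python
import re

# Toy stand-in for the external classifier call described above.
# AgentShield's real classifier is not public; these regexes only
# illustrate the WAF-style gating pattern, not production detection.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now",
    r"system prompt",
]

def classify(text: str) -> dict:
    """Return a verdict for one piece of untrusted text."""
    flagged = any(re.search(p, text, re.IGNORECASE) for p in INJECTION_PATTERNS)
    return {"verdict": "block" if flagged else "allow"}

def guard(untrusted: str) -> str:
    """Gate input before it ever reaches the LLM, like a WAF rule."""
    if classify(untrusted)["verdict"] == "block":
        raise ValueError("prompt injection suspected; input blocked")
    return untrusted
```

The design point is that the check runs outside the model: the agent only ever sees input that the classifier has already cleared.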

Top comment

Hey Product Hunt! 👋 I'm Daniel, the builder behind AgentShield.

The idea came from a simple observation: if you're deploying AI agents that process untrusted input — user messages, documents, tool responses — you need an external security boundary. The model can't protect itself from prompt injection, just like a web app can't be its own firewall.

The tipping point was when Johns Hopkins researchers hijacked Claude Code, Gemini CLI, and GitHub Copilot through trivial prompt injection attacks. All three vendors paid bug bounties. None published advisories. I figured if the biggest AI companies can't solve this at the model level, there needs to be a dedicated layer.

AgentShield classifies every input before it reaches your model:

- Direct injection ("ignore previous instructions")
- Indirect injection (malicious instructions hidden in documents or tool outputs)
- Social engineering (fake system messages, authority impersonation)
- Encoding tricks (base64, homoglyphs, invisible Unicode)

The free tier gives you 100 requests/day — enough to try it in your pipeline. If you need to keep data on-premises, there's a self-hosted Docker image.

I'd love feedback on what use cases matter most to you. What are your AI agents processing that worries you?
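The encoding-trick category lends itself to simple illustrative checks. The snippet below is a toy sketch, not AgentShield's actual detection logic (which is not public): it flags invisible Unicode format characters and long tokens that decode cleanly from base64 to printable ASCII.

```python
import base64
import unicodedata

def has_invisible_unicode(text: str) -> bool:
    # Zero-width and other format characters (Unicode category "Cf")
    # can hide instructions from human reviewers.
    return any(unicodedata.category(ch) == "Cf" for ch in text)

def hides_base64_payload(text: str) -> bool:
    # Long tokens that decode cleanly to printable ASCII may be
    # base64-smuggled instructions.
    for token in text.split():
        if len(token) >= 16:
            try:
                decoded = base64.b64decode(token, validate=True)
            except ValueError:
                continue
            if decoded.isascii() and decoded.decode("ascii").isprintable():
                return True
    return False
```

Real classifiers presumably combine many such signals with a trained model; these two heuristics are only meant to make the attack classes concrete.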

About AgentShield on Product Hunt

Prompt injection detection API for AI agents

AgentShield was submitted on Product Hunt, where it earned 0 upvotes and 1 comment, placing #65 on the daily leaderboard.

On the analytics side, AgentShield competes within Developer Tools, Artificial Intelligence, GitHub and Security — topics that collectively have 1M followers on Product Hunt. The dashboard above tracks how AgentShield performed against the three products that launched closest to it on the same day.

Who hunted AgentShield?

AgentShield was self-hunted under its own AgentShield account. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images and link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.

For a complete overview of AgentShield including community comment highlights and product details, visit the product overview.