Perf is an AI correction layer for teams shipping AI products. It sits between your app and your models, checks every output against your rules, and fixes or blocks issues before users see them. We’re opening a limited closed beta for teams dealing with hallucinations, incorrectly parsed JSON, policy violations, or unreliable AI responses.
Pretty interesting product, thank you for sharing!
I had to build a few systems like this, and I'm curious how you're handling initial response times if the outputs need to be classified/validated by another LLM before being sent to the user. Does it support streaming?
This makes a lot of sense, adding a verification layer feels like a necessary step as AI moves into production.
Feels like the tricky part is defining “correct” reliably across different use cases, not just catching obvious errors.
About Perf on Product Hunt
“Verify and correct AI outputs before users see them”
Perf launched on Product Hunt on May 5th, 2026 and earned 56 upvotes and 3 comments, placing #72 on the daily leaderboard.
Perf was featured in Developer Tools (512.4k followers), Artificial Intelligence (468.5k followers) and Pitch NYC (2 followers) on Product Hunt. Together, these topics include over 161.7k products, making this a competitive space to launch in.
Who hunted Perf?
Perf was hunted by Rajiv Ayyangar. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community.
Hey Product Hunt 👋
I’m Shreyas, founder of Perf.
AI products have improved a lot over the past two years, but they still make mistakes. They hallucinate, return incorrect data, break expected formats, or say things that do not match a company’s rules.
Most teams deal with this by flagging the issue for later, retrying the request, or blocking the output.
We wondered: what if AI systems could be corrected before the user ever saw the mistake?
That’s what we’re building with Perf.
Perf is a verification and correction layer that sits between your AI models and your app. It checks AI outputs against your rules, catches problems, and then corrects, blocks, or escalates them before they reach the customer.
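To make the idea concrete, here is a minimal sketch of what a check-and-correct layer like this might look like. Perf's actual API is not public, so every name below (`Verdict`, `check_json`, `guard`) is a hypothetical illustration of the pattern, not Perf's implementation: wrap the model call, run each output through a list of rules, and return a pass, a corrected output, or a block before anything reaches the user.

```python
# Illustrative sketch only — all names here are assumptions, not Perf's API.
import json
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    action: str   # "pass", "correct", "block", or "escalate"
    output: str   # the (possibly corrected) output
    reason: str = ""

def check_json(output: str) -> Verdict:
    """Example rule: the model must return valid JSON; repair trivial issues."""
    try:
        json.loads(output)
        return Verdict("pass", output)
    except json.JSONDecodeError:
        repaired = output.strip().strip("`")  # e.g. strip stray markdown fences
        try:
            json.loads(repaired)
            return Verdict("correct", repaired, "stripped code fences")
        except json.JSONDecodeError:
            return Verdict("block", output, "unparseable JSON")

def guard(model_call: Callable[[str], str], rules) -> Callable[[str], Verdict]:
    """Wrap a model call so every output is checked before the caller sees it."""
    def guarded(prompt: str) -> Verdict:
        raw = model_call(prompt)
        for rule in rules:
            verdict = rule(raw)
            if verdict.action != "pass":
                return verdict
            raw = verdict.output
        return Verdict("pass", raw)
    return guarded

# Usage with a stubbed model that wraps its JSON in markdown fences:
fake_model = lambda prompt: '```{"answer": 42}```'
safe_model = guard(fake_model, [check_json])
print(safe_model("What is 6*7, as JSON?").action)  # "correct"
```

A real layer would presumably add LLM-based rules (for hallucination or policy checks) alongside deterministic ones like the JSON check above, and an "escalate" path for outputs no rule can safely fix.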
We’re launching today as a closed beta. The product is designed for teams building AI products where accuracy, reliability, and trust matter.
Product Hunt users can apply for early beta access on the site. I’ll personally review the first batch of requests.
Would love feedback from anyone building AI products: what mistakes would you want caught before they reach users?