This product has not been featured by Product Hunt yet. It will not be visible on their landing page and won't be ranked (it cannot win Product of the Day regardless of upvotes).
[Dashboard: Goderash vs the next 3 products — upvotes, comments, and upvote speed. Data still loading.]
Goderash
The audit layer for regulated AI agents
Goderash is the audit layer for regulated AI agents. One decorator wraps any tool call into a SHA-256 hash-chained event — and ships SOC 2, HIPAA, FFIEC, FINRA, and SEC 17a-4 evidence packs an auditor can verify themselves. Apache 2.0. We built this for an in-app financial AI assistant.
Goderash was submitted on Product Hunt and earned 4 upvotes and 1 comment, placing #70 on the daily leaderboard.
On the analytics side, Goderash competes in the Open Source, SaaS, Artificial Intelligence, and GitHub topics, which collectively have 620.1k followers on Product Hunt. The dashboard above tracks how Goderash performed against the three products that launched closest to it on the same day.
Who hunted Goderash?
Goderash was hunted by Atnabon Deressa. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.
Hey Product Hunt 👋
I'm Atnabon Deressa, founder of Goderash.
We built this because of a wall we kept hitting in production: AI agents that work perfectly in the demo, then die in compliance review.
The reason is simple. Every audit trail today is a logging trail. Logs answer "what happened." Auditors want something different — chain-of-custody, tamper-evidence, and the ability to replay history under alternate policies. LLM trace dumps don't survive that conversation.
So we built Goderash.
One decorator wraps any tool call, LLM call, or policy decision and writes a typed, SHA-256 hash-chained event into a per-tenant Postgres ledger. Mutate one byte → the chain breaks → /v1/verify catches it. The auditor verifies it themselves over a single HTTP call. No trust required.
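The mechanism is easier to see in code. Below is a minimal, self-contained sketch of the idea, not Goderash's actual API: the `audited` decorator, the event shape, and the genesis value are all invented for illustration. Each event stores the SHA-256 hash of its predecessor, so recomputing the links exposes any mutation.

```python
import hashlib
import json
from functools import wraps

GENESIS = "0" * 64  # assumed sentinel for the first link in the chain

def event_hash(event: dict, prev_hash: str) -> str:
    """SHA-256 over the previous hash plus a canonical JSON encoding of the event."""
    payload = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def append_event(ledger: list, event: dict) -> None:
    """Link each new event to the hash of the one before it."""
    prev = ledger[-1]["hash"] if ledger else GENESIS
    ledger.append({"event": event, "prev_hash": prev, "hash": event_hash(event, prev)})

def verify(ledger: list) -> bool:
    """Recompute every link; a single mutated byte anywhere breaks the chain."""
    prev = GENESIS
    for entry in ledger:
        if entry["prev_hash"] != prev or entry["hash"] != event_hash(entry["event"], prev):
            return False
        prev = entry["hash"]
    return True

def audited(ledger: list):
    """Hypothetical decorator: record each tool call as a typed, chained event."""
    def wrap(fn):
        @wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            append_event(ledger, {"type": "tool_call", "tool": fn.__name__,
                                  "args": list(args), "result": result})
            return result
        return inner
    return wrap

ledger: list = []

@audited(ledger)
def transfer(amount):
    return f"sent {amount}"

transfer(120)
transfer(45)
assert verify(ledger)               # intact chain verifies
ledger[0]["event"]["args"] = [999]  # tamper with recorded history
assert not verify(ledger)           # recomputation catches it
```

In a real deployment the ledger would live in an append-only Postgres table rather than a Python list, and verification would be exposed over HTTP so an auditor can run it without trusting the operator.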
What ships in v0.1.0:
🔗 Hash-chained event ledger (Postgres, append-only)
🛡️ Runtime guards — permission modes, velocity limits, fraud guards, biometric confirm
🔄 What-If projector — replay history under alternate policies
📦 5 evidence packs — SOC 2, HIPAA, FFIEC, FINRA, SEC Rule 17a-4
🧩 Adapters for LangGraph, OpenAI Assistants, Anthropic, Claude SDK, AutoGen, LangChain
🔓 Apache 2.0 — self-host free, or use our hosted control plane
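The What-If projector in the list above is the least familiar item, so here is a toy sketch of the concept: replay recorded events through a policy that was never live and see which calls it would have blocked. The event and policy shapes (`velocity_limit`, `amount`) are assumptions for illustration, not Goderash's schema.

```python
def project(events: list, policy: dict) -> list:
    """Replay history under an alternate policy; return the decisions it would have made."""
    decisions = []
    spent = 0
    for ev in events:
        if ev["type"] != "tool_call":
            continue  # only tool calls count against the velocity limit here
        allowed = spent + ev["amount"] <= policy["velocity_limit"]
        if allowed:
            spent += ev["amount"]
        decisions.append((ev["tool"], allowed))
    return decisions

history = [
    {"type": "tool_call", "tool": "transfer", "amount": 800},
    {"type": "tool_call", "tool": "transfer", "amount": 500},
]

# Under a stricter 1,000-unit velocity limit, the second transfer would have been blocked.
assert project(history, {"velocity_limit": 1000}) == [("transfer", True), ("transfer", False)]
```

Because the ledger is append-only and typed, the same replay works for any counterfactual policy without touching the recorded history.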
Origin: We built this for an in-app financial AI assistant. To get past a Tier-1 risk-and-compliance committee, we had to build the whole stack. Goderash is that work, made framework-agnostic and open-sourced.
Traction:
→ 4 PyPI + 4 npm packages
→ 1,636 downloads in the first week
→ Production-tested in regulated banking
If you're building agents in fintech, banking, healthtech, insurance, or legal, try it and tell us what breaks. I'll be in the comments all day.
🐙 GitHub: github.com/goderash/goderash
📚 Docs: ai.goderash.com
Built in Addis Ababa 🇪🇹