Don't let Claude and Codex roam free on your customers' SaaS data. Apideck MCP gives AI agents permissioned access to 200+ apps, including Accounting, CRM, HRIS, ATS, and more, through a single endpoint. Scoped read/write permissions and field-level redaction are enforced at the MCP layer. Works with any MCP client (Claude, Cursor, Codex, Windsurf, LangChain, Vercel AI SDK) and agent runtimes like OpenClaw and Hermes. One MCP server. 200+ apps. Production-ready.
Permissioned access is the key piece most MCP servers are missing right now. Everyone's building MCP integrations but nobody's thinking about what happens when an AI agent has unrestricted write access to a customer's CRM. The field-level redaction layer is smart - how granular can you get with the permission scoping?
200+ app integrations through one clean protocol for agents is super useful. Been thinking about this problem for a side project and doing it manually is a pain. Does it support custom endpoints or just the 200 built-in ones?
MCP as connective tissue for agents is still underrated. What Apideck does here — normalized, pre-built access to 200+ apps without custom integration per tool — is exactly the boring-but-critical infrastructure that makes the rest of the agent stack actually work in production. My question is around multi-tenant setups: if I'm building an agent that operates on behalf of different users, does Apideck handle per-user auth token isolation, or does that fall on the developer?
The token optimization here is genuinely clever. Most MCP servers just dump 200+ tools at startup and burn 40K tokens before an agent does anything useful. Reducing that to 4 meta-tools with on-demand discovery feels like the difference between a production-ready agent and a demo toy. One question: how are you handling rate limits during dynamic discovery - does each discovery call count against integration/API quotas, or is there some caching layer involved?
MCP as the integration layer for agents is an underrated unlock, and most agent demos fall apart the moment they need real enterprise data. Curious whether the normalized data models handle write operations too, or just reads? An agent that can update a CRM record or approve a payroll entry autonomously would be a different category of useful.
We only recently solved this problem in an Italian financial project. It’s a pity I didn’t know about you back then - we spent a lot of time on that step.
Really like this direction. Tool access for agents sounds simple until you’re juggling huge tool surfaces, token limits and permissions. The dynamic discovery part is clever. Wonder how performance looks when workflows get long and agents keep discovering more tools on the go.
The data normalization layer is where unified API platforms either win or quietly accumulate debt. Connectivity to 200+ platforms is the easy part — the hard, irreversible decisions are the data model choices: how do you reconcile QuickBooks' chart of accounts structure with NetSuite's multi-subsidiary model or DATEV's tax-first schema into one object? Once customers are in production on your model, you can't break it. Curious how Apideck handles model versioning and whether breaking changes in upstream connectors surface as silent data drift or noisy failures.
Congrats on the launch!
The dynamic tool discovery part is interesting, especially if it keeps the token cost down. For accounting/CRM write actions, do you expose enough context for audit/review before the agent actually writes? That feels like the scary part with MCP + business data.
Oh, love this idea - I used to have a lot of problems comparing MRR between CRM and accounting systems. If I'd had the data in one place back then, it'd have saved me hours and a lot of frustration.
MCP server hitting 200+ apps is the kind of leverage I keep wishing existed for one-off automation. Quick question: how do you handle write actions that are not idempotent across the underlying APIs (e.g. Salesforce vs HubSpot create-contact dedupe)? Do agents see a unified shape or each provider's quirks?
Super excited about this important milestone. Cannot wait to see what our customers are going to build with this. 🚀
Co-maker here 👋
Small thing worth mentioning: every tool call is instrumented through PostHog, with `waitUntil`-flushed batches so events survive Vercel's serverless lifecycle.
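As a rough illustration of that pattern, here's a minimal sketch with the analytics client and Vercel's `waitUntil` replaced by in-memory stand-ins; the real code would use `waitUntil` from `@vercel/functions` and the posthog-node client, but the shape is the same:

```typescript
// Sketch of the instrumentation pattern: buffer analytics events per
// invocation and hand the flush promise to the platform's waitUntil, so
// delivery completes after the response is sent but before the serverless
// instance is frozen. PostHog and Vercel's waitUntil are stubbed here.

const pendingWork: Promise<void>[] = []; // stand-in for Vercel's waitUntil queue
const waitUntil = (p: Promise<void>) => { pendingWork.push(p); };

const delivered: object[] = []; // stand-in for PostHog's ingestion endpoint
const batch: object[] = [];

function captureToolCall(tool: string, latencyMs: number) {
  // Events are buffered instead of awaited inline, so instrumentation
  // adds no latency to the tool call itself.
  batch.push({ event: "mcp_tool_call", tool, latencyMs });
}

async function flushBatch(): Promise<void> {
  // In production this would flush the posthog-node client.
  delivered.push(...batch.splice(0));
}

// --- inside an MCP tool handler ---
function handleToolCall(tool: string) {
  const start = Date.now();
  const result = { ok: true }; // ...actual tool work...
  captureToolCall(tool, Date.now() - start);
  waitUntil(flushBatch()); // flush survives the response lifecycle
  return result;
}

handleToolCall("accounting_invoices_list");
```

The key design point is that the handler never awaits the flush itself: the platform keeps the instance alive until the queued promise settles.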
Which tools agents actually call out of 330, latency per operation, error rates by scope: all of it feeds back into what we prioritize next. That includes workflow tools like `apideck-month-end-close-check` (accounting) that fan out 4 reports in parallel behind one MCP call; analytics tell us when composition above the protocol is actually paying off versus when agents would rather chain the underlying tools themselves.
Hard to build for agents without seeing how they use the surface.
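To make the fan-out idea concrete, here's a hedged sketch of a composed workflow tool; the four report fetchers are hypothetical stand-ins for the underlying unified accounting operations, not Apideck's actual functions:

```typescript
// Sketch of a workflow tool that composes several underlying operations
// behind a single MCP call, in the spirit of the month-end-close check
// described above.

type Report = { name: string; rows: number };

// Hypothetical stand-ins for four unified accounting report operations.
const fetchBalanceSheet = async (): Promise<Report> => ({ name: "balance_sheet", rows: 42 });
const fetchProfitAndLoss = async (): Promise<Report> => ({ name: "profit_and_loss", rows: 18 });
const fetchAgedReceivables = async (): Promise<Report> => ({ name: "aged_receivables", rows: 7 });
const fetchAgedPayables = async (): Promise<Report> => ({ name: "aged_payables", rows: 5 });

// One tool call fans out to all four reports in parallel, so the agent
// pays one round trip instead of chaining four sequential tool calls.
async function monthEndCloseCheck(): Promise<Record<string, Report>> {
  const [balanceSheet, pnl, receivables, payables] = await Promise.all([
    fetchBalanceSheet(),
    fetchProfitAndLoss(),
    fetchAgedReceivables(),
    fetchAgedPayables(),
  ]);
  return { balanceSheet, pnl, receivables, payables };
}
```

The tradeoff the analytics surface: composition like this only earns its keep when agents reliably want all four reports together; otherwise they are better served chaining the underlying tools.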
About Apideck MCP Server on Product Hunt
“Give AI agents access to real-time data across 200+ apps”
Apideck MCP Server launched on Product Hunt on May 13th, 2026 and earned 160 upvotes and 32 comments, placing #7 on the daily leaderboard.
Apideck MCP Server was featured in API (98.1k followers), Open Source (68.4k followers), Developer Tools (512.4k followers) and GitHub (41.2k followers) on Product Hunt. Together, these topics include over 111k products, making this a competitive space to launch in.
Who hunted Apideck MCP Server?
Apideck MCP Server was hunted by fmerian. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community.
Hey Product Hunt 👋
We shipped something different with this one.
Apideck is a Unified API. One integration gives developers access to 20+ accounting systems, 20+ HRIS platforms, file storage, and more. That means our MCP server doesn't expose "QuickBooks invoices"; it exposes "accounting invoices," and the connector fires based on what the user has authorized. Our tool surface is 229 operations and growing.
The harder problem was tokens. Static mode at 229 tools costs 25-40K tokens before an agent reads a single message. We solved it with dynamic tool discovery: 4 meta-tools at startup (~1,300 tokens), and agents discover what they need on demand. It means adding ecommerce and CRM won't cost a single extra token at initialization.
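For readers curious how a meta-tool surface can stay that small, here's a minimal sketch of the dynamic-discovery idea. The meta-tool names, registry shape, and operations below are hypothetical stand-ins, not Apideck's actual API:

```typescript
// Sketch of dynamic tool discovery: instead of registering all 229
// operations at startup, the server advertises a handful of meta-tools
// that let the agent find and invoke operations on demand.

type Operation = {
  name: string;
  api: string; // unified API the operation belongs to
  description: string;
  execute: (args: Record<string, unknown>) => unknown;
};

// A tiny stand-in registry; a real server would generate this from OpenAPI specs.
const registry: Operation[] = [
  {
    name: "accounting_invoices_list",
    api: "accounting",
    description: "List invoices across connected accounting systems",
    execute: () => [{ id: "inv_1", total: 100 }],
  },
  {
    name: "crm_contacts_create",
    api: "crm",
    description: "Create a contact in the connected CRM",
    execute: (args) => ({ id: "con_1", ...args }),
  },
];

// The only tools advertised at initialization (a handful, not 229):
const metaTools = {
  // 1. Enumerate available unified APIs.
  listApis: () => Array.from(new Set(registry.map((op) => op.api))),
  // 2. Keyword search over operation names and descriptions.
  searchTools: (query: string) =>
    registry
      .filter((op) =>
        (op.name + " " + op.description).toLowerCase().includes(query.toLowerCase()),
      )
      .map((op) => op.name),
  // 3. Fetch the full definition for one operation, only when needed.
  getTool: (name: string) => registry.find((op) => op.name === name),
  // 4. Invoke an operation by name.
  executeTool: (name: string, args: Record<string, unknown>) => {
    const op = registry.find((o) => o.name === name);
    if (!op) throw new Error(`Unknown tool: ${name}`);
    return op.execute(args);
  },
};

// The agent pays the token cost of a tool definition only after discovery.
console.log(metaTools.searchTools("invoice")); // ["accounting_invoices_list"]
```

The payoff is that the initialization cost is fixed by the meta-tool count, so adding new unified APIs grows the registry, not the startup token bill.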
The server is live at mcp.apideck.dev/mcp. Code is open source at github.com/apideck-libraries/mcp. Full write-up on the stack, hosting tradeoffs, and the analytics debugging is on our blog.
Happy to answer questions about the OpenAPI-to-MCP generation pipeline, the dynamic discovery architecture, or why we picked Vercel over Cloudflare Workers.