My Problem with MCP

Most MCP criticism focuses on the implementation: the security model, the transport layer, the protocol design. Shrivu Shankar's "Everything Wrong with MCP" covers this well. These are real problems, but they're fixable.

My concern is more fundamental. I think MCP draws the abstraction boundary in the wrong place, and in doing so disincentivises the thing we actually need: services treating a convenient, universally accessible primary API (including for agents) as a core part of the product, not an optional extra.

Three concerns, one layer

When an agent needs to work with a service, there are three separate concerns:

API access. The ability to call the service programmatically, with the same coverage as the web UI. Give an agent the GraphQL schema, the OpenAPI spec, or just a good set of man pages, and it can reason about the full surface area. It can compose operations and do things the API designer didn't specifically anticipate. It's working with the service, not with someone's curated view of it.

Efficient tooling. A CLI or library that wraps the API, keeping the full surface out of the agent's context window while still providing access to all of it. The tool handles HTTP calls, auth, and serialisation; the agent just invokes commands and reads the output. This is how most effective agent-tool integrations already work.

Domain knowledge. The patterns, gotchas, and multi-step workflows for using a service well. This belongs in something like an agent skill: a reusable, shareable package of knowledge that an agent can load when relevant and discard when it's done.
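To make the middle concern concrete, here's a minimal sketch of "efficient tooling": a thin CLI over a hypothetical REST API. Everything here is invented for illustration (the endpoint, the paths, the `EXAMPLE_TOKEN` variable); the point is the shape, not the specifics. The wrapper owns auth and HTTP, so the agent only ever sees commands and JSON output.

```python
# Sketch of a thin CLI wrapper over a hypothetical REST API. The agent never
# sees HTTP, auth, or the full API surface -- it runs `goals list` or
# `goals show <id>` and reads the output.
import argparse
import json
import os
import urllib.request

API_BASE = "https://api.example.com"  # placeholder endpoint, not a real service

def build_request(path: str) -> urllib.request.Request:
    """Attach auth and headers here, so the agent never has to handle them."""
    return urllib.request.Request(
        f"{API_BASE}{path}",
        headers={
            "Authorization": f"Bearer {os.environ.get('EXAMPLE_TOKEN', '')}",
            "Accept": "application/json",
        },
    )

def main(argv=None):
    parser = argparse.ArgumentParser(prog="goals")
    sub = parser.add_subparsers(dest="command", required=True)
    sub.add_parser("list", help="list your goals")
    show = sub.add_parser("show", help="show one goal")
    show.add_argument("goal_id")
    args = parser.parse_args(argv)

    path = "/v1/goals" if args.command == "list" else f"/v1/goals/{args.goal_id}"
    with urllib.request.urlopen(build_request(path)) as resp:
        print(json.dumps(json.load(resp), indent=2))

if __name__ == "__main__":
    main()
```

The whole help surface of a tool like this fits in a couple of lines of `--help` output, which is exactly what keeps it out of the agent's context window until it's needed.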

MCP bundles all three into one layer. An MCP server is simultaneously the API access, the tooling, and the domain knowledge, all loaded into the agent's context as tool definitions. When the MCP server only exposes a subset of the underlying API (which is the common case), the agent is limited to that subset. The tool definitions are the menu, and the agent can't order off-menu.

Docker's blog post on MCP misconceptions argues that MCP isn't meant to replace APIs; it's about "preconditions, success criteria, and affordances" for agent-tool interaction. Fair enough. But that's domain knowledge, and it belongs in skills, not tool definitions permanently occupying context.

If the API is good enough to back the MCP server, it's good enough for an agent to use directly. The MCP layer just adds indirection, latency, and another thing to maintain. And by offering a "good enough" integration path, it reduces the pressure on services to make their primary APIs accessible in the first place.

The Lattice problem

Take Lattice. It's a people management platform: goals, feedback, 1:1s, performance reviews. If your company uses Lattice, these are the workflows you interact with as an employee.

Lattice has a public API. But to access it, you first need to email request-api-access@lattice.com and justify why you'd like programmatic access to your own data. When you do get access, you'll find an API oriented toward HR administration: syncing employee records with other HRIS systems, managing organisational data. The things you actually use Lattice for as an employee (your goals, your feedback, your 1:1 notes) are not meaningfully exposed.

This seems like a perfect use case for MCP. An agent-friendly interface to Lattice that lets you interact with your own data: update your goals, draft feedback, prepare for 1:1s. The public API doesn't serve you, so surely MCP could fill the gap?

A well-built MCP server could absolutely serve this need. But it would mean Lattice building and maintaining an entirely new interface, with all the context inefficiency and other downsides MCP brings. And the thing is, Lattice already has a full-featured API that does all of this. It's just not the public one.

Lattice has a perfectly serviceable GraphQL API that powers its web client. Every action you can take in the UI (every goal you create, every piece of feedback you write) is backed by this API. It's complete, well-tested, and actively maintained because the product depends on it. But it's private and undocumented.

That API is right there. Open your browser's network panel, click around in Lattice, and you can see every GraphQL query and mutation the application makes. You could introspect the schema, understand the operations, and build an agent that uses it. You shouldn't do this (it would violate their terms of service) but it makes the point: a complete, capable API already exists. The only thing between it and agentic consumption is a policy decision.

The better answer is for services like Lattice to just open up their existing APIs, or a useful subset of them, for employee-level programmatic access. An agent that can read a GraphQL schema can reason about what operations are available and how to compose them. The API is already built. The hard work is done.
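That reasoning step can be shown mechanically. GraphQL's standard introspection query returns every type and operation a schema exposes, and turning the result into a menu of operations is a few lines of code. In this sketch, a hand-written fragment of an introspection response stands in for a real one, and the operation names are invented, not Lattice's:

```python
# Sketch: what "reading a GraphQL schema" looks like mechanically. A real
# client would POST the standard introspection query to the endpoint; here a
# hand-written fragment of an introspection response stands in for the result.
INTROSPECTION_FRAGMENT = {
    "__schema": {
        "mutationType": {
            "fields": [
                {
                    "name": "updateGoal",  # hypothetical operation names
                    "args": [{"name": "goalId"}, {"name": "progress"}],
                },
                {
                    "name": "createFeedback",
                    "args": [{"name": "recipientId"}, {"name": "body"}],
                },
            ]
        }
    }
}

def list_mutations(schema: dict) -> list[str]:
    """Summarise the schema's write operations and their arguments."""
    fields = schema["__schema"]["mutationType"]["fields"]
    return [
        f"{f['name']}({', '.join(a['name'] for a in f['args'])})"
        for f in fields
    ]

for op in list_mutations(INTROSPECTION_FRAGMENT):
    print(op)
# updateGoal(goalId, progress)
# createFeedback(recipientId, body)
```

An agent with this summary can decide which operation fits the task and construct the call itself; no one had to hand-write a tool definition per operation.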

MCP doesn't push in this direction. It incentivises Lattice to build a whole new, separate, purpose-built MCP server. And given the pattern of their existing public API, that MCP server would likely expose the same HR-admin-oriented subset rather than the employee workflows people actually want to automate. We end up with yet another limited interface, more maintenance burden, and the original capable API still locked behind a wall.

Anyone who lived through SOAP will recognise the pattern: a universal integration layer that was going to solve everything, and instead gave us a whole new category of problems.

The auth problem is already solved

The most common defence of MCP is that it standardises authentication. This is genuinely useful. Most SaaS products make it easy to log in through a browser and hard to authenticate programmatically as yourself, and MCP provides a consistent auth flow across services.

But the same solution is available to regular APIs. OAuth 2.0 with PKCE already provides a standard flow for exactly this: authenticate as yourself in the browser, get an access token, use it for API calls. It's the same mechanism that already underpins third-party login flows across the web. The infrastructure exists. Services just choose not to expose it for direct API use.
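For reference, the PKCE piece is small enough to sketch with the standard library alone. Per RFC 7636, the client generates a random code verifier, derives a challenge as `BASE64URL(SHA256(verifier))`, sends the challenge with the authorisation request, and later proves possession by presenting the verifier. The endpoint and client ID below are placeholders, not a real service:

```python
# PKCE sketch (RFC 7636): generate a code_verifier, derive the S256
# code_challenge, and build the browser URL for the authorisation request.
import base64
import hashlib
import secrets
from urllib.parse import urlencode

def make_pkce_pair() -> tuple[str, str]:
    # 32 random bytes -> 43-character URL-safe verifier (padding stripped)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()

# The URL the user opens in a browser to authenticate as themselves.
auth_url = "https://auth.example.com/authorize?" + urlencode({
    "response_type": "code",
    "client_id": "my-local-agent",
    "code_challenge": challenge,
    "code_challenge_method": "S256",
    "redirect_uri": "http://localhost:8400/callback",
})
```

After the browser redirect, the client exchanges the returned code plus the original `verifier` for an access token. Nothing here is MCP-specific; any API could offer this today.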

MCP shouldn't get credit for solving a problem that's already solved at the protocol level. The issue was never a lack of auth standards. It's that services don't prioritise giving users programmatic access to their own data.

API-first in the agentic world

"API-first" has been a design principle for years, but it gains new salience in the agentic world. When your users include AI agents, the gap between "what you can do in the UI" and "what you can do via the API" becomes a much bigger problem. Every feature that only exists in the UI is a feature that agents can't use. Tool vendors should be asking themselves: does our API expose the same functionality as our UI? If not, why not?

And this matters more than people realise. I'd go as far as calling full API access an accessibility concern. Personally, I struggle to comprehend and make sense of large amounts of information presented through a web UI. Now that I know what I can do when I have an agent working with me (synthesising, filtering, connecting things across services), going back to clicking around in a browser feels like a real limitation. Giving users programmatic access to their own data, so they can work with it through whatever tools suit them, isn't a nice-to-have. For some of us it's the difference between being able to consume the information effectively and not being able to at all.

Or try navigating your company's HR policies when they're locked in SharePoint behind a security policy that prevents download or copy-and-paste. An agent could help you find what you need and make sense of it, if only the API let you get the content to the agent in the first place.

What I'd rather see

MCP isn't useless. If a service has no programmatic interface at all, it gives you a standardised way to create one. And for simple integrations where you don't care about context efficiency, a pre-packaged MCP server is convenient. I use a few myself.

But the energy going into building MCP servers for services that already have capable APIs is misdirected. What I'd rather see:

Open up your APIs. If your UI can do it, your API should be able to do it. Provide OAuth flows so users can authenticate programmatically as themselves. Make API access a first-class part of the product, not an afterthought gated behind an email to your sales team.

Build skills, not MCP servers. The knowledge of how to use a service well is valuable. Package it as reusable agent skills that encode the patterns, workflows, and gotchas. Skills are lighter weight, context-efficient, and composable in ways that MCP tool definitions are not. Agents are already good at reading an OpenAPI spec, a GraphQL schema, or a CLI's help output and working out how to use it. They just need access, and, where appropriate, domain knowledge via a skill.

The end state isn't every service implementing a bespoke MCP server. It's good APIs with good documentation, and skills that encode the domain knowledge for agents that need it.

MCP is a detour from getting there.

I'd also echo the recommendations in Eric Holmes' "MCP is dead, long live the CLI".