Picture this: you've built a solid AI agent. It handles customer inquiries, looks up orders, escalates complex issues. Then your CRM vendor releases a new API. Or your team decides to switch from OpenAI to Claude. Or you want to plug in a real-time knowledge base that didn't exist six months ago.
Suddenly you're back in the integration trenches. Rewriting connectors. Debugging auth flows. Reconciling incompatible data formats between systems that have never heard of each other. The agent itself is great — it's the plumbing that's killing you.
This is the integration problem that has quietly strangled enterprise AI adoption for the last several years. And it's exactly the problem that the Model Context Protocol (MCP) was designed to fix.
The Integration Tax Every AI Team Was Paying
Before MCP, connecting an AI agent to external tools required custom glue for every combination. OpenAI had function calling. ChatGPT had its plugin system. Anthropic had its own tool use spec. Every framework — LangChain, AutoGen, CrewAI — had slightly different conventions for how agents described and invoked tools.
Want to connect your agent to Salesforce, Jira, GitHub, and a custom internal API? You were writing four different connectors. Then if you switched LLM providers, you were potentially rewriting all four in a different format. And when one of those vendors updated their API? You found out in production, the hard way.
The ChatGPT plugin era is a good case study here. When OpenAI launched plugins in early 2023, it felt like the future: agents that could browse the web, query APIs, run code. But the architecture was fundamentally closed. Plugins only worked inside ChatGPT. Building a plugin didn't help you if you were using Claude or building your own agent. And when OpenAI decided to deprecate plugins in favor of GPTs in 2024, all that integration work became technical debt overnight.
Proprietary plugin systems are structurally limited: they optimize for the platform owner's ecosystem, not for interoperability. You get integration speed on day one and lock-in on day one thousand.
“MCP gives a single key that can unlock many doors. New MCP servers can be added without changing the client.”
What MCP Actually Is
Anthropic introduced the Model Context Protocol in November 2024 as an open standard for connecting AI assistants to the outside world — data systems, APIs, file systems, databases, business tools — all through a common language.
The simplest analogy: MCP is USB-C for AI. Before USB-C, you needed a different cable for every device. A barrel plug for your laptop, a micro-USB for the Android phone, Lightning for the iPhone, a proprietary connector for the monitor. Every device solved the same problem independently, incompatibly. USB-C said: one connector, one protocol, works everywhere. MCP does the same thing for AI integrations.
Here's how the architecture actually works. MCP has three main primitives:
Tools are actions an agent can take — calling an API, querying a database, triggering a workflow. Each tool has a name, a description, and an input schema. The agent decides when to call a tool based on that description; you don't need to hardcode the decision logic.
Resources are data sources the agent can read — files, documents, database records, structured outputs from other services. Resources are read-only, which keeps the security surface smaller.
Prompts are reusable templates that encapsulate best practices for interacting with a particular service. Think of them as pre-packaged context that helps the agent do the right thing when working with a specific system.
The protocol itself runs over JSON-RPC 2.0. An MCP server exposes tools and resources; an MCP client (your AI agent or the framework powering it) calls into them. The connection is persistent and bidirectional — much cleaner than the stateless request-response pattern of traditional function calling.
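To make the primitives concrete, here is a minimal sketch of what a tool definition and a discovery exchange look like at the JSON-RPC 2.0 level. The message shapes follow the spec in broad strokes (`tools/list`, a `name`/`description`/`inputSchema` per tool), but the specific tool shown is a hypothetical example, not from any real server:

```python
import json

# A tool definition as an MCP server would advertise it: a name, a
# human-readable description the model uses to decide when to call it,
# and a JSON Schema describing the inputs.
lookup_order_tool = {
    "name": "lookup_order",
    "description": "Look up a customer order by its order ID.",
    "inputSchema": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}

# The client discovers tools with a JSON-RPC 2.0 request...
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# ...and the server answers with the tools it currently exposes.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": [lookup_order_tool]},
}

wire = json.dumps(list_response)
tools = json.loads(wire)["result"]["tools"]
print([t["name"] for t in tools])  # → ['lookup_order']
```

The key detail is that the schema travels with the server, not with the agent: the client learns what `lookup_order` takes as input at connect time.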
MCP vs. Traditional Function Calling
If you've been using function calling in OpenAI or Anthropic's APIs, MCP might sound like a rebrand. It isn't.
Function calling requires you to declare all possible functions upfront, at design time, in the prompt. The model can only call functions it was told about at the start of the conversation. This works well for simple, predictable scenarios. It falls apart when you want agents to discover and use new capabilities dynamically, or when you want the same agent to work across multiple platforms.
MCP changes the model. Capabilities aren't declared in the prompt — they're discovered at runtime through a standardized handshake. An MCP server advertises what it can do, and the client negotiates which capabilities to use. This means:
- Your agent can connect to an MCP server it's never seen before and immediately understand what it offers
- New tools can be added to an MCP server without touching the agent code
- The same agent codebase works with any MCP-compatible tool, regardless of which LLM is powering it
The distinction matters most at scale. A function-calling system with twenty integrations requires twenty places to update when something changes. An MCP-based system requires one: the server that owns that integration.
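The "one place to update" property falls out of the discovery model. A rough stdlib-only sketch, with hypothetical server and tool names: the client aggregates tool definitions from whatever servers it connects to, so adding a tool to a server changes the registry without touching agent code.

```python
# Two toy "servers" standing in for MCP servers; the tool names and
# shapes here are illustrative, not from any real deployment.
crm_server = {"tools": [{"name": "get_customer", "description": "Fetch a customer record."}]}
tickets_server = {"tools": [{"name": "create_ticket", "description": "Open a support ticket."}]}

def discover(servers):
    """Aggregate tool definitions at runtime: the MCP pattern.

    With provider-specific function calling, each agent ships its own
    copy of every schema; here, the definitions are pulled from the
    servers when the client connects.
    """
    registry = {}
    for server in servers:
        for tool in server["tools"]:
            registry[tool["name"]] = tool
    return registry

registry = discover([crm_server, tickets_server])
print(sorted(registry))  # → ['create_ticket', 'get_customer']
```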
Why Every Major AI Platform Adopted It
Here's the thing that tells you MCP won: OpenAI adopted it.
Anthropic announced MCP in November 2024. By March 2025 — less than five months later — OpenAI officially integrated it across their products, including the ChatGPT desktop app. Google DeepMind's Demis Hassabis confirmed Gemini support the following month. Microsoft announced GitHub and Windows 11 integration at Build 2025. AWS joined the governance steering committee.
These are direct competitors. They don't agree on model architectures, pricing, or safety philosophies. The fact that all of them adopted a standard originally created by one of their main rivals is about as strong a signal as the industry can send. It's the equivalent of Android and iOS agreeing to use the same charging standard — except they actually did it, and they did it in under a year.
In December 2025, Anthropic donated MCP to the Linux Foundation, where it now lives under the newly formed Agentic AI Foundation (AAIF) alongside contributions from Block and OpenAI. Once a protocol moves to neutral governance, it's a standard. Not a vendor's API. A standard.
The adoption curve reinforces this. In November 2024, there were roughly 100,000 total MCP server downloads. By April 2025, that number had crossed 8 million. There are now over 10,000 public MCP servers available — covering everything from GitHub, Slack, and Google Drive to Salesforce, PostgreSQL, and custom enterprise systems.
The Fragmentation Problem MCP Solves
To appreciate what MCP changes, you need to understand how bad the fragmentation was.
A mid-sized enterprise building AI agents in 2023 faced something like this: their agent needed to query a CRM, read from a knowledge base, create tickets in a project management tool, and send notifications over a messaging platform. That's four integrations. Each platform had its own authentication mechanism, data schema, error format, and rate limiting behavior. Each LLM provider had a slightly different way of expressing tool definitions. And when they wanted to add a second agent for a different use case, they rebuilt much of this from scratch.
The engineering overhead wasn't the worst part. The worst part was the brittleness. Every integration was a potential point of failure. Every API update was a surprise breaking change. Every new LLM model potentially required connector rewrites. Teams were spending a disproportionate share of engineering time on integration maintenance rather than making the agents actually smarter.
MCP collapses this complexity. Build an MCP server for your CRM once. That server is now usable by any MCP-compatible AI client — Claude, ChatGPT, your custom LangGraph agent, whatever you're building. Change the underlying CRM API? Update the server, not the agents. Switch LLM providers? The agents keep working because the tool interface hasn't changed.
- Define your integration once: build an MCP server, not a one-off connector
- Any MCP-compatible agent can use it without modification
- Authentication, authorization, and error handling live in the server — not scattered across agent code
- Switch LLM providers without rewriting tool connectors
- Add new capabilities by adding new MCP servers — agents discover them automatically
- Community MCP servers cover most common integrations already
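The "build once" claim above can be sketched without any framework. This is a deliberately minimal, in-process stand-in for an MCP server loop, one place that owns the integration, its auth, and its error handling; a real server would use an official MCP SDK and a stdio or HTTP transport, and the CRM tool here is hypothetical:

```python
import json

# The single place that owns the CRM integration. Any MCP-compatible
# client can drive this same handler, regardless of which LLM it uses.
TOOLS = {
    "get_customer": {
        "description": "Fetch a customer record by ID.",
        "handler": lambda customer_id: {"id": customer_id, "tier": "gold"},
    },
}

def handle(request_json: str) -> str:
    """Dispatch one JSON-RPC 2.0 request against the tool registry."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": n, "description": t["description"]}
                            for n, t in TOOLS.items()]}
    elif req["method"] == "tools/call":
        params = req["params"]
        result = TOOLS[params["name"]]["handler"](**params["arguments"])
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

resp = handle(json.dumps({
    "jsonrpc": "2.0", "id": 7, "method": "tools/call",
    "params": {"name": "get_customer", "arguments": {"customer_id": "c-42"}},
}))
print(resp)
```

If the underlying CRM API changes, only the `handler` bodies change; every agent that calls `get_customer` keeps working.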
What MCP Means for Agent Testing
Here's where things get interesting from a quality engineering perspective.
The shift to MCP doesn't eliminate the need to test your agents — if anything, it raises the stakes. When your agent can dynamically discover and call any MCP server, the surface area for testing expands. You're no longer just testing whether the agent gives the right answer given a fixed set of tools. You're testing whether it chooses the right tools from a potentially large set, whether it handles tool failures gracefully, whether the tool calls it makes are safe and appropriate for the given context.
This is why scenario-based testing becomes even more critical in an MCP-first architecture. You need to simulate realistic conversations where the agent has access to multiple tools and verify that it uses them correctly, in the right order, with the right inputs. A customer asking about a billing discrepancy shouldn't trigger an agent to call an account-deletion tool — even if that tool technically exists in the MCP server and the agent technically has permission.
Prompt management becomes more important, not less. When tool capabilities are discovered dynamically, the agent's instructions are your last line of guardrails. A prompt that worked fine with three hardcoded functions can behave unpredictably when the agent suddenly has thirty tools available and no clear rules about which ones are appropriate in a given context.
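One way to phrase the billing-discrepancy example above as an executable check: record every tool invocation during a scenario run and assert on the call sequence, not just the final answer. Everything here is a hypothetical stand-in, the "agent" is a stub where your real LLM-driven agent would sit, and the tool names are illustrative:

```python
class RecordingToolLayer:
    """Captures every tool call an agent makes during a test run."""
    def __init__(self):
        self.calls = []

    def call(self, name, **kwargs):
        self.calls.append((name, kwargs))
        return {"ok": True}  # stubbed tool result

def toy_agent(message, tools):
    # Stand-in policy; in practice this is your LLM-driven agent
    # choosing tools from what the MCP servers advertise.
    if "billing" in message.lower():
        tools.call("lookup_invoice", customer_id="c-42")
    return "Here's what I found about your bill."

tools = RecordingToolLayer()
toy_agent("I have a billing discrepancy on my last invoice", tools)
called = [name for name, _ in tools.calls]

assert "delete_account" not in called  # the safety property
assert called == ["lookup_invoice"]    # the expected tool path
print("scenario passed")
```

The pattern generalizes: safety properties ("never call X in context Y") and expected paths ("call A before B, with these inputs") both become plain assertions over the recorded call list.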
“2026 is shaping up to be the year for enterprise-ready MCP adoption, as organizations move from experimentation to production-grade deployments with proper governance and testing.”
How Chanl Built on MCP
When we designed how Chanl agents connect to external data and tools, the choice to build on MCP was straightforward. The alternative — a proprietary integration layer — would have meant asking customers to learn our specific connector format, creating lock-in for them and a growing maintenance burden for us.
Instead, Chanl agents communicate with the outside world through MCP. This means any MCP server your team has built — or any of the thousands of community servers already available — can be connected to your agent without custom integration work. Your Salesforce data, your knowledge base, your internal APIs: if there's an MCP server for it, your agent can use it.
When you configure a new agent on Chanl, you're selecting MCP servers rather than writing integration code. The tools layer handles the MCP client logic — the agent discovers what capabilities are available and uses them.
This also makes testing much cleaner. Because every tool call goes through a standard interface, Chanl can capture and inspect every tool invocation during scenario test runs. You can see exactly which tools the agent called, what inputs it passed, and whether the outputs were handled correctly. That kind of observability is much harder when tool integrations are scattered across custom connector code.
Security and Governance: What You Need to Know
MCP's early months weren't without growing pains. Security researchers in April 2025 identified several potential issues: prompt injection through tool descriptions, over-permissioned tool scopes that could allow data exfiltration, and lookalike tools that could silently substitute for trusted ones.
These were real concerns, and the community took them seriously. By late 2025, the MCP specification incorporated CIMD (Client-Initiated Metadata Discovery), which significantly improved enterprise security posture. OAuth 2.0 became the standard authentication mechanism, and tooling for auditing MCP server behavior became widely available.
For enterprise deployments, the practical guidance is straightforward:
Scope permissions tightly. Each MCP server should have the minimum permissions needed for its stated purpose. An order-lookup server doesn't need write access to customer records. Review what each server claims it can do before connecting it to a production agent.
Audit tool call logs. Because MCP standardizes the interface, you can build consistent logging across all tool calls regardless of which server they went to. This is a major compliance advantage over the patchwork of custom connectors it replaces.
Test with adversarial inputs. Prompt injection through tool outputs is a real attack vector. Your scenario testing should include cases where external data returned by a tool contains instructions designed to manipulate the agent.
Stick to well-governed servers. The MCP ecosystem has community-vetted servers, Anthropic-maintained reference implementations, and increasingly, enterprise-audited versions from vendors like Salesforce, Atlassian, and GitHub. These are meaningfully safer than a random server you found on GitHub last week.
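The permission-scoping advice above can be enforced mechanically on the client side: even if a server advertises many tools, the agent only ever sees an allowlisted subset. A minimal sketch, with illustrative tool names:

```python
# What a (hypothetical) server advertises vs. what this particular
# agent is allowed to see. The allowlist is the minimum needed for
# the agent's stated purpose: order lookups only.
ADVERTISED = ["lookup_order", "update_customer", "delete_account"]
ALLOWLIST = {"lookup_order"}

def scoped_tools(advertised, allowlist):
    """Filter a server's advertised tools down to the approved set."""
    return [t for t in advertised if t in allowlist]

visible = scoped_tools(ADVERTISED, ALLOWLIST)
assert "delete_account" not in visible
print(visible)  # → ['lookup_order']
```

Because the filter sits at the standardized MCP boundary, the same mechanism works for every server you connect, and the allowlist itself becomes a reviewable, auditable artifact.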
A Practical Timeline for MCP Adoption
November 2024
MCP announced by Anthropic
Open standard released with SDKs for Python, TypeScript, C#, and Java. Initial ecosystem of a few dozen servers.
March 2025
OpenAI adopts MCP
ChatGPT desktop app integrates MCP client support. The standard becomes platform-agnostic.
April 2025
Google and Microsoft join
Gemini confirms support, GitHub and Windows 11 preview MCP integration at Build 2025. 8M+ downloads.
Late 2025
Security hardening
CIMD incorporated into spec. Enterprise security features formalized. OAuth 2.0 standardized.
December 2025
Linux Foundation governance
Anthropic donates MCP to Agentic AI Foundation. Neutral governance removes vendor lock-in concerns.
2026 and beyond
Enterprise adoption wave
Production deployments at scale. Multi-agent MCP orchestration. 10,000+ public servers. Standard expected in most enterprise AI tooling.
What to Build Now
If you're operating AI agents in production today, here's how to think about MCP practically.
Already have agents in production? You don't need to rebuild everything. Identify your highest-maintenance integrations — the ones that break most often when upstream APIs change — and migrate those to MCP servers first. The value shows up quickly in reduced maintenance overhead.
Starting fresh? Design around MCP from day one. Instead of writing custom tool definitions into your prompts, build or adopt MCP servers for each capability you need. Your agent code stays clean; the integration details live in the servers.
Evaluating AI platforms? MCP support is now a legitimate evaluation criterion. A platform that doesn't support MCP is betting that its proprietary integration format will outlast the ecosystem — which is not a good bet given where adoption is heading.
Building toward multi-agent systems? MCP's role in multi-agent orchestration is still evolving, but the direction is clear. Agents coordinating with other agents through standardized interfaces is exactly where the spec is heading. Getting comfortable with MCP now means you're on the right foundation when that becomes mainstream.
If you're also tracking call analytics or running scorecard evaluations on your agents, the observability story gets significantly better on MCP — because every tool call goes through a standard interface that's easy to log, inspect, and audit.
See how Chanl agents connect to your tools via MCP
Connect any MCP server to your agents without custom integration code. Explore Chanl's MCP integration layer and start building on a protocol that won't lock you in.
Explore Chanl MCP
The Bigger Picture
MCP matters beyond the immediate engineering convenience. It represents a shift in how the AI industry is choosing to build.
The era of proprietary, platform-specific AI integrations is ending. Not because any one company decided it should — but because the fragmentation was genuinely hurting everyone. It was slowing down enterprises trying to deploy agents. It was exhausting developers who were reinventing integration wheels for every new platform. It was creating a world where your investment in AI tooling was perpetually at risk from vendor decisions you couldn't control.
Open protocols win when the pain of fragmentation exceeds the competitive advantage of lock-in. That threshold was crossed sometime in 2025, and MCP was the standard ready to fill the gap.
For teams building AI agents, the implication is straightforward: integrations are no longer a competitive moat, and they're no longer an engineering sink. They're a solved problem — as long as you build on MCP.
What remains a competitive advantage is everything on top of that: how well your agents are prompted, how rigorously they're tested, how closely their behavior is monitored in production, and how quickly your team can iterate when they find something that needs to improve.
The plumbing is standardized. Now go build something good with it.
- Introducing the Model Context Protocol — Anthropic
- Model Context Protocol — Wikipedia
- Why the Model Context Protocol Won — The New Stack
- A Year of MCP: From Internal Experiment to Industry Standard — Pento
- One Year of MCP: November 2025 Spec Release — Model Context Protocol Blog
- Donating the Model Context Protocol and establishing the Agentic AI Foundation — Anthropic
- Linux Foundation Announces the Formation of the Agentic AI Foundation (AAIF) — Linux Foundation
- MCP Model Context Protocol: Complete Guide for Enterprise Adoption 2025 — Deepak Gupta
- Model Context Protocol (MCP): A Comprehensive Introduction for Developers — Stytch
- Beyond Plugins: How the Model Context Protocol Is Changing ChatGPT — Dataslayer
- Model Context Protocol Comparison: MCP vs Function Calling, Plugins, APIs — Ikangai
- AI Agents vs. Model Context Protocol: Choosing the Best Approach — Medium
- How Model Context Protocol Opens AI's Second Act — iManage
- The Model Context Protocol: How MCP Standardisation Enables Production AI Agent Deployment — SoftwareSeni
- What Is MCP, and Why Is Everyone Suddenly Talking About It? — Hugging Face
- 2026: The Year for Enterprise-Ready MCP Adoption — CData
- Why Model Context Protocol Is Suddenly on Every Executive Agenda — CIO
- What Is the Model Context Protocol? How Will It Enable the Future of Agentic AI? — Equinix
- Code Execution with MCP: Building More Efficient AI Agents — Anthropic Engineering
- Model Context Protocol for Enterprise AI Integration — Strategy.com
Dean Grover
Co-founder
Building the platform for AI agents at Chanl — tools, testing, and observability for customer experience.