I'll be honest: when I first saw "MCP" everywhere in November 2024, I assumed it was another overhyped acronym that would be dead by February. I've been watching tech hype cycles for twenty years, and most of them are just noise.
I was wrong about this one.
You've probably seen MCP mentioned in tech news lately. Maybe your colleague dropped it in a meeting, or you spotted it in a product update email. Every explanation you've found either assumes you're a developer or drowns you in jargon.
Let's fix that.
MCP stands for Model Context Protocol. It's an open standard that lets AI assistants like Claude and ChatGPT actually connect to your tools and data, instead of just talking about them. Think of it as the difference between an assistant who can describe your calendar and one who can actually read it.
Here's the thing: in less than a year, OpenAI, Google, and Microsoft all adopted this protocol that Anthropic created. That doesn't happen by accident. When competitors agree on a standard, it's usually because the alternative is chaos.
The Problem MCP Was Built to Solve
Before MCP, connecting AI to different tools was a disaster. And I mean a genuine, pull-your-hair-out disaster.
If you wanted ChatGPT to access your Google Drive, someone had to build a custom integration. Want it to also work with Slack? That's another custom integration. GitHub? Another one. I watched our team spend three weeks building a custom integration for one client, knowing it would be obsolete the moment they wanted to use a different AI tool.
This created what developers call the "N times M problem". If you've got N different AI applications and M different data sources, you need N times M separate integrations. Every new AI tool means building connections to everything again. Every new data source means updating every AI tool.
Frankly, it was exhausting.
Anthropic's official announcement described it bluntly: "Even the most sophisticated models are constrained by their isolation from data, trapped behind information silos and legacy systems."
MCP transforms that N times M nightmare into a much smaller sum: N plus M. Each AI tool and each data source only needs to implement MCP once, and then they all work together. With five AI applications and ten data sources, that's fifteen implementations instead of fifty separate integrations.
Why the "USB-C for AI" Analogy Actually Makes Sense
I've got a drawer in my office. You probably have one too. It's full of cables that only work with one specific device that I threw away three years ago.
Remember when every phone had a different charger? Your laptop needed one cable, your tablet another, your phone a third. I once counted seventeen different charging cables in my house. Seventeen! Now USB-C connects almost everything. It's not perfect, but it's infinitely better than the chaos we had before.
MCP does the same thing for AI. Instead of needing a different "cable" between every AI assistant and every data source, you've got one universal connector. Build an MCP connection once, and it works with any AI that supports the protocol.
This isn't just convenience. It's what makes AI assistants genuinely useful rather than just impressive demo material.
Why Every Major AI Company Said "Yes"
Here's where it gets interesting. Anthropic released MCP on 25 November 2024. Within months, their competitors had signed on.
OpenAI joined in March 2025. Sam Altman didn't mince words: "People love MCP and we are excited to add support across our products." They've rolled it out to the Agents SDK, and ChatGPT desktop support is coming.
Google followed in April 2025. Demis Hassabis, CEO of Google DeepMind, announced on X: "MCP is a good protocol and it's rapidly becoming an open standard for the AI agentic era. We're excited to announce that we'll be supporting it for our Gemini models and SDK."
Microsoft went all in. They've integrated MCP across Semantic Kernel, Azure OpenAI, Dynamics 365, and Copilot Studio. At Build 2025, they announced MCP servers for their entire ERP suite.
Here's my honest take: when companies that compete this fiercely all agree on the same standard, something unusual is happening. These aren't companies that play nice with each other. They've essentially decided that fighting over connection protocols isn't worth it when they could be competing on the AI itself. That's rare. And it tells me MCP isn't going away anytime soon.
Editor's Note (December 2025): Just days before this article's publication, something remarkable happened. On 9 December 2025, the Linux Foundation announced the Agentic AI Foundation (AAIF). Anthropic donated MCP to this new foundation, with OpenAI, Google, Microsoft, AWS, Bloomberg, Cloudflare, and Block as founding members. The three founding projects? MCP, Block's Goose, and OpenAI's AGENTS.md. This isn't just adoption anymore. It's institutionalisation. The consensus I described above just became permanent infrastructure.
What MCP Actually Does for You Today
Let's get concrete. What can you actually do with MCP right now?
Claude can read and write your files. Not just talk about files, but actually access them. Ask Claude to "save this as a document on my desktop" and it happens. Ask "what's in my downloads folder" and you get a real answer.
I'll admit, the first time I asked Claude to save a file and it actually did? I just sat there for a moment. Twenty years of computers training me to expect "I can't do that" and suddenly it could. It felt like the future had arrived quietly while I wasn't paying attention.
AI assistants can access your calendar and notes. With the right MCP servers installed, Claude or ChatGPT can check your schedule, create reminders, or search through your Notion workspace.
Developers get AI that understands their codebase. Tools like Cursor and Cody use MCP to give AI assistants genuine access to project files, git history, and development environments.
The difference between "AI that talks about tasks" and "AI that does tasks" is exactly what MCP enables. It's the reason Claude Code can actually run commands and modify files rather than just suggesting what you should type.
Real Results: What Block Discovered
Block (the company behind Square) partnered with Anthropic early on. They've rolled out MCP across their engineering organisation, and the results are striking.
According to their case study, most employees report saving 50 to 75 percent of their time on common tasks. Work that once took days now gets completed in hours. Their MCP-powered tool, Goose, is used by thousands of Block employees daily.
I'll be honest: I was sceptical of those numbers. Productivity claims in tech are usually inflated by about 300 percent in my experience. But Block's methodology is solid, and they're specific about what tasks improved. This isn't "everything is magically better". It's "these specific workflows that used to be tedious are now fast".
Engineers use it for migrating legacy code, generating unit tests, and streamlining dependency upgrades. Data teams query internal systems, summarise datasets, and automate reporting. The key integrations? Snowflake for data, GitHub and Jira for development workflows, Slack and Google Drive for communication.
Block's CTO put it simply: "Remove the burden of the mechanical so people can focus on the creative."
Bloomberg's also on board. They've publicly announced their support for MCP, noting that experimentation that once took days now happens in minutes.
How MCP Works (Without the Jargon)
You don't need to understand the technical details to use MCP, but here's the simple version. I'm going to explain this the way I wish someone had explained it to me six months ago.
MCP gives AI three types of capabilities:
Resources are things AI can read. Think files, database records, or API responses. The AI can look at them but doesn't change anything.
Tools are actions AI can take. Writing a file, sending a message, running a search. These actually do things in the world.
Prompts are templates that help structure how you interact with AI. They're shortcuts for common tasks.
When you install an MCP server (more on that in a moment), you're essentially giving your AI assistant a new set of eyes and hands for a specific service or type of data.
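If you like peeking under the bonnet, the messages themselves are just JSON-RPC. Here's a rough sketch of the kind of request an AI app sends to an MCP server when it invokes a tool; the write_file name and its arguments are purely illustrative, not something every server offers.

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "write_file",
    "arguments": {
      "path": "meeting-notes.md",
      "content": "Agenda for Thursday..."
    }
  }
}
```

Resources and prompts travel over the same kind of messages (requests like resources/read and prompts/get), and you'll never write any of this by hand. The AI app and the MCP server exchange it behind the scenes; I'm showing it only so the "eyes and hands" idea above feels less like magic.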
How to Get Started (No Coding Required)
If you're using Claude Desktop, getting started is remarkably simple. I set up my first MCP server on a Tuesday afternoon and had it working before dinner. Here's what that process actually looks like.
The easiest path: Desktop Extensions. Open Claude Desktop, go to Settings, then Extensions. Click "Browse extensions" and you'll see a directory of Anthropic-reviewed tools you can install with one click. No configuration files, no command line, no dependencies to manage.
Want more control? Manual configuration works too. You can add MCP servers by editing a configuration file. In Claude Desktop, click Developer in the left sidebar, then Edit Config. This opens a JSON file where you can specify which servers to run.
For example, adding filesystem access looks like this in the config (swap the path for whichever folders you want Claude to be able to see):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/username/Desktop"
      ]
    }
  }
}
```

After restarting Claude Desktop, you'll see a hammer icon in the corner indicating available tools.
Pre-built servers exist for Google Drive, Slack, GitHub, PostgreSQL, and dozens of other services. You're not starting from scratch.
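To give you a feel for what wiring up one of those pre-built servers involves, here's a sketch of a config entry for the GitHub server. I'm going from memory on the package name and the token variable (@modelcontextprotocol/server-github and GITHUB_PERSONAL_ACCESS_TOKEN), so treat this as illustrative and check the server's own documentation before copying it.

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your personal access token>"
      }
    }
  }
}
```

The shape is always the same: a command that launches the server, any arguments it needs, and whatever credentials it expects as environment variables. Swap in a different server and only those details change.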
The Ecosystem Is Growing Fast
The numbers tell the story, and honestly, they surprised me. In November 2024, there were roughly 100 MCP servers. By May 2025, that number had grown to over 4,000. The official AAIF announcement now cites 10,000+ active servers, with GitHub indexing over 20,000 MCP-related implementations. That's not gradual adoption. That's a stampede.
Gartner predicts that by 2026, 75 percent of API gateway vendors will have MCP features built in. The protocol isn't just being adopted; it's becoming infrastructure.
This matters because network effects kick in. More servers mean more things your AI can connect to. More connections mean more value from your AI assistant. More value means more adoption. The flywheel's already spinning.
What About Security?
Here's where I need to be straight with you: MCP is powerful, and power creates risk.
The protocol is designed with a "human in the loop" principle. AI shouldn't take actions without your approval. Good MCP implementations show you what tools are available, ask before executing actions, and make it clear what's happening.
But I'd be lying if I said the security story was perfect. Security researchers have identified real concerns, from prompt injection risks to overly broad permissions. The MCP specification explicitly states that hosts must obtain explicit user consent before invoking any tool. That's reassuring on paper.
In practice? I treat MCP servers like I treat browser extensions: only install what you genuinely need, stick to the official options from Anthropic, and pay attention to what permissions you're granting. I've declined to install a few that seemed useful because I couldn't verify who built them. Maybe I'm paranoid. But healthy paranoia has served me well in tech.
What This Means for Your AI Strategy
If you're evaluating AI tools for your organisation, MCP changes the conversation.
The "one model" approach is increasingly obsolete. With MCP, your AI can tap into multiple data sources seamlessly. You're not locked into one vendor's ecosystem.
Integration costs are dropping. Instead of building custom connections for every tool, you implement MCP once. That's a significant reduction in development and maintenance overhead.
The gap between "demo" and "production" is shrinking. MCP servers make it practical to move from impressive demos to tools that actually connect to your real systems.
Key Takeaways
If you use AI tools daily:
You'll increasingly see MCP-powered features appearing in your favourite apps. The AI that can "only suggest" is being replaced by AI that can "actually do". This is why.
If you're evaluating AI for your organisation:
Ask vendors about MCP support. It's becoming the standard for how AI connects to enterprise data. I've started asking this in every AI tool evaluation. Products without it already feel limited, and that gap's only going to widen.
If you're curious about trying it:
Start with Claude Desktop and its one-click extensions. Honestly, it'll take you about ten minutes. The barrier to entry has never been lower.
Look, I've been wrong about tech predictions before. I thought smartwatches would be a fad. I thought voice assistants would be more useful by now. So take this with a grain of salt.
But MCP feels different. It isn't revolutionary in the sense of doing something impossible. It's revolutionary in the sense of making something practical that was previously painful. When OpenAI, Google, and Microsoft all agree that's worth supporting? When they adopt a competitor's standard rather than fight over their own?
That's not hype. That's consensus. And in tech, consensus usually wins.
With the Agentic AI Foundation now governing MCP's future, the question of whether this standard will stick has been decisively answered. It's no longer Anthropic's protocol. It's the industry's protocol. That changes everything.
---
Sources
- Anthropic. "Introducing the Model Context Protocol." 25 November 2024. https://www.anthropic.com/news/model-context-pr...
- OpenAI Developers. MCP Adoption Announcement. March 2025. https://x.com/OpenAIDevs/status/190495775582948...
- Demis Hassabis. MCP Support Announcement. 9 April 2025. https://x.com/demishassabis/status/191010785904...
- TechCrunch. "OpenAI adopts rival Anthropic's standard for connecting AI models to data." 26 March 2025. https://techcrunch.com/2025/03/26/openai-adopts...
- TechCrunch. "Google says it'll embrace Anthropic's standard for connecting AI models to data." 9 April 2025. https://techcrunch.com/2025/04/09/google-says-i...
- Microsoft. "Integrating Model Context Protocol Tools with Semantic Kernel." 2025. https://devblogs.microsoft.com/semantic-kernel/...
- Block. "MCP in the Enterprise: Real World Adoption at Block." 21 April 2025. https://block.github.io/goose/blog/2025/04/21/m...
- Bloomberg. "Closing the Agentic AI Productionization Gap." July 2025. https://www.bloomberg.com/company/stories/closi...
- Claude Support. "Getting started with local MCP servers on Claude Desktop." 2025. https://support.claude.com/en/articles/10949351...
- Model Context Protocol. "Example Servers." https://modelcontextprotocol.io/examples
- Model Context Protocol. "Architecture Overview." https://modelcontextprotocol.io/docs/learn/arch...
- MCP Evals. "MCP Statistics." 2025. https://www.mcpevals.io/blog/mcp-statistics
- Linux Foundation. "Agentic AI Foundation Formation." 9 December 2025. https://www.linuxfoundation.org/press/linux-fou...
- K2View. "MCP Gartner Insights." 2025. https://www.k2view.com/blog/mcp-gartner/
- Model Context Protocol. "Security Best Practices." https://modelcontextprotocol.io/specification/d...
- Wikipedia. "Model Context Protocol." https://en.wikipedia.org/wiki/Model_Context_Pro...
