If you're like me, you've spent the last few months watching MCP hype build while asking one question: "Should I actually build one of these for my product?"
I'll confess something: I built my first MCP server before properly asking that question. Classic developer mistake. Got excited about the tech, spent a weekend hacking, then realised I didn't actually need it for that project. The code's still sitting in a repo somewhere, gathering digital dust.
So let me save you that weekend.
The Model Context Protocol has genuine momentum. OpenAI, Google, and Microsoft have all adopted it. In December 2025, Anthropic donated MCP to the newly formed Agentic AI Foundation, with all three competitors as founding members. Block reports 50 to 75 percent time savings on common engineering tasks. The ecosystem's grown from 100 servers at launch to over 5,500 servers with 31 million weekly downloads.
But momentum isn't the same as "right for your project". Let's work through this properly.
The Decision Framework: When MCP Makes Sense
Before diving into code, you need to answer a more fundamental question: does your product actually benefit from MCP integration?
Build an MCP Server When...
You're solving the M times N problem. If your users are constantly asking "can your app work with Claude?" and "does it integrate with Cursor?" and "what about ChatGPT?", MCP lets you build one integration that works with all of them. One server, many clients.
Your product has data that AI would benefit from accessing. If you've got a database, an API, or a document store that users frequently ask AI about, an MCP server makes that data directly accessible rather than requiring copy-paste gymnastics.
You're building developer tools or productivity software. The current MCP ecosystem skews heavily toward development workflows. GitHub, Jira, Slack, databases. If your product fits that category, you're building for an audience that's already using MCP.
AI agents need to take actions in your system. Beyond just reading data, if there's value in letting AI create records, trigger workflows, or modify state in your application, MCP's tool capability handles that cleanly.
You want to participate in the AI ecosystem rather than be locked out of it. This is strategic. As more AI assistants adopt MCP, products without MCP servers become harder to integrate. You're building for future compatibility.
Don't Build an MCP Server When...
A simple API would do the job. If your integration needs are straightforward and you're not expecting AI clients specifically, a well-documented REST API is simpler to build and maintain. MCP adds a layer you might not need. I've seen too many developers reach for the new shiny thing when a boring REST endpoint would've been done in an afternoon.
Your data is too sensitive for any AI access. MCP servers expose data to AI models. If your compliance requirements prohibit that, or if the security overhead isn't worth the benefit, skip it. And be honest with yourself here. "We'll figure out the compliance later" is not a strategy.
You don't have resources to maintain it. This isn't a "build and forget" technology. SDKs update, security patches need applying, and the protocol itself is still evolving. Budget for ongoing maintenance.
An existing MCP server already covers your use case. Check the official server list before building. If someone's already built a PostgreSQL server or a GitHub server, you probably don't need another one.
You need maximum performance for time-sensitive operations. MCP adds a reasoning layer between the request and your data. For real-time applications, stock tickers, IoT sensors, or anything latency-critical, direct API calls are faster.
Understanding the Architecture
If you've decided to build, let's talk about what you're actually building.
MCP Server vs MCP Client: Which Are You Building?
Most developers building MCP integrations are building servers. Here's the distinction:
MCP Clients are AI applications that connect to servers. Claude Desktop is a client. Cursor is a client. ChatGPT with MCP support is a client. Unless you're building an AI assistant yourself, you're not building a client.
MCP Servers are lightweight programs that expose your data and functionality. They're what you build to make your product AI-accessible. A server that exposes your app's database, or wraps your API, or provides access to your document storage.
MCP Hosts are the container applications that run clients. Claude Desktop hosts the Claude client. Your IDE hosts its MCP client. You're probably not building this either.
The architecture documentation explains this as a "client-host-server model" where each host can run multiple clients, and each client maintains a one-to-one relationship with a particular server.
The Three Capabilities: Resources, Tools, and Prompts
Your MCP server can expose three types of things:
Resources are read-only data. File contents, database records, API responses. The AI can look at them but can't change them. Think of resources as the "eyes" you're giving the AI into your system. They're exposed via URIs and accessed through resources/list and resources/get methods.
Tools are actions the AI can execute. Write a file, send a message, create a record, trigger a workflow. These are the "hands". The AI discovers them via tools/list and executes them via tools/call. Tools require explicit user consent before invocation. That's not optional.
Prompts are reusable templates that structure how users interact with your server. They're shortcuts for common tasks. Less commonly implemented than resources and tools, but useful for complex workflows.
Tasks are a newer capability added in the November 2025 spec update. They enable asynchronous "call-now, fetch-later" operations for long-running work. If your server needs to handle operations that take significant time (large file processing, complex API calls), tasks let you return immediately and report progress over time. Still maturing, but worth knowing about.
The typical pattern is: start with resources (read-only is safer), add tools once you've validated the integration works, and prompts or tasks if you need them.
Transport Mechanisms: stdio vs HTTP
Your server needs to communicate with clients somehow. Two main options:
stdio is for local servers. The client launches your server as a subprocess, sends JSON-RPC messages to stdin, receives responses from stdout. It's fast (no network overhead), simple to implement, and perfect for desktop applications. This is how Claude Desktop typically runs local MCP servers.
Streamable HTTP is for remote servers. Added in the March 2025 spec update, it's the current recommended approach for web-hosted servers. Your server exposes an HTTP endpoint, clients POST requests to it, and you can optionally use Server-Sent Events for streaming responses.
SSE (Server-Sent Events) is the legacy remote approach. Still supported for backward compatibility, but new implementations should use Streamable HTTP.
The decision is straightforward: local application? Use stdio. Web service? Use Streamable HTTP.
Important breaking change: The June 2025 spec update removed JSON-RPC batching support (which had only been added in March 2025). If you were relying on batched requests, you'll need to update your implementation to send individual requests instead.
SDK Showdown: TypeScript vs Python
Both official SDKs are actively maintained and feature-complete. Here's how to choose.
Fair warning: I've got opinions here. I've built with both, and they're different experiences.
TypeScript SDK
Current version:1.24.3 (as of December 2025)
Install:npm install @modelcontextprotocol/sdk zod
The TypeScript SDK is mature and well-documented. It's the natural choice if you're already in a Node.js ecosystem, building for web deployment, or prefer static typing.
Here's a minimal server:
import { Server } from "@modelcontextprotocol/sdk/server/index.js"; import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js"; import { CallToolRequestSchema, ListToolsRequestSchema, } from "@modelcontextprotocol/sdk/types.js"; const server = new Server( { name: "my-server", version: "1.0.0" }, { capabilities: { tools: {} } } ); server.setRequestHandler(ListToolsRequestSchema, async () => ({ tools: [{ name: "greet", description: "Return a greeting", inputSchema: { type: "object", properties: { name: { type: "string" } }, required: ["name"], }, }], })); server.setRequestHandler(CallToolRequestSchema, async (request) => { if (request.params.name === "greet") { const { name } = request.params.arguments as { name: string }; return { content: [{ type: "text", text: `Hello, ${name}!` }] }; } throw new Error(`Unknown tool: ${request.params.name}`); }); const transport = new StdioServerTransport(); await server.connect(transport);Strengths: Strong typing, good IDE support, extensive examples in the repo, handles Streamable HTTP well.
Watch out for: Zod is a peer dependency (v3.25+). Don't forget to install it. I wasted 20 minutes once wondering why my server wouldn't start. It was Zod. It's always Zod.
Python SDK
Current version:1.23.2 (as of December 2025)
Requires: Python 3.10, 3.11, 3.12, or 3.13
Install:uv add "mcp[cli]" or pip install "mcp[cli]"
The Python SDK includes FastMCP, a high-level abstraction that makes building servers remarkably concise. It's the better choice for data science workflows, when wrapping existing Python APIs, or if you prefer less boilerplate.
Here's the equivalent server in Python:
from mcp.server.fastmcp import FastMCP mcp = FastMCP("my-server") @mcp.tool() def greet(name: str) -> str: """Return a greeting""" return f"Hello, {name}!" if __name__ == "__main__": mcp.run(transport="stdio")That's it. FastMCP handles the schema generation from your type hints.
Strengths: FastMCP is genuinely elegant, async/await support throughout, excellent for data pipelines.
I'll admit it: the first time I saw FastMCP in action, I got annoyed. Not because it's bad. Because it's so good that it made my TypeScript server look verbose by comparison. If you're a Python developer, FastMCP is one of those "why isn't everything this simple" moments.
Watch out for: Python 3.10+ is mandatory. Older Python versions won't work. Yes, you need to upgrade your ancient Python 3.8 installation. No, there isn't a workaround.
Other SDKs
The ecosystem's broader than just TypeScript and Python:
- Kotlin SDK: Maintained with JetBrains, supports JVM and WebAssembly
- Java SDK:Spring AI integration, announced at Build 2025
- C# / .NET: Maintained with Microsoft, available as
ModelContextProtocolon NuGet - Go, Ruby, Rust, Swift: Community maintained, varying levels of maturity
Microsoft's also published a comprehensive MCP for Beginners curriculum covering all major languages.
Security: The Part the Tutorials Skip
Here's where it gets serious. And honestly, it's the part that keeps me up at night.
MCP is powerful, and power creates attack surface. Most tutorials gloss over this because security isn't sexy. But you're shipping code that exposes your system to AI models. That's a big deal, and pretending otherwise is negligent.
In April 2025, security researchers published findings that should inform every MCP implementation. Simon Willison's analysis and Trail of Bits' research identified real vulnerabilities. Read both. Seriously. Before you ship anything.
Critical security update (December 2025):CVE-2025-6514 was disclosed with a CVSS score of 9.6. It affects the mcp-remote package's OAuth implementation. If you're using remote MCP connections with OAuth, ensure you're on the latest SDK versions. The protocol now officially treats MCP servers as OAuth 2.0 Resource Servers (not Authorization Servers), requiring RFC 8707 Resource Indicators.
The Core Problem: Prompt Injection
LLMs trust anything that sounds convincing. When your MCP server returns data to an AI model, that data becomes part of the conversation. If an attacker can control what your server returns, they can inject instructions the AI will follow.
This isn't theoretical. Researchers demonstrated attacks where malicious content in tool descriptions could manipulate model behaviour before any tool was even called.
Line-Jumping Attacks
Trail of Bits coined "line jumping" for attacks that work during the connection phase, not during tool execution.
Here's what happens: when a client connects to your server, it requests available tools via tools/list. Your server returns tool descriptions. Those descriptions get added to the model's context. If a malicious server includes prompt injection payloads in its tool descriptions, the attack happens before any tool is invoked.
The implication: MCP's "human in the loop" protections for tool execution don't help if the attack happens during discovery.
This is what frustrates me about the "human in the loop" talking point. It sounds reassuring. But if you're compromised before you even click anything, what good is that protection?
Tool Poisoning Attacks
A related vulnerability that's emerged since: tool poisoning. This is where malicious instructions are embedded directly in tool descriptions. The tool might look legitimate ("fetch weather data"), but its description contains hidden instructions that manipulate the AI's behaviour. Astrix Security's 2025 report found that 53% of MCP servers use insecure long-lived credentials, making supply chain attacks a real concern.
The mitigation? Audit every tool description your server exposes. Keep descriptions minimal and functional. Don't accept tool definitions from untrusted sources without inspection.
What You Must Do
The official security documentation is clear, but let me emphasise the critical points:
Explicit user consent is mandatory before any tool invocation. The spec says "SHOULD have a human in the loop". Treat that as "MUST". Show users what tools are available. Confirm before executing. Make it impossible to miss.
Sanitise everything your server returns. Data from your database could contain prompt injection attempts. Escape anything that could be interpreted as instructions. Don't trust user-generated content.
Don't store OAuth tokens insecurely. MCP servers often need credentials for the services they access. Use your OS keychain, not plaintext config files.
Only expose what's necessary. The principle of least privilege applies. If your server doesn't need write access, don't implement write tools. If certain data shouldn't be AI-accessible, don't expose it as a resource.
Alert users when tool definitions change. The "rug pull" attack works by changing a tool's behaviour after initial approval. Your UI should notice and flag this.
Implementation Patterns That Work
Based on what's working in production:
Pattern 1: Read-Only Resource Server
Start here. Expose data as resources, no tools. Users can ask AI about your data, but the AI can't modify anything. This is the safest pattern and often provides 80% of the value.
from mcp.server.fastmcp import FastMCP mcp = FastMCP("my-data-server") @mcp.resource("records://{record_id}") async def get_record(record_id: str) -> dict: """Fetch a record by ID""" return await database.get_record(record_id) @mcp.resource("summary://latest") def get_summary() -> str: """Get a summary of recent activity""" return generate_summary()Pattern 2: Tool Server with Confirmation
When you need write operations, implement them as tools with clear descriptions. The host application handles user confirmation; your job is making the action's effects obvious.
@mcp.tool() async def create_record( title: str, content: str, category: str = "general" ) -> dict: """ Create a new record in the database. This will permanently add a record visible to all users. """ record = await database.create_record( title=title, content=content, category=category ) return {"id": record.id, "created": True}Pattern 3: Multi-Transport Server
For production deployments, you'll often want both local (stdio) and remote (HTTP) access. The SDK supports this:
if __name__ == "__main__": import sys transport = sys.argv[1] if len(sys.argv) > 1 else "stdio" if transport == "http": mcp.run(transport="streamable-http", port=3000) else: mcp.run(transport="stdio")Lessons from Production
Block's Approach
Block's enterprise case study reveals their key decisions:
All MCP servers are authored by their own engineers. They don't use community servers for internal data. This ensures security and quality control.
OAuth for service-level authorisation. Tokens stored in native system keychains, not config files.
Curated server list. Engineers don't install arbitrary servers. There's an approved set, expanded based on validation.
Focus on high-value integrations. Snowflake for data, GitHub and Jira for development, Slack and Google Drive for communication. They didn't try to MCP-enable everything.
Microsoft's Dynamics 365 Journey
Microsoft announced at Build 2025 their Dynamics 365 MCP server with 13 curated tools. By Ignite 2025, they'd evolved to a dynamic server exposing "hundreds of thousands of ERP functions".
The lesson: start curated, expand based on real usage patterns. Their recommendation? Claude Sonnet 4.5 for agents using their ERP server. Model choice matters.
The Honest Assessment
Let me be direct about what you're signing up for. No hype, just reality:
MCP won't kill APIs. The relationship is complementary, not adversarial. MCP servers consume APIs. They don't replace them. Your API skills remain valuable. Every time someone writes "MCP is the future and REST is dead", I roll my eyes so hard I can see my brain.
Documentation is challenging. I've spent hours digging through GitHub issues to understand things that should've been in the docs. Multiple developers have noted the same thing. Budget time for exploration. Budget time for frustration. Accept that you'll be reading source code when the docs fail you.
This is real development effort. Building and maintaining MCP servers requires dedicated resources. Quality, reliability, and security are critical. It's not a weekend project. Well, it can be a weekend project. Just not a good one.
But it's probably worth it. If you're building tools for developers or knowledge workers, MCP access is becoming expected. The ecosystem's real, the adoption's genuine, and the benefits are measurable. I wouldn't be writing this guide if I didn't think it mattered.
As one analysis put it: "You can live without MCP. It's not revolutionary but brings standardisation to the otherwise chaotic space of agentic development."
That standardisation is the point. And standardisation tends to win.
Your First 2 Hours
If you've decided to build, here's how I'd spend my first two hours. Learn from my mistakes:
- Choose your SDK. TypeScript for web-heavy work, Python for data-heavy work. Don't overthink this. Pick the one you know better.
- Start with one resource. Read-only, simple, safe. Resist the urge to build something clever. You'll have time for clever later.
- Test with Claude Desktop. Add your server to the config, restart, verify it appears. When it doesn't work the first time (it won't), check your JSON syntax. It's always the JSON syntax.
- Add one tool. Something low-risk that demonstrates the pattern. A "hello world" that does something real. Seeing Claude call your code for the first time is genuinely satisfying.
- Review security. Before going further, audit what you're exposing and how. This is where most developers skip ahead. Don't be most developers.
The official quickstart walks through this in detail. Microsoft's MCP for Beginners provides a more comprehensive curriculum if you prefer structured learning.
Key Takeaways
Build if:
- Multiple AI clients need access to your data (M times N problem)
- Your users already ask about AI integration
- You're in the developer tools / productivity space
- You can commit to ongoing maintenance
Wait if:
- A REST API would suffice
- Your data has strict compliance requirements
- You're resource-constrained
- There's no clear AI use case yet
The future isn't MCP versus APIs. It's MCP plus APIs, working together to make AI genuinely useful rather than just impressive.
And look, if you've read this far and you're still unsure? That's probably your answer. Build an MCP server when you're excited about what it enables, not because it feels like something you should do. The best integrations I've seen came from developers who had a specific problem they were tired of solving manually.
Find your problem first. The protocol can wait.
---
Sources
- Block. "MCP in the Enterprise: Real World Adoption." 21 April 2025. https://block.github.io/goose/blog/2025/04/21/m...
- MCP Evals. "MCP Statistics." 2025. https://www.mcpevals.io/blog/mcp-statistics
- Model Context Protocol. "Architecture Specification." https://modelcontextprotocol.io/specification/2...
- Model Context Protocol. "Transports Specification." https://modelcontextprotocol.io/specification/2...
- Model Context Protocol. "Example Servers." https://modelcontextprotocol.io/examples
- GitHub. "MCP TypeScript SDK." https://github.com/modelcontextprotocol/typescr...
- GitHub. "MCP Python SDK." https://github.com/modelcontextprotocol/python-sdk
- Simon Willison. "Model Context Protocol has prompt injection security problems." 9 April 2025. https://simonwillison.net/2025/Apr/9/mcp-prompt...
- Trail of Bits. "Jumping the line: How MCP servers can attack you before you ever use them." 21 April 2025. https://blog.trailofbits.com/2025/04/21/jumping...
- Model Context Protocol. "Security Best Practices." https://modelcontextprotocol.io/specification/d...
- Microsoft. "Dynamics 365 ERP Model Context Protocol." 11 November 2025. https://www.microsoft.com/en-us/dynamics-365/bl...
- Spring. "MCP Java SDK Released." February 2025. https://spring.io/blog/2025/02/14/mcp-java-sdk-...
- GitHub. "Microsoft MCP for Beginners." https://github.com/microsoft/mcp-for-beginners
- Zuplo. "Why MCP Won't Kill APIs." 2025. https://zuplo.com/blog/why-mcp-wont-kill-apis
- Shakudo. "What is MCP?" 2025. https://www.shakudo.io/blog/mcp-model-context-p...
- Treblle. "MCP vs Traditional APIs." 2025. https://treblle.com/blog/mcp-vs-traditional-api...
- Linux Foundation. "Agentic AI Foundation Formation." December 2025. https://www.linuxfoundation.org/press/linux-fou...
- MCP Blog. "One Year Anniversary Spec Release." November 2025. http://blog.modelcontextprotocol.io/posts/2025-...
- Model Context Protocol. "June 2025 Changelog." https://modelcontextprotocol.io/specification/2...
- Astrix Security. "State of MCP Server Security 2025." https://astrix.security/learn/blog/state-of-mcp...
- Practical DevSecOps. "MCP Security Vulnerabilities." 2025. https://www.practical-devsecops.com/mcp-securit...
