Understanding MCP: The Model Context Protocol
MCP (Model Context Protocol) is quickly becoming one of the most important technologies for integrating language models into real workflows. Most developers are familiar with “tool calling” or “function calling” from systems like OpenAI’s API, but MCP takes this concept further, turning it into a standardized, extensible, and interoperable ecosystem.
This article walks you through MCP, starting with the familiar concept of tool calling, and then building toward its broader capabilities: resources, prompts, file system access, and interoperability.
MCP as Organized Tool Calling
At its simplest, MCP is organized and standardized tool calling. Just like OpenAI’s function calling or LangChain’s tools, MCP lets a model invoke actions outside its own reasoning process, such as:
- Fetching data from an API
- Running computations
- Writing to a file
- Performing side effects in the real world
The difference is that MCP not only standardizes how tools are called, but also how they are discovered, described, and exposed to models and clients. If function calling feels like sending a one-off recipe to a model, MCP feels like giving it an entire kitchen with labeled ingredients and appliances.

Figure 1: A comparison showing how plain function calling is a single request/response, while MCP provides a richer context layer with tools, resources, prompts, and streaming capabilities.
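To make the baseline concrete, here is a minimal sketch of what plain, one-off tool dispatch looks like on the client side. The tool name and stub implementation are illustrative, not tied to any particular vendor API:

```python
import json

def get_weather(city: str) -> dict:
    # Stub standing in for a real weather API call (illustrative only).
    return {"city": city, "forecast": "sunny"}

# Static registry: the model can only call what is listed here.
TOOLS = {"getWeather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Execute a model-requested tool call and return a JSON result string."""
    fn = TOOLS[tool_call["name"]]
    result = fn(**json.loads(tool_call["arguments"]))
    return json.dumps(result)

print(dispatch({"name": "getWeather", "arguments": '{"city": "Oslo"}'}))
```

Everything MCP adds — discovery, resources, prompts — is layered on top of this basic request/execute loop.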
Dynamic Discovery vs. Static Registration
One of the main differences between MCP and traditional function calling is dynamic discovery.
Static Function Calling
With classic tool/function calling:
- You provide a static list of tools when you make the API request.
- The model can only use the tools you specify for that session.
- Adding a new tool usually means updating and redeploying code.
Example:
```json
{
  "tools": [
    {
      "name": "getWeather",
      "description": "Get weather for a city",
      "parameters": { "type": "object", "properties": { ... } }
    }
  ]
}
```
Dynamic Discovery with MCP
Instead, with MCP, the process is much more flexible:
- The client asks the server: “What tools, resources, and prompts do you have?”
- The server replies with a live registry:
```json
{
  "tools": [
    { "name": "getWeather", "inputSchema": { ... }, "description": "..." },
    { "name": "summarizeDocument", "inputSchema": { ... }, "description": "..." }
  ],
  "resources": [
    { "uri": "file:///workspace/notes.md", "kind": "file" }
  ]
}
```
This means you can:
- Add or remove tools on the fly.
- Dynamically expose new capabilities without redeploying.
- Let users review and approve tools before they are available to the model.
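The server side of this can be sketched as a live registry that clients query rather than a hardcoded list. This is an illustrative simulation, not the real MCP SDK — an actual server would expose the same listing over JSON-RPC:

```python
class ToolRegistry:
    """Sketch of MCP-style dynamic discovery: tools can be registered or
    removed at runtime, and clients always see the current listing."""

    def __init__(self):
        self._tools = {}

    def register(self, name, description, input_schema, handler):
        self._tools[name] = {
            "description": description,
            "inputSchema": input_schema,
            "handler": handler,
        }

    def unregister(self, name):
        self._tools.pop(name, None)

    def list_tools(self):
        # What a client would receive in response to a tools/list request.
        return [
            {"name": n, "description": t["description"], "inputSchema": t["inputSchema"]}
            for n, t in self._tools.items()
        ]

registry = ToolRegistry()
registry.register(
    "getWeather",
    "Get weather for a city",
    {"type": "object", "properties": {"city": {"type": "string"}}},
    lambda city: {"city": city, "forecast": "sunny"},
)
print([t["name"] for t in registry.list_tools()])  # tools appear without redeploying
```

Registering a new tool makes it visible on the very next listing — no restart, no redeploy.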
Why It Matters
Dynamic discovery brings several advantages:
- Hot-swappable capabilities – tools appear immediately without restarting the system.
- User safety – clients can request consent for sensitive tools.
- Composable systems – clients can aggregate tools from multiple servers.
- Future-proofing – as MCP evolves, clients adapt without breaking.
Interoperability Between Clients and Servers
Another major advantage of MCP is its interoperability. Previously, each function-calling approach was vendor-specific:
- OpenAI function calling worked only with OpenAI.
- Anthropic’s tool schema worked only with Anthropic.
- LangChain’s tools needed LangChain.
MCP defines a vendor-neutral protocol, allowing:
- Any MCP-compliant client to talk to any MCP-compliant server.
- Tools to be written once and used by multiple models and providers.
- Multiple servers to be combined into one session.
- Clients and servers to be swapped independently.

Figure 2: MCP clients act like a hub, discovering and connecting to multiple servers and exposing their tools and resources to any compatible model.
This is a big deal because it reduces vendor lock-in and creates a plug-and-play ecosystem. It’s like the shift from proprietary networking to HTTP: once there was a standard, browsers could talk to any server, and the web exploded.
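This interoperability works because MCP messages are plain JSON-RPC 2.0, with standard method names such as tools/list and tools/call defined by the spec. A minimal sketch of message construction (the transport — stdio or HTTP — is omitted):

```python
import json

def jsonrpc_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request of the kind MCP clients and servers
    exchange. Method names like "tools/list" come from the MCP spec."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Any MCP client can send these to any MCP server, regardless of vendor:
print(jsonrpc_request(1, "tools/list"))
print(jsonrpc_request(2, "tools/call",
                      {"name": "getWeather", "arguments": {"city": "Oslo"}}))
```

Because the wire format is vendor-neutral, swapping out the model or the server doesn't change these messages at all.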
Beyond Tools: MCP’s Other Capabilities
MCP supports far more than tools alone, which makes it a kind of “runtime environment” for models.

Figure 3: MCP doesn’t just support tools — it includes resources, prompts, and file system access for a complete model runtime environment.
Resources
Resources are data or documents the server exposes. They can be browsed, fetched, and referenced dynamically:
- Files (e.g., file:///workspace/project/README.md)
- Database tables (e.g., db://sqlite/mydb/users)
- External data endpoints
This lets models pull in relevant data without embedding everything in the prompt.
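A server-side sketch of the idea, assuming an in-memory URI-to-content map (the URIs and contents here are illustrative; a real server would back them with actual files or databases):

```python
# Resources: URIs mapped to content that a model can fetch on demand
# instead of having everything inlined into the prompt up front.
RESOURCES = {
    "file:///workspace/notes.md": "# Notes\nProject kickoff is Monday.",
    "db://sqlite/mydb/users": '[{"id": 1, "name": "Ada"}]',
}

def list_resources():
    """What a client sees when it asks the server for available resources."""
    return [{"uri": uri} for uri in RESOURCES]

def read_resource(uri: str) -> str:
    """Fetch one resource by URI, failing loudly on unknown URIs."""
    if uri not in RESOURCES:
        raise KeyError(f"unknown resource: {uri}")
    return RESOURCES[uri]

print(read_resource("file:///workspace/notes.md"))
```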
Prompts
Prompts are parameterized templates the server shares with clients:
- Standardizes prompt design
- Allows versioning and reuse
- Makes multi-client collaboration consistent
Example:
```json
{
  "prompts": [
    {
      "name": "summarize_text",
      "description": "Summarizes a document in a given style",
      "arguments": {
        "text": "string",
        "tone": { "enum": ["formal", "casual"] }
      },
      "template": "Summarize the following text in a {tone} tone:\n{text}"
    }
  ]
}
```
File System Access
MCP can expose a safe virtual file system, enabling models to:
- List directories
- Read files
- Write or edit files (with user consent)
This allows robust coding assistants to work safely without giving unrestricted system access.
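The core of that safety is path confinement. Here is a hedged sketch of the guard a file-system server needs (Python 3.9+ for Path.is_relative_to; real servers also handle consent prompts and symlink edge cases):

```python
from pathlib import Path

def safe_resolve(root: Path, requested: str) -> Path:
    """Resolve a model-requested path, refusing anything that escapes
    the sandbox root -- e.g. '../etc/passwd' traversal attempts."""
    candidate = (root / requested).resolve()
    if not candidate.is_relative_to(root.resolve()):
        raise PermissionError(f"{requested} escapes the workspace root")
    return candidate

root = Path("/workspace")
print(safe_resolve(root, "project/README.md"))  # allowed: stays under /workspace
# safe_resolve(root, "../etc/passwd")           # raises PermissionError
```

Every list, read, or write request goes through a check like this before touching the disk, so the model only ever operates inside the workspace it was granted.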
Putting It All Together
MCP is much more than a tool calling mechanism. It gives models:
- Tools – to take action
- Resources – to consult external data
- Prompts – to use consistent templates
- File System Access – to persist or modify artifacts in a controlled way
Together, these create a complete, standardized, and safe runtime environment. They make MCP an essential building block for developers who want future-proof model integrations that are composable, interoperable, and secure.
Follow Me For More Content
Thanks for reading! If you liked this post, please consider following me on Twitter and LinkedIn for more ML and AI content.