Intro to MCP: Teach LLMs to do anything
Learn how MCP fits into the bigger picture of AI
Published
Mar 30, 2025
Topic
Artificial Intelligence
A long, long time ago (like 3ish years ago?), LLMs could only complete your sentence. Then OpenAI launched ChatGPT and turned this autocompleting robot into a chat assistant. This was game-changing — but the innovation didn’t stop there. Eventually, they figured out how to give LLMs the ability to use tools, and that changed everything.
Why Do Models Need Tools?
Imagine you ask an LLM, "Which is bigger: 9.11 or 9.9?"
Traditionally, the model would just autocomplete based on its training data — and sometimes it would get the answer right, sometimes not. It wasn’t reliable.
Now imagine you ask the same question to an LLM that has access to a Python runtime and can safely run code. If the model realizes this is a math problem best solved with code, it can compute the answer directly and give you the correct result reliably.
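As a sketch, this is the kind of snippet the model's code tool might run behind the scenes (the actual tool name and runtime vary by app):

```python
# Compare the two values numerically instead of guessing from training data.
a, b = 9.11, 9.9
bigger = max(a, b)
print(f"{bigger} is bigger")  # → 9.9 is bigger
```

The point is that `9.9 > 9.11` as numbers, even though "11" looks bigger than "9" as text, which is exactly the kind of trap an autocompleting model falls into without tools.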
Clearly, tool usage helps models solve real tasks much better. But wait, weren't tools already a thing before Anthropic came out with MCP? Well, yes.
So, What Is an MCP?
MCP — or Model Context Protocol — is just a standard way to call tools. It defines a set of rules or an interface that tool builders and tool users can follow.
This isn't the only interface that exists, but since Anthropic introduced it — and it caught on within the community — it's likely going to stick.
Following this protocol means more users and more tools can easily work together.
Think of it like the HTTP protocol — it standardized how browsers communicate with servers, which massively accelerated the growth of the internet. Similarly, MCP is expected to standardize and accelerate the LLM tool ecosystem.
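To make "standard interface" concrete: under the hood, MCP messages are JSON-RPC 2.0. Below is roughly the shape of a server's response to a `tools/list` request, which is how a tool advertises itself to any MCP-compliant app. The `run_python` tool and its schema here are made-up examples, not part of the spec:

```python
import json

# Illustrative tools/list response: each tool declares a name, a description,
# and a JSON Schema describing the arguments it accepts.
tool_listing = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "run_python",
                "description": "Execute a Python snippet and return its output",
                "inputSchema": {
                    "type": "object",
                    "properties": {"code": {"type": "string"}},
                    "required": ["code"],
                },
            }
        ]
    },
}
print(json.dumps(tool_listing, indent=2))
```

Because every server describes its tools in this one shape, a client app only has to understand this format once to work with any tool.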
Previously, if you built a tool for an LLM, it would only work with a particular model (or models from the same company). Work was being repeated unnecessarily.
With MCP, someone can build a tool once using the MCP interface, and any model that supports MCP can use it.
In reality, though, it's not the model itself that implements MCP. It's the app running the model that does. Different models still call tools using their own schemas, but the app you use to interact with the model (like Claude Desktop, or Copilot in VS Code) implements the MCP protocol.
That way, any MCP-compliant tool becomes usable, regardless of the underlying model.
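A sketch of that translation step: the host app takes a tool call in the model's own schema and forwards it as a standard MCP `tools/call` request. The model-side field names (`tool`, `input`) are hypothetical here; only the JSON-RPC envelope and the `tools/call` method follow MCP:

```python
import json

def to_mcp_request(model_call: dict, request_id: int) -> str:
    """Translate a model-native tool call into an MCP tools/call message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",  # standard MCP method for invoking a tool
        "params": {
            "name": model_call["tool"],        # which tool to invoke
            "arguments": model_call["input"],  # the tool's arguments
        },
    })

# The app would send this message to the MCP server over stdio or HTTP.
msg = to_mcp_request({"tool": "get_weather", "input": {"city": "Paris"}}, 1)
print(msg)
```

Swapping in a different model only changes the left-hand side of this translation; the MCP side, and therefore every MCP tool, stays the same.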
Essentially, the responsibility of ensuring cross-model compatibility has shifted from the tool builder to the chat app builder. This means people can build much better tools, and those tools work across different models by default. It has already led to the rise of MCP tool marketplaces.
Why This Matters
MCP is working. People can build a tool once and have it be compatible across apps and models. The real winners here are the users, since they can now use whatever model they want, with whatever tools they want.