What is MCP and Why Should You Care?

Another week has gone by in the AI space and MCP continues to pick up steam. The Model Context Protocol (MCP) was introduced in November 2024, but has seen most of its growth since it was explained in-depth at the AI Engineer Summit in April 2025.

So, what is MCP? And why should you care?

Just like ChatGPT drove LLMs into the mainstream public view, MCP is driving Agentic AI into the spotlight. MCP provides a standard way for AI Agents and tools to interact, removing the need for large amounts of custom code to hook a tool up to a model or to connect models so they can work together.

MCP is kinda like the Swiss Army Knife of Agentic AI. With an LLM, you have your basic knife. Just as a normal knife is really good at cutting and can help with other things in a pinch, an LLM is really good at natural language but can't really do anything beyond that. MCP lets you slap a screwdriver on your LLM and make it good at other tasks too. And you can keep slapping on more tools, or bringing in other, specialized models, until you have the AI Agent you need for the problems you're solving.

MCP allows you to focus on the tools and agents you're adding to your model rather than worrying about the glue that's holding it all together.

Since Anthropic introduced MCP, many big players, including OpenAI and Google, have backed the protocol and supported its use. We've since seen an explosion of MCP servers and the clients that use them.

So, how does this all work?

Anthropic has some great resources to help you understand this in-depth, but let me provide a high-level overview.

Let's assume you have something like Claude Desktop running on your computer. Let's also assume that you often want to refer your LLM to files on your machine. Perhaps you're a writer and have various drafts saved on your machine and you want your LLM to provide you feedback on those drafts.

Out of the box, Claude cannot access files on your system. So, if you want feedback on a draft, you need to copy and paste the entire draft into the chat window and ask for feedback. This can get cumbersome the more you do it and the larger your drafts are.

Enter MCP.

With a file system MCP, you can give Claude access to the files on your system. Once you configure it, you can chat with Claude about files on your machine. Claude will recognize that you're talking about local files, see that it has a tool available for finding them, and invoke that tool to retrieve the file and use it as part of the context for responding to your prompts.
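For Claude Desktop, this kind of configuration lives in a JSON file (`claude_desktop_config.json`). A minimal sketch for the official filesystem server might look like this; the directory path is just an example, and you'd point it at wherever your drafts live:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/Documents/drafts"
      ]
    }
  }
}
```

With this in place, Claude Desktop launches the server locally and only grants it access to the directories you list.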

Said another way, MCP is like giving your LLM hands to interact with things it otherwise could never interact with.

The amazing thing is that anyone can make these. If you work for a software company, you could make an MCP for your software to allow your users to interact with your software through their LLM. As more and more traffic is directed through LLMs, this will be a business advantage to you.

With MCP, there are two main aspects: clients and servers. An MCP client is where the MCP's functions get used; for example, the user's laptop running a model with tools configured for that model.

An MCP server is where you define what the MCP can do. Typically these are written by companies that want to let you access their products and services through your LLM, though anyone can write one.
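To make that concrete, here's a toy sketch in Python of what an MCP server boils down to: a registry of named tools and a dispatcher for incoming JSON-RPC-style requests. This is not the real MCP SDK (the official SDKs handle all of this plumbing for you), and names like `handle_request` are made up for illustration:

```python
import json

# Toy tool registry -- the real MCP SDK manages this for you.
TOOLS = {}

def tool(fn):
    """Register a function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def read_file(path: str) -> str:
    """A hypothetical file system tool: return a file's contents."""
    with open(path, encoding="utf-8") as f:
        return f.read()

def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC-style 'tools/call' request to a registered tool."""
    req = json.loads(raw)
    result = TOOLS[req["params"]["name"]](**req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
```

The server's whole job is to advertise what tools exist and run them on request; everything else is transport.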

In order for an MCP to work, you'll need an MCP client such as Claude Desktop or fast-agent. You can then configure MCPs in those applications and hook them up to your models. Some MCPs are Python packages or Docker images that your client downloads and executes locally. Others are remote servers that your client connects to over HTTP or SSE. If you have data privacy concerns, it's important to investigate the nature of these MCPs: what data they have access to, how they use that data, and what they can provide to (or inject into) your prompts.
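Whichever transport is used, what travels between client and server is JSON-RPC 2.0 messages, which is the wire format MCP is built on. Here's a sketch of the kind of request a client sends to invoke a tool; the tool name and arguments are hypothetical, but the `tools/call` method and message shape follow MCP's convention:

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request like the ones MCP clients send,
    whether over stdio to a local process or over HTTP to a remote server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = make_tool_call(1, "read_file", {"path": "draft.md"})
```

Because every client and server speaks this same message format, any client can talk to any server without custom glue code, which is the whole point of the protocol.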

Once you have an MCP client configured with the MCPs you want, your model can then utilize those tools to perform actions it otherwise would've been unable to do.

MCP is particularly exciting because it makes it possible to provide AI abilities to your users without hosting your own model. You can provide an MCP to your users, and they can configure their own models to use it. Their models can then leverage the tools you expose to interact with your product. Exposing abilities via MCP reduces your operating cost, since you don't have to host a model yourself. Or, if you do still host a model, you can offload a portion of the abilities you want your users to have to the MCP and reserve your model for the tasks that differentiate you and your product.

So, why should you care about MCP?

Well, you might not care about the details of the protocol itself, but this much is certain: MCP is reshaping the world of Agentic AI, bringing the benefits of these workflows to the mass market. While you may not care about the details, I'm certain you'll care about being able to strap more tools to your LLM so that it can be more useful for you.

We're not far from the day when you can tell your LLM about a trip and it will make all the arrangements on its own: book flights, reserve hotels, purchase tickets, and anything else you need. MCP is one critical piece that brings us a step closer to having AI execute complex, nondeterministic workflows like that on your behalf, freeing you up to do more of the things you love rather than getting mired in the details of everyday tasks.
