
Building MCP: Your LLM’s New Backend-For-Frontend
26 Feb 2026
Introduction
Recently, a client for whom we had developed a new set of public APIs came to us wanting to build an agent that helps customers decide what they need to purchase based on their situation. Because the information is highly specific to each individual’s location, and quite esoteric, we investigated using an MCP server as a way for the agent to access that data, acting as middleware in front of the APIs we had already developed.
Why MCP?
Current Large Language Models have some well-known limitations:
- their training data may be out of date (due to the way they are trained)
- they have no access to your secure or private data
- their context window is limited (as a conversation grows, they start to forget earlier parts of it).
MCP servers are a way to get around all of this.
What is an MCP server?
Launched in November 2024 by Anthropic, MCP (Model Context Protocol) is a standard way for LLMs to connect to an external source. It was quickly picked up by other LLM providers such as OpenAI. As of December 2025, MCP has been donated to the Agentic AI Foundation (part of the Linux Foundation).
We have long been able to supply information to LLMs through RAG (Retrieval-Augmented Generation); however, that is a one-way flow, and we couldn’t do much beyond querying and summarising data. MCP adds the other direction, letting us expose tools the LLM can call to perform actions.
Some examples are:
- file system access, to read and write whatever is necessary for the LLM to perform its task
- GitHub access, to
  - create pull requests on your behalf, filling out all the relevant information
  - read existing pull requests to feed information into your current development session.
The possibilities are endless.
In a way, MCP is the USB-C of the AI world (potentially even more than that, given nothing restricts it to LLMs only). Another way to look at it: MCP servers are a kind of BFF (Backend-For-Frontend) for LLMs.
MCP Concepts
An MCP server can expose one or more of the following concepts:
- Tools
- Resources
- Prompts
Tools
These are the endpoints that actually DO the action. Your LLM calls them directly, asking for your permission first (by default, although most MCP clients let you disable this).
Resources
This is additional, read-only data, typically similar to what RAG would provide. You might wonder why we don’t just bake this into the tools. The main reason to use resources is to manage the context window: items here only consume tokens while they are in use, and otherwise don’t contribute to your token use. Resources are addressed with a URI, e.g. todo://item/1
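As an aside, that scheme-style address is an ordinary URI, so standard tooling can pull it apart. The sketch below uses a hypothetical helper (parseTodoUri is not part of any SDK) to show how a server might route todo://item/1 back to a record id:

```typescript
// parseTodoUri is a hypothetical helper: resource URIs like todo://item/1
// are ordinary URIs, so Node's built-in URL class can route them.
function parseTodoUri(uri: string): number {
  const parsed = new URL(uri);
  if (parsed.protocol !== "todo:" || parsed.hostname !== "item") {
    throw new Error(`not a todo item URI: ${uri}`);
  }
  const id = Number(parsed.pathname.slice(1)); // drop the leading "/"
  if (!Number.isInteger(id)) throw new Error(`bad id in URI: ${uri}`);
  return id;
}

console.log(parseTodoUri("todo://item/1")); // → 1
```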
We can also provide Resource Templates, which are something like an OpenAPI definition of the resource endpoint (to borrow a comparison from the REST world).
One nicety of Resource Templates is that they can provide autocompletion of the parameters used to call a resource. For example, if we only have 20 todo items, typing todo://item/1 could prompt you with item 1, or items 10-19, and so on.
This also provides hints to the LLM as to discovery of valid values.
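The SDK handles the protocol plumbing, but the matching behind such a completion is simple. Here is an SDK-independent sketch (completeId is a made-up helper) reproducing the example above:

```typescript
// completeId is a made-up helper showing the matching a completion
// callback for the {id} variable of todo://item/{id} might perform.
function completeId(typed: string, knownIds: number[]): string[] {
  return knownIds.map(String).filter((id) => id.startsWith(typed));
}

const ids = Array.from({ length: 20 }, (_, i) => i + 1); // todo items 1..20
console.log(completeId("1", ids)); // "1", then "10" through "19"
```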
Prompts
These are reusable prompts for invoking the LLM in a way that exercises your MCP server. The server advertises a list of them to the client, so the client can surface example usage. They can also be parameterised, declaring which arguments are required for a given tool or resource call.
Some MCP clients will even turn these into special commands that are easily called from their command line (eg. MCPHub for Neovim will convert them to slash commands like /mcp:todo_list for a prompt named todo_list).
Security Considerations and Limitations
Given that MCP servers were originally intended to run locally, security has been a bit of an afterthought: the early specification was missing basics such as authentication.
Typical early (and simple) MCP servers used stdio [1] to get data to and from the LLM. HTTP transport was another option, but was very basic. The latest specification can use OAuth 2 (so servers can call into services as you), but if you haven’t worked with OAuth before, it can be tricky to get right.
If you are integrating someone else’s MCP server, be warned that it can do many things to your session, such as injecting prompts that reveal your private data [2]. Be very careful about what you run.
There’s the other side too. Remember that any LLM you hook your MCP server up to has access to everything that server does, the same as if you had copied and pasted the data into the chat window itself. Do you trust the model provider? If not, you should run a local LLM via a tool such as Ollama.
Another limitation of MCP lies in its use of the JSON format. Different languages have different quirks when interpreting JSON (an issue shared with REST services), and the same goes for data validation. A tool might expect an ISO-8601 timestamp and receive a Unix epoch instead, causing the model to hallucinate dates rather than fail cleanly.
The spec does support data validation, but doesn’t mandate it.
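Until validation is universal, it can pay to normalise inputs defensively at the MCP-server boundary. A minimal sketch, assuming nothing beyond the standard library (normalizeTimestamp is a hypothetical helper, not part of the MCP SDK):

```typescript
// normalizeTimestamp is a hypothetical helper that coerces the timestamp
// formats a tool might receive into ISO-8601, or fails loudly.
function normalizeTimestamp(value: unknown): string {
  if (typeof value === "number" && Number.isFinite(value)) {
    // Heuristic: anything above ~1e12 is already milliseconds, else seconds
    const ms = value > 1e12 ? value : value * 1000;
    return new Date(ms).toISOString();
  }
  if (typeof value === "string") {
    const parsed = Date.parse(value);
    if (!Number.isNaN(parsed)) return new Date(parsed).toISOString();
  }
  // Throwing cleanly beats handing the model data it will hallucinate around
  throw new Error(`unrecognised timestamp: ${JSON.stringify(value)}`);
}

console.log(normalizeTimestamp(1700000000)); // → "2023-11-14T22:13:20.000Z"
```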
When connecting to an MCP server, we have no easy way to tell which calls are stateful and which are stateless. Unlike REST, which has GET/POST/PUT/PATCH/DELETE, or GraphQL, which has queries and mutations, everything just happens as an undifferentiated request. (Newer spec revisions do add advisory tool annotations such as readOnlyHint and destructiveHint, but these are untrusted hints.) In practice we can only hope the descriptions on the server endpoints you’re using will inform you of this.
Debugging chains of MCP service calls can also be a pain. By default the LLM doesn’t provide any kind of correlation id, so digging through log files becomes cumbersome.
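One mitigation is to generate your own correlation id around each tool handler. A sketch, assuming a Node-based stdio server (withCorrelationId is a made-up wrapper; note the logging goes to stderr, since stdout is reserved for the client):

```typescript
import { randomUUID } from "node:crypto";

// withCorrelationId is a hypothetical wrapper: it tags every call to a tool
// handler with a fresh id so related log lines can be grepped together.
type Handler<A, R> = (args: A) => Promise<R>;

function withCorrelationId<A, R>(name: string, handler: Handler<A, R>): Handler<A, R> {
  return async (args: A) => {
    const cid = randomUUID();
    // stderr is safe for stdio servers; stdout would corrupt the protocol
    console.error(`[${cid}] -> ${name}`, JSON.stringify(args));
    try {
      const result = await handler(args);
      console.error(`[${cid}] <- ${name} ok`);
      return result;
    } catch (err) {
      console.error(`[${cid}] <- ${name} failed:`, err);
      throw err;
    }
  };
}

// Usage: wrap a handler before registering it as a tool
const demo = withCorrelationId("get", async ({ id }: { id: number }) => ({ id }));
```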
How do you write an MCP server?
Despite the limitations listed above, it’s still definitely worth writing an MCP server, and given how simple they are to add, it may even be worthwhile putting one in front of the majority of REST/GraphQL servers you write.
The following example is a very basic TODO server; the full project can be found on GitHub.
First we need our REST server: a very basic one with no authentication, writing to a SQLite database, and offering just basic CRUD operations. Given that this isn’t the focus of this article, just grab the code from GitHub.

Next we need to write our MCP server. Conveniently, Anthropic provides a very nice set of libraries to make this easy in various languages. They all work much the same way, so I’m just using TypeScript for this example (note: the GitHub version has some extra convenience methods not shown here).
// index.ts
// Boilerplate for bringing up the basic server
import {
  McpServer,
  ResourceTemplate,
} from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

type Todo = {
  id: number;
  task: string;
  completed: boolean;
  created_at: string;
};

// Create an MCP server
const server = new McpServer({
  name: "todo-server",
  version: "0.0.1",
  capabilities: {
    resources: {}, // This lets us use resource endpoints
  },
});
Next we register our tools (just a couple of basic ones here; the rest is on GitHub). In reality, for such a simple service we wouldn’t use resource links but would return the data directly. However, the results could potentially be quite large and use up your whole context window.
server.registerTool(
  "list",
  {
    title: "List all todo ids",
    description: "List all the ids of todo items",
  },
  async () => {
    const data = (
      await fetch("http://localhost:3000/todos").then((res) => res.json())
    ).data;
    return {
      content: [
        { type: "text", text: `Found ${data.length} items` },
        ...data.map((item: Todo) => ({
          type: "resource_link",
          uri: `todo://item/${item.id}`,
          name: String(item.id),
          description: JSON.stringify(item.task),
          mimeType: "application/json",
          annotations: {
            audience: ["assistant"],
            priority: 0.9,
          },
        })),
      ],
    };
  },
);
server.registerTool(
  "get",
  {
    title: "Get todo by id",
    description: "Get a single todo item by its id",
    inputSchema: { id: z.number() }, // Validate the input
  },
  async ({ id }) => {
    const data: Todo = (
      await fetch(`http://localhost:3000/todos/${id}`).then((res) => res.json())
    ).data;
    if (data) { // data is a single Todo, not an array
      return {
        content: [
          { type: "text", text: "found todo item" },
          {
            type: "resource_link",
            uri: `todo://item/${id}`,
            name: String(id),
            description: data.task,
            mimeType: "application/json",
            annotations: {
              audience: ["assistant"],
              priority: 0.9,
            },
          },
        ],
      };
    }
    throw new Error("item not found");
  },
);
Next we add our resource endpoints. We can optionally provide a list of all the resources for a given template:
server.registerResource(
  "all",
  "todo://all",
  {
    title: "All todo items",
    description: "All todo items",
    mimeType: "application/json",
  },
  async (uri) => {
    const data = await fetch(`http://localhost:3000/todos`).then((res) =>
      res.json(),
    );
    return {
      contents: [
        {
          mimeType: "application/json",
          uri: uri.href,
          text: JSON.stringify(data),
        },
      ],
    };
  },
);
server.registerResource(
  "item",
  new ResourceTemplate("todo://item/{id}", { list: undefined }),
  {
    title: "Single todo item",
    description: "Get a single todo item",
    mimeType: "application/json",
  },
  async (uri, { id }) => {
    const data = await fetch(`http://localhost:3000/todos/${id}`).then((res) =>
      res.json(),
    );
    return {
      contents: [
        {
          mimeType: "application/json",
          uri: uri.href,
          text: JSON.stringify(data),
        },
      ],
    };
  },
);
Finally we start our server:
// Start receiving messages on stdin and sending messages on stdout
const transport = new StdioServerTransport();
await server.connect(transport);
And that’s it: our basic MCP server is written. We could also provide prompts, or autocomplete for resources, but I’ll leave those as exercises for the reader.
One thing to note is that you need to provide properly written names and descriptions for the LLM to parse, so it knows what it should be calling. Descriptions can often end up looking like an agent prompt. For example, if using the Context7 MCP service, one of the descriptions is:
Resolves a package/product name to a Context7-compatible library ID and returns a list of matching libraries. You MUST call this function before 'get-library-docs' to obtain a valid Context7-compatible library ID UNLESS the user explicitly provides a library ID in the format '/org/project' or '/org/project/version' in their query.
Selection Process:
1. Analyze the query to understand what library/package the user is looking for
2. Return the most relevant match based on:
- Name similarity to the query (exact matches prioritized)
- Description relevance to the query's intent
- Documentation coverage (prioritize libraries with higher Code Snippet counts)
- Trust score (consider libraries with scores of 7-10 more authoritative)
Response Format:
- Return the selected library ID in a clearly marked section
- Provide a brief explanation for why this library was chosen
- If multiple good matches exist, acknowledge this but proceed with the most relevant one
- If no good matches exist, clearly state this and suggest query refinements
For ambiguous queries, request clarification before proceeding with a best-guess match.
So don’t worry about keeping descriptions short. They are mainly for the LLM to read anyway, so more context is better.
Now that this is set up, you need to provide it to your tool of choice. Typically this is a JSON file containing the command to launch the server, though it will vary depending on which client you are using. Here is an example for VSCode/MCPHub/Claude; they all end up looking similar:
{
  "mcpServers": {
    "todo-server": {
      "command": "tsx",
      "args": ["index.ts"]
    }
  }
}
If you are just debugging and would like to call the endpoints directly, there’s an inspector server you can run:
npx @modelcontextprotocol/inspector tsx index.ts
This will automatically open your default browser; if it doesn’t, or you wish to use something else, instructions are printed on the command line.
So now, what can we do with it? Fire up your chat window and try asking for your todo list. Some chat windows will require you to reference the resource (e.g. #todo-server) and possibly the tools (e.g. @todo).

Potential use cases
A good example on the modelcontextprotocol.io website uses MCP servers for your calendar and for booking a holiday. You provide a list of things you are interested in; it works out which locations would suit, and the best time of year to see them, then looks at your calendar to find when you’re free and books your tickets.
Closer to home at Shine Solutions, here are some basic ideas I have come up with (and this just scratches the surface of what could be done):
- One of our clients has many legal documents for different types of activities. These can be overwhelming for a non-professional, especially when different jurisdictions have different requirements, and engaging a legal professional to find the required documents can be a cost barrier. Imagine being able to log in to a website and say “hey, I’d like to do this at that location, which documents do I require?”. We have already written services to modernise this client with REST endpoints; we could add an MCP server on top of them and hook it up to a modern frontend to provide this functionality.
- Another client supplies physical products to professionals in the trades. To repair an item, a tradesperson needs to look up which products they need for the job based on what is currently installed. It would be great for them to be able to select a model number ahead of time, get a list of products that interface with that model, learn what commonly causes issues, and order products before even arriving on site.
Both of these cases already have the data available via APIs, so it’s just a matter of wiring it all up to an LLM so that it can easily make sense of that data.
Resources
- https://modelcontextprotocol.io
- MCP : Demystifying MCP Resources vs. Tools: A Practical Guide for Agentic Automation
- Why MCP’s Disregard for 40 Years of RPC Best Practices Will Burn Enterprises
Footnotes
- [1] How the command line interacts with the system, using stdin, stdout and stderr. When writing a stdio server, anything written to stdout goes to the client; anything written to stderr is safe. ↩︎
- [2] Top 10 MCP Security Risks ↩︎
