In my previous post, I covered how to interact with a local LLM from .NET using LM Studio. In this post, I’m going to take that a little further and build an MCP server - which lets tools like Cursor, VS Code, and LM Studio itself call into your code.
MCP (Model Context Protocol) is an open standard that lets AI tools discover and invoke “tools” that you define. Think of it as a contract between an AI assistant and your code: you describe what your tools do, and the AI decides when to call them. The full code is in my offline-ai project.
MCP Server
Here’s the important (and most of the) code in Program.cs for an MCP server:
var builder = Host.CreateApplicationBuilder(args);
builder.Services.AddSingleton<McpFileLogger>();
builder.Services.AddSingleton<LmStudioClient>(sp =>
{
var logger = sp.GetRequiredService<McpFileLogger>();
// baseUrl and configModel come from configuration - e.g. "http://localhost:1234"
// and the name of the model LM Studio is serving
return new LmStudioClient(baseUrl, configModel, logger);
});
builder.Services
.AddMcpServer()
.WithStdioServerTransport()
.WithToolsFromAssembly();
await builder.Build().RunAsync();
Obviously Microsoft are doing the heavy lifting here, but I strongly suspect the weight isn’t too large!
The Important Bits…
AddMcpServer() - registers the MCP server in the DI container.
WithStdioServerTransport() - tells it to communicate over stdin/stdout, which is how MCP clients (like LM Studio) talk to the server.
WithToolsFromAssembly() - automatically discovers all the tools in your assembly.
That’s all she wrote. The next part is defining the tools…
Defining a Tool
An MCP tool is just a static method with a couple of attributes. Here’s one that creates a file:
[McpServerToolType]
public static class FileTools
{
[McpServerTool, Description("Creates or overwrites a file at the specified path with the given content.")]
public static string CreateFile(
McpFileLogger logger,
[Description("Full or relative path where the file will be created")]
string path,
[Description("Content to write into the file")]
string content)
{
var resolvedPath = Path.GetFullPath(path);
var directory = Path.GetDirectoryName(resolvedPath);
if (!string.IsNullOrEmpty(directory))
Directory.CreateDirectory(directory);
File.WriteAllText(resolvedPath, content);
return $"Successfully created file at: {resolvedPath} ({content.Length} bytes)";
}
}
The class has [McpServerToolType] and each tool method has [McpServerTool]. The Description attributes are important - they’re what the AI reads to decide whether to call your tool and what to pass it. The parameters annotated with [Description] become the tool’s input schema.
The McpFileLogger parameter is resolved from the DI container automatically. You can inject any registered service into your tool methods - the MCP SDK handles it.
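The logger itself isn’t shown here. A minimal version might just append timestamped lines to a file - the class name matches the repo, but this body is my own sketch, not the actual implementation:

```csharp
// Sketch of a file-based logger (assumed implementation - see the repo for the
// real one). Stdout is reserved for MCP protocol traffic, so a file (or stderr)
// is the safest place to write diagnostics.
public sealed class McpFileLogger
{
    private readonly string _logPath;
    private readonly object _lock = new();

    public McpFileLogger(string? logPath = null)
        => _logPath = logPath ?? Path.Combine(AppContext.BaseDirectory, "mcp-server.log");

    public void Log(string message)
    {
        // Serialise writes - multiple tool calls may log concurrently.
        lock (_lock)
        {
            File.AppendAllText(_logPath, $"{DateTime.UtcNow:O} {message}{Environment.NewLine}");
        }
    }
}
```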
Here’s a second tool on the same class:
[McpServerTool, Description("Appends content to an existing file, or creates the file if it does not exist.")]
public static string AppendToFile(
McpFileLogger logger,
[Description("Full or relative path to the file")]
string path,
[Description("Content to append to the file")]
string content)
{
var resolvedPath = Path.GetFullPath(path);
var directory = Path.GetDirectoryName(resolvedPath);
if (!string.IsNullOrEmpty(directory))
Directory.CreateDirectory(directory);
File.AppendAllText(resolvedPath, content);
return $"Appended {content.Length} bytes to: {resolvedPath}";
}
All very straightforward!
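Following the same pattern, a read tool is equally trivial. This one is my own illustration rather than a tool from the repo, but it shows how little ceremony each addition needs:

```csharp
// Illustrative third tool following the same pattern (not necessarily in the
// repo's tool set).
[McpServerTool, Description("Reads and returns the content of a file at the specified path.")]
public static string ReadFile(
    McpFileLogger logger,
    [Description("Full or relative path to the file")]
    string path)
{
    var resolvedPath = Path.GetFullPath(path);
    if (!File.Exists(resolvedPath))
        return $"File not found: {resolvedPath}";
    return File.ReadAllText(resolvedPath);
}
```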
Adding LLM-Backed Tools
Because the MCP server connects to a local LLM via LM Studio, we also need tools that let the AI client chat with the local model:
[McpServerToolType]
public static class LmStudioTools
{
[McpServerTool, Description("Send a message to the local LLM and get a chat-style response.")]
public static async Task<string> ChatWithLlm(
LmStudioClient lmStudio,
McpFileLogger logger,
[Description("The message or prompt to send to the LLM")]
string message)
{
var response = await lmStudio.ChatAsync(message);
return response;
}
}
The LmStudioClient is the same HttpClient wrapper from the previous post - it calls LM Studio’s OpenAI-compatible endpoint at http://localhost:1234/v1/chat/completions. Because it’s registered in the DI container, the MCP SDK injects it automatically.
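For reference, the shape of that wrapper is roughly this - a sketch assuming LM Studio’s OpenAI-compatible API, with the constructor and ChatAsync signatures matching the usage above (the real class is covered in the previous post):

```csharp
using System.Net.Http.Json;
using System.Text.Json;

// Sketch of an HttpClient wrapper for LM Studio's OpenAI-compatible
// /v1/chat/completions endpoint. Names match the usage in this post;
// the body is illustrative, not the repo's exact implementation.
public sealed class LmStudioClient
{
    private readonly HttpClient _http = new();
    private readonly string _baseUrl;
    private readonly string _model;
    private readonly McpFileLogger _logger;

    public LmStudioClient(string baseUrl, string model, McpFileLogger logger)
        => (_baseUrl, _model, _logger) = (baseUrl, model, logger);

    public async Task<string> ChatAsync(string message)
    {
        // Standard OpenAI-style chat payload.
        var payload = new
        {
            model = _model,
            messages = new[] { new { role = "user", content = message } }
        };

        var response = await _http.PostAsJsonAsync($"{_baseUrl}/v1/chat/completions", payload);
        response.EnsureSuccessStatusCode();

        // Pull the assistant's reply out of choices[0].message.content.
        using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        return doc.RootElement
            .GetProperty("choices")[0]
            .GetProperty("message")
            .GetProperty("content")
            .GetString() ?? string.Empty;
    }
}
```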
The NuGet Package
There’s a single NuGet package (still in preview at the time of writing):
Install-Package ModelContextProtocol -Version 0.9.0-preview.2
Along with the standard hosting packages:
<ItemGroup>
<PackageReference Include="Microsoft.Extensions.Hosting" Version="9.0.0" />
<PackageReference Include="ModelContextProtocol" Version="0.9.0-preview.2" />
</ItemGroup>
Plugging It Into Your Client
Once you’ve got the server, you need to tell your AI tool where to find it. Tools like Cursor, VS Code, et al. have an MCP config (my understanding is this is a non-standard standard):
{
"mcpServers": {
"mcp-server-offline-llm": {
"command": "C:\\path\\to\\publish\\McpServerOfflineLlm.exe",
"args": []
}
}
}
The exe needs to be published self-contained (e.g. dotnet publish -c Release --self-contained); otherwise the machine running it needs the .NET runtime installed.
Logging
Because MCP uses stdin / stdout for its protocol, you can’t use Console.WriteLine for debugging. Anything you write to stdout gets interpreted as MCP messages. You need to write to stderr instead:
Console.Error.WriteLine("Bad things happened!");
Console.Error.WriteLine("Or... good things happened - all are errors!");
Bad Intentions
The project also includes a standalone host that demonstrates something more interesting: using the LLM to classify user intent before deciding which tool to call.
The idea is that instead of relying on keyword matching (e.g. “does the message contain the word ‘create’?”), you ask the LLM to classify the user’s intent as structured JSON:
{
"intent": "create_file",
"confidence": 0.95,
"reason": "Explicit create with path and content",
"path": "C:\\temp\\notes.txt",
"content": "Hello World"
}
If the confidence is above a threshold (0.8), the host calls the MCP tool. Otherwise, it falls back to standard chat. The classification uses a low temperature (0.1) for near-deterministic output - the same structured JSON technique from the previous post.
This is essentially using one LLM call to decide whether to make a second one - or to call a tool instead. It’s a useful pattern if you want your MCP server to be smarter about when it acts.
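In code, the gate is just a deserialize-and-compare. The IntentResult record and method names here are my own; the repo’s standalone host implements the equivalent:

```csharp
using System.Text.Json;

// Sketch of the intent-classification gate (names are illustrative).
// The LLM returns snake_case JSON like the example above; a naming policy
// maps it onto the record's properties.
public sealed record IntentResult(
    string Intent, double Confidence, string Reason, string? Path, string? Content);

public static async Task HandleMessageAsync(LmStudioClient lmStudio, string userMessage)
{
    // First LLM call: classify the user's intent as structured JSON
    // (in the real host this uses a low temperature, 0.1).
    var json = await lmStudio.ChatAsync($"Classify this request as JSON: {userMessage}");

    var result = JsonSerializer.Deserialize<IntentResult>(json,
        new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.SnakeCaseLower });

    if (result is { Intent: "create_file", Confidence: >= 0.8, Path: not null })
    {
        // Confident enough: invoke the CreateFile MCP tool directly.
        // ... call the tool via the MCP client ...
    }
    else
    {
        // Not confident: fall back to a second, ordinary chat call.
        var reply = await lmStudio.ChatAsync(userMessage);
        Console.Error.WriteLine(reply);
    }
}
```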
Summary
Not exactly rocket science for the implementation. The MCP server itself is about 12 lines. Each tool is a static method with attributes. The SDK handles discovery, serialisation, transport, and DI injection. If you can write a .NET method, you can write an MCP tool.
It’s interesting because, since using tools like Cursor, Claude, et al., I’ve started moving away from .NET - it has too many dependencies, and managing them is suddenly the most arduous part of writing code. But MS have done an amazing job of making the MCP abstraction simple and easy to use.
The full source is on GitHub.
References
Model Context Protocol Specification