Have you ever wanted to connect AI tools like CursorAI or MCP-Inspector to your own custom data sources? Maybe you’ve heard about the Model Context Protocol (MCP) and its ability to bridge large language models (LLMs) with external systems—but you’re not sure where to start. If so, you’re in the right place! In this guide, I’ll walk you through how to implement a Model Context Protocol (MCP) server with Server-Sent Events (SSE) in Python, step-by-step.
By the end, you’ll have a working MCP server that streams real-time updates using SSE, perfect for integrating with AI clients. Whether you’re a beginner or a seasoned developer, this tutorial is designed to be clear, engaging, and packed with practical examples. Let’s dive into the world of MCP and SSE!
What Is a Model Context Protocol (MCP) Server?
Before we get our hands dirty with code, let’s break down what MCP is and why it matters. The Model Context Protocol (MCP) is an open standard developed by Anthropic in November 2024. It’s like a universal adapter that lets AI models (think Claude, ChatGPT, or others) talk to external tools, data sources, and services in a standardized way.
Why Use MCP?
Imagine you’re building an AI-powered app that needs to fetch live data—like weather updates, file contents, or API results—and feed it to an LLM. Without a standard protocol, you’d have to write custom code for every tool or data source. That’s a headache! MCP solves this by providing a consistent framework, so your AI can interact with anything from a database to a web API without reinventing the wheel.
What’s SSE Got to Do with It?
Server-Sent Events (SSE) is one of the transport mechanisms MCP supports (the other being stdio). SSE is a simple, lightweight way to push real-time updates from a server to a client over HTTP. It’s perfect for MCP because it allows the server to stream data—like tool responses or event notifications—to AI clients as soon as it’s ready. Think of it as a one-way text messaging service from the server to the client.
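Under the hood, an SSE stream is just plain text sent over a long-lived HTTP response with the `text/event-stream` content type. Each event is a few `field: value` lines, and a blank line marks the end of an event. A (hypothetical) stream might look like this:

```text
event: ping
data: Server is alive!

event: update
data: {"temperature": 22}
```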
In this guide, we’ll focus on using SSE to implement a Model Context Protocol (MCP) server with SSE, making it accessible over the web for tools like CursorAI and MCP-Inspector.
Why Implement an MCP Server with SSE?
You might be wondering, “Why SSE instead of something else, like WebSockets?” Great question! Here’s why SSE is a fantastic choice for MCP:
- Simplicity: SSE is easier to set up than WebSockets. It uses standard HTTP, so you don’t need a special protocol or complex handshake.
- Real-Time Updates: SSE excels at pushing data from server to client as it happens—ideal for streaming AI tool responses.
- Lightweight: It’s less resource-intensive than WebSockets, making it perfect for small projects or local development.
- MCP Compatibility: The MCP spec explicitly supports SSE for remote server setups, ensuring your server works with modern MCP clients.
Ready to get started? Let’s set up your environment and build an MCP server from scratch!
Prerequisites: What You’ll Need
Before we dive into coding, let’s make sure you’ve got everything you need to implement a Model Context Protocol (MCP) server with SSE. Don’t worry—these are all beginner-friendly tools:
- Python 3.8+: Most systems come with Python pre-installed, but you can download it from python.org if needed.
- pip: Python’s package manager (it comes with Python).
- A Code Editor: Use VS Code, PyCharm, or even a simple text editor like Notepad++.
- Basic Terminal Knowledge: You’ll run a few commands, but I’ll explain each one.
- Internet Connection: For installing dependencies and testing with MCP clients.
Got all that? Awesome! Let’s install the required libraries next.
Step 1: Setting Up Your Project
To implement a Model Context Protocol (MCP) server with SSE, we’ll use Python with a few key libraries. Here’s how to set up your project:
Create a Project Folder
- Open your terminal or command prompt.
- Create a new directory for your project:

```shell
mkdir mcp-sse-server
cd mcp-sse-server
```
Set Up a Virtual Environment
A virtual environment keeps your project’s dependencies separate from your system’s Python setup. Here’s how:
- Create the environment:

```shell
python -m venv venv
```

- Activate it on Windows:

```shell
venv\Scripts\activate
```

- Or on macOS/Linux:

```shell
source venv/bin/activate
```

You'll see `(venv)` in your terminal—proof it's working!
Install Required Libraries
We’ll use:
- FastAPI: A modern web framework for building APIs (and SSE endpoints).
- uvicorn: A server to run FastAPI apps.
- sse-starlette: Adds SSE support to FastAPI.
Install them with one command:

```shell
pip install fastapi uvicorn sse-starlette
```
That’s it! Your environment is ready. Let’s write some code.
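Optionally, you can record these dependencies in a `requirements.txt` so the setup is reproducible (versions left unpinned here; pin them if you need exact builds):

```text
fastapi
uvicorn
sse-starlette
```

Then anyone can recreate the environment with `pip install -r requirements.txt`.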
Step 2: Writing the MCP Server Code
Now comes the fun part—coding your MCP server! We’ll create a simple server that supports SSE and handles basic MCP requests, like initialization and tool queries. Here’s the plan:
- Set up a FastAPI app.
- Add an SSE endpoint for server-to-client streaming.
- Handle POST requests for client-to-server communication.
- Implement basic MCP functionality (e.g., an “initialize” method).
Basic MCP Server Code
Create a file called `server.py` in your project folder and add this code:
```python
import asyncio
import json
import logging

from fastapi import FastAPI, Request
from sse_starlette.sse import EventSourceResponse

# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("mcp-server")

app = FastAPI()

# Store connected clients (for simplicity, using a basic list)
clients = []

# SSE endpoint for streaming events to clients
@app.get("/sse")
async def sse_endpoint():
    async def event_generator():
        # Send an initial "endpoint" event with a message URL
        session_id = "example-session-123"
        yield {
            "event": "endpoint",
            "data": f"/sse/messages?session_id={session_id}"
        }
        # Keep the connection alive with periodic pings
        while True:
            await asyncio.sleep(5)  # Send a ping every 5 seconds
            yield {
                "event": "ping",
                "data": "Server is alive!"
            }

    return EventSourceResponse(event_generator())

# POST endpoint for client messages
@app.post("/sse/messages")
async def post_handler(request: Request):
    try:
        body = await request.json()
        logger.info(f"Received message: {body}")
        # Handle MCP "initialize" method
        if body.get("method") == "initialize":
            response = {
                "jsonrpc": "2.0",
                "id": body.get("id", 0),
                "result": {
                    "protocolVersion": "2024-11-05",
                    "capabilities": {
                        "tools": {"listTools": True},
                        "resources": {}
                    },
                    "serverInfo": {"name": "MyMCPserver", "version": "1.0.0"}
                }
            }
            return response
        # Handle unknown methods
        else:
            return {
                "jsonrpc": "2.0",
                "id": body.get("id", 0),
                "error": {
                    "code": -32601,
                    "message": f"Method '{body.get('method')}' not found"
                }
            }
    except Exception as e:
        logger.error(f"Error processing request: {e}")
        return {"error": "Invalid request"}

# Run the server
if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
Code Breakdown
Let’s unpack what’s happening here:
- Imports: We bring in `FastAPI` for the web framework, `sse-starlette` for SSE support, and `asyncio` for async operations.
- Logging: Basic logging to track what's happening (super helpful for debugging).
- FastAPI App: The `app` object is our server.
- SSE Endpoint (`/sse`): Streams events to clients. It sends an initial "endpoint" event with a URL for POST messages, then keeps the connection alive with periodic "ping" events.
- POST Endpoint (`/sse/messages`): Handles incoming JSON-RPC requests from clients, responds to the "initialize" method with server capabilities, and returns an error for unknown methods.
- Running the Server: Uses `uvicorn` to serve the app on port 8000.
This is a minimal MCP server, but it’s enough to get started with SSE. Let’s test it!
Step 3: Running and Testing Your MCP Server
Time to see your server in action! Follow these steps:
Start the Server
- In your terminal (with the virtual environment active), run:

```shell
python server.py
```

- You should see output like:

```text
INFO: Started server process [12345]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
```

Your server is live at `http://localhost:8000`!
Test the SSE Endpoint
Open a browser or use a tool like `curl` to check the SSE endpoint (the `-N` flag disables output buffering so events appear as they arrive):

```shell
curl -N http://localhost:8000/sse
```

You'll see something like:

```text
event: endpoint
data: /sse/messages?session_id=example-session-123

event: ping
data: Server is alive!
```
The connection stays open, and you’ll get a “ping” every 5 seconds. That’s SSE working!
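If you'd rather inspect the stream from Python than eyeball curl output, here's a minimal sketch of parsing SSE frames by hand. The `parse_sse` helper is hypothetical convenience code, not part of the server; against a live connection you'd feed it the HTTP response body as it streams in:

```python
def parse_sse(stream_text):
    """Parse a raw SSE stream into a list of (event, data) tuples.

    Frames are separated by blank lines; each frame carries
    'event:' and 'data:' fields.
    """
    events = []
    for frame in stream_text.strip().split("\n\n"):
        event, data = None, []
        for line in frame.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data.append(line[len("data:"):].strip())
        events.append((event, "\n".join(data)))
    return events

# Example: the two frames our server emits first
raw = (
    "event: endpoint\n"
    "data: /sse/messages?session_id=example-session-123\n"
    "\n"
    "event: ping\n"
    "data: Server is alive!\n"
)
print(parse_sse(raw))
# → [('endpoint', '/sse/messages?session_id=example-session-123'),
#    ('ping', 'Server is alive!')]
```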
Test with a POST Request
Use `curl` or a tool like Postman to send a JSON-RPC "initialize" request:

```shell
curl -X POST http://localhost:8000/sse/messages \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {"clientInfo": {"name": "test-client"}}}'
```
Expected response:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2024-11-05",
    "capabilities": {
      "tools": {"listTools": true},
      "resources": {}
    },
    "serverInfo": {"name": "MyMCPserver", "version": "1.0.0"}
  }
}
```
Success! Your MCP server is responding to requests via SSE.
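You can also script these requests from Python. The helper below is a hypothetical convenience (not part of MCP or the server): it builds the JSON-RPC 2.0 envelope, and sending it is just a matter of POSTing the string to `/sse/messages` with your HTTP client of choice:

```python
import json
from itertools import count

_ids = count(1)  # auto-incrementing request ids

def jsonrpc_request(method, params=None):
    """Build a JSON-RPC 2.0 request envelope as a JSON string."""
    msg = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

payload = jsonrpc_request("initialize", {"clientInfo": {"name": "test-client"}})
print(payload)
# To send it: POST this string to http://localhost:8000/sse/messages
# with a "Content-Type: application/json" header.
```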
Step 4: Adding MCP Features (Tools, Resources, and Prompts)
A basic MCP server is cool, but it’s not very useful without features. MCP supports three main capabilities: tools, resources, and prompts. Let’s add a simple tool to our server.
What Are Tools, Resources, and Prompts?
- Tools: Functions the AI can call (e.g., “get_weather” or “uppercase_text”).
- Resources: Data the AI can read (e.g., files or database records).
- Prompts: Predefined templates to guide AI responses.
We'll implement a tool called `uppercase_text` that converts text to uppercase.
Update the Server Code
Modify `server.py` to include the tool. Here's the updated version:
```python
import asyncio
import json
import logging

from fastapi import FastAPI, Request
from sse_starlette.sse import EventSourceResponse

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("mcp-server")

app = FastAPI()

clients = []

# Define a simple tool
TOOLS = [
    {
        "name": "uppercase_text",
        "description": "Convert text to uppercase",
        "inputSchema": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"]
        }
    }
]

@app.get("/sse")
async def sse_endpoint():
    async def event_generator():
        session_id = "example-session-123"
        yield {"event": "endpoint", "data": f"/sse/messages?session_id={session_id}"}
        while True:
            await asyncio.sleep(5)
            yield {"event": "ping", "data": "Server is alive!"}

    return EventSourceResponse(event_generator())

@app.post("/sse/messages")
async def post_handler(request: Request):
    try:
        body = await request.json()
        logger.info(f"Received message: {body}")
        method = body.get("method")
        params = body.get("params", {})
        # Handle "initialize"
        if method == "initialize":
            response = {
                "jsonrpc": "2.0",
                "id": body.get("id", 0),
                "result": {
                    "protocolVersion": "2024-11-05",
                    "capabilities": {
                        "tools": {"listTools": True, "callTool": True},
                        "resources": {}
                    },
                    "serverInfo": {"name": "MyMCPserver", "version": "1.0.0"}
                }
            }
            return response
        # Handle "listTools"
        elif method == "listTools":
            return {
                "jsonrpc": "2.0",
                "id": body.get("id", 0),
                "result": {"tools": TOOLS}
            }
        # Handle "callTool"
        elif method == "callTool":
            tool_name = params.get("name")
            if tool_name == "uppercase_text":
                text = params.get("arguments", {}).get("text", "")
                result = text.upper()
                return {
                    "jsonrpc": "2.0",
                    "id": body.get("id", 0),
                    "result": {"output": result}
                }
            else:
                return {
                    "jsonrpc": "2.0",
                    "id": body.get("id", 0),
                    "error": {"code": -32602, "message": "Invalid tool name"}
                }
        # Unknown method
        else:
            return {
                "jsonrpc": "2.0",
                "id": body.get("id", 0),
                "error": {"code": -32601, "message": f"Method '{method}' not found"}
            }
    except Exception as e:
        logger.error(f"Error processing request: {e}")
        return {"error": "Invalid request"}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
What’s New?
- TOOLS List: Defines the `uppercase_text` tool with its input schema.
- listTools Method: Returns the available tools when queried.
- callTool Method: Executes the `uppercase_text` tool and returns the result.

Note: the official MCP specification names these methods `tools/list` and `tools/call`; this tutorial uses simplified camelCase names to keep the example easy to follow.
Test the Tool
Restart the server (`python server.py`), then test the tool:
- List Tools:
```shell
curl -X POST http://localhost:8000/sse/messages \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": 2, "method": "listTools"}'
```
Response:
```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "tools": [
      {
        "name": "uppercase_text",
        "description": "Convert text to uppercase",
        "inputSchema": {
          "type": "object",
          "properties": {"text": {"type": "string"}},
          "required": ["text"]
        }
      }
    ]
  }
}
```
- Call the Tool:
```shell
curl -X POST http://localhost:8000/sse/messages \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": 3, "method": "callTool", "params": {"name": "uppercase_text", "arguments": {"text": "hello world"}}}'
```
Response:
```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {"output": "HELLO WORLD"}
}
```
It works! Your MCP server now supports a functional tool.
Step 5: Connecting to Real MCP Clients
So far, we’ve tested manually, but the real magic happens when you connect your server to MCP clients like CursorAI or MCP-Inspector. Here’s how:
Configure MCP-Inspector
- Install MCP-Inspector (check its GitHub page for instructions).
- Point it to your server's SSE endpoint: `http://localhost:8000/sse`.
- Watch it connect, initialize, and list your tools!
Configure CursorAI or Claude Desktop
For tools like CursorAI or Claude Desktop:
- Edit their config file (e.g., `claude_desktop_config.json` for Claude):
```json
{
  "mcpServers": {
    "my-sse-server": {
      "url": "http://localhost:8000/sse",
      "transport": "sse"
    }
  }
}
```
- Restart the app and interact with your server via the UI.
You’ll see the AI query your tools and use the results—pretty cool, right?
Troubleshooting Common Issues
Running into problems? Here are some fixes:
- SSE Not Connecting: Check your firewall or ensure port 8000 is open.
- Method Not Found: Verify your JSON-RPC request matches the server’s methods.
- Server Crashes: Look at the logs (the `logger.info` and `logger.error` messages) for clues.
Table: MCP Server Methods Overview
| Method | Purpose | Example Request |
|---|---|---|
| initialize | Sets up the connection | `{"method": "initialize", "params": {"clientInfo": {"name": "test"}}}` |
| listTools | Lists available tools | `{"method": "listTools"}` |
| callTool | Executes a tool | `{"method": "callTool", "params": {"name": "uppercase_text", "arguments": {"text": "hi"}}}` |
Conclusion: Your MCP Journey Starts Here
Congratulations—you’ve just learned how to implement a Model Context Protocol (MCP) server with SSE! From setting up FastAPI to adding tools and testing with real clients, you’re now equipped to connect AI models to your custom data sources. This is just the beginning—try adding more tools, resources, or even deploying your server online.
What will you build next? Let me know in the comments, and happy coding!