MCP (Model Context Protocol) is gaining traction as a standard way for AI tools to call external services. Most examples show MCP clients in TypeScript. Here's how I built one in Python, wiring an npm-based MCP server into a FastAPI async backend.
The Idea
The MCP server I needed wraps the Jira API -- it exposes tools like jira_search_issues, jira_create_issue, and jira_get_transitions. The server speaks JSON-RPC 2.0 over stdio: you spawn it as a subprocess, write requests to its stdin, and read responses from its stdout.
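To make the wire format concrete, here is what a tool call and its reply look like as newline-delimited JSON. The `tools/call` method with `name`/`arguments` params follows the MCP convention; the specific arguments and result payload here are illustrative:

```python
import json

# One JSON-RPC 2.0 request per line on the server's stdin.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "jira_search_issues",
        "arguments": {"jql": "project = MYPROJ"},
    },
}
wire_line = json.dumps(request) + "\n"  # newline-delimited framing

# The matching response comes back on stdout, tagged with the same id.
response = json.loads('{"jsonrpc": "2.0", "id": 1, "result": {"content": []}}')
```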
Spawning the Server
Python's asyncio.create_subprocess_exec handles this cleanly:
self._proc = await asyncio.create_subprocess_exec(
    "npx", "--yes", "@xuandev/atlassian-mcp",
    stdin=asyncio.subprocess.PIPE,
    stdout=asyncio.subprocess.PIPE,
    stderr=asyncio.subprocess.PIPE,
    env=env,
)
Credentials (API token, domain, email) go into the env dict -- no hardcoding, no extra config files.
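A minimal sketch of building that env dict. The credential variable names below are my own placeholders, not ones mandated by the server; the important detail is starting from the parent environment so `npx` can still find node on `PATH`:

```python
import os

def build_env(domain: str, email: str, token: str) -> dict:
    # Inherit the parent environment so npx and node resolve normally.
    env = dict(os.environ)
    # Variable names are illustrative; use whatever the server expects.
    env["ATLASSIAN_DOMAIN"] = domain
    env["ATLASSIAN_EMAIL"] = email
    env["ATLASSIAN_API_TOKEN"] = token
    return env
```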
The MCP Handshake
Before calling any tools, MCP requires an initialization exchange. Send an initialize request, get capabilities back, then send a notifications/initialized one-way notification. Only after this handshake are tool calls available.
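Expressed against the `_rpc` and `_send` helpers described later in this post, the handshake is two calls. The protocol version string and client info below are illustrative values, not requirements:

```python
async def handshake(rpc, send):
    """MCP init exchange, given the client's _rpc and _send as callables."""
    # Request/response: advertise client info, receive server capabilities.
    caps = await rpc("initialize", {
        "protocolVersion": "2024-11-05",  # illustrative version string
        "capabilities": {},
        "clientInfo": {"name": "jira-mcp-client", "version": "0.1.0"},
    })
    # One-way notification: no "id" field, so no response follows.
    await send({"jsonrpc": "2.0", "method": "notifications/initialized"})
    return caps
```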
Request/Response Correlation
The tricky part with stdio JSON-RPC is that server notifications can arrive interleaved with responses. The solution: attach an incrementing id to every request and discard incoming messages until one carrying the matching id arrives.
async def _rpc(self, method, params, timeout=30.0):
    msg_id = self._next_id
    self._next_id += 1
    await self._send({"jsonrpc": "2.0", "id": msg_id,
                      "method": method, "params": params})
    while True:
        resp = await self._recv(timeout=timeout)
        if resp.get("id") != msg_id:
            continue
        if "error" in resp:
            raise JiraMcpError(f"MCP error: {resp['error']}")
        return resp.get("result")
Each _recv awaits proc.stdout.readline(), so the loop yields to the event loop between reads -- no busy-waiting.
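For completeness, a sketch of what that `_recv` might look like as a free function over the subprocess's stdout stream, assuming the newline-delimited framing used throughout this post:

```python
import asyncio
import json

class JiraMcpError(Exception):
    pass

async def recv_message(stdout: asyncio.StreamReader,
                       timeout: float = 30.0) -> dict:
    # readline() suspends this coroutine until a full line arrives,
    # so other tasks on the event loop keep running in the meantime.
    line = await asyncio.wait_for(stdout.readline(), timeout)
    if not line:
        raise JiraMcpError("MCP server closed its stdout")
    return json.loads(line)
```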
Lifecycle as a Context Manager
Wrapping everything in an async context manager keeps usage clean:
async with JiraMcpClient() as client:
    issues = await client.search_issues(
        "project = MYPROJ ORDER BY priority ASC")
    new_key = await client.create_issue(
        "MYPROJ", "Fix login bug", "Steps to reproduce...")
The subprocess spawns on __aenter__ and gets killed on __aexit__, exception or not.
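The lifecycle pattern boils down to this sketch (a stripped-down stand-in for `JiraMcpClient`, minus the handshake and env wiring):

```python
import asyncio

class ManagedProcess:
    """Minimal spawn-in-__aenter__, kill-in-__aexit__ lifecycle sketch."""

    def __init__(self, *argv):
        self._argv = argv
        self._proc = None

    async def __aenter__(self):
        self._proc = await asyncio.create_subprocess_exec(
            *self._argv,
            stdin=asyncio.subprocess.PIPE,
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
        )
        return self

    async def __aexit__(self, exc_type, exc, tb):
        # Runs on normal exit and on exceptions alike.
        self._proc.kill()
        await self._proc.wait()  # reap the child, avoid zombies
        return False  # never swallow the caller's exception
```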
What I Learned
The stdio transport is underrated. No ports, no HTTP server, no authentication layer -- just a process. If you need to integrate an MCP-compatible server into a backend that already uses asyncio, this pattern works well and is surprisingly easy to test: mock the subprocess with AsyncMock and feed it canned JSON-RPC responses.
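To illustrate that testing claim, here is one way to build a mock subprocess whose stdout replays canned JSON-RPC lines; the `make_fake_proc` helper name is mine:

```python
import asyncio
import json
from unittest.mock import AsyncMock, MagicMock

def make_fake_proc(*responses):
    """Mock subprocess whose stdout.readline() replays canned JSON-RPC lines."""
    proc = MagicMock()
    lines = [json.dumps(r).encode() + b"\n" for r in responses]
    proc.stdout.readline = AsyncMock(side_effect=lines)
    proc.stdin.write = MagicMock()
    proc.stdin.drain = AsyncMock()
    return proc

# Inject this in place of the real npx subprocess, then assert the
# client parses the canned result.
fake_proc = make_fake_proc(
    {"jsonrpc": "2.0", "id": 1, "result": {"issues": []}})
```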
The full client -- handshake, tool dispatch, response parsing, feature-flag gating -- came in under 200 lines of Python. That's a reasonable price for full Jira integration without touching the REST API directly.