
The FastMCP wrapper-layer bug your unit tests won't catch

mcp · fastmcp · python · testing · bugs · asyncio

We shipped a Twitter-reader MCP server with 42 passing unit tests. Every reader function, mocked with pytest-httpx, behaved exactly as specified. Coverage was high. The static type-checker was clean. CI was green. We pushed it live and started using it.

The first time the model called a tool, the server returned:

RuntimeError: asyncio.run() cannot be called from a running event loop

Every single tool. All four of them. Every call. The bug was uniform.

Forty-two passing unit tests, and the very first real call broke. This post is about why that happens, the one-line fix, and the lint hook we open-sourced so you don’t have to learn this the way we did.

The shape of the bug

FastMCP — the high-level wrapper bundled with the official MCP Python SDK — lets you register tools with a decorator:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-server")

@mcp.tool()
def get_account_status() -> dict:
    """Tells the client about the account."""
    return _query_api()

The @mcp.tool() decorator inspects the function’s signature, generates a JSON-Schema description, registers it with the server, and routes incoming MCP protocol calls to it. Beautiful. Five lines and you have a tool.
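To make the mechanism concrete, here is a toy sketch of what "inspect the signature, generate a schema" means. This is not FastMCP's actual implementation — it is a simplified stand-in (every parameter is typed `"string"`, which real schema generation does not do):

```python
import inspect

def describe(func):
    # Toy sketch of schema generation from a signature (not FastMCP's real code).
    sig = inspect.signature(func)
    props = {name: {"type": "string"} for name in sig.parameters}
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "inputSchema": {"type": "object", "properties": props},
    }

def get_account_status() -> dict:
    """Tells the client about the account."""
    return {}

schema = describe(get_account_status)
print(schema["name"], "-", schema["description"])
# get_account_status - Tells the client about the account.
```

The point is that the decorator sees only the function object: its name, signature, and docstring. It has no idea whether the body will try to start a second event loop.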

Now: what if _query_api() is async? You have a fully-async client because you’re calling an HTTP API and want concurrent fetches. Most of your codebase is async def. Your inner function is async def _query_api() -> dict.

You write what looks like the obvious shape:

import asyncio

@mcp.tool()
def get_account_status() -> dict:
    return asyncio.run(_query_api())  # bridge sync tool to async work

Sync wrapper, async work, asyncio.run() to run the coroutine. Looks fine in isolation. Tests pass — your tests import _query_api directly and await it; they never go through the wrapper. The static checker is happy. The thing imports cleanly.

You ship it. The model invokes the tool. The server raises:

RuntimeError: asyncio.run() cannot be called from a running event loop

Why this happens

FastMCP runs the MCP server on its own asyncio event loop. When a tool is invoked over the protocol, the loop calls your tool function. If the tool is async def, the loop awaits it directly. If the tool is sync def, FastMCP runs it inline on the loop.

asyncio.run() creates a new event loop and runs the coroutine on it. If a loop is already running in the current thread — which it is, because we’re inside FastMCP’s loop — asyncio.run() raises immediately. There’s no way to nest event loops in standard asyncio.
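You can reproduce the failure in a few lines with no FastMCP involved — any sync function that calls asyncio.run() from inside a running loop hits the same wall:

```python
import asyncio

async def inner():
    return 42

def sync_bridge():
    # Tries to start a second event loop while one is already running.
    return asyncio.run(inner())

async def main():
    try:
        sync_bridge()
    except RuntimeError as exc:
        print(type(exc).__name__, "-", exc)

asyncio.run(main())
# RuntimeError - asyncio.run() cannot be called from a running event loop
```

Swap `main()` for FastMCP's server loop and `sync_bridge()` for your decorated tool, and this is exactly the stack trace we saw in production.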

Your unit tests don’t catch this because they never invoke the tool through the protocol. They call the inner _query_api() function with await, in their own pytest-managed loop. The wrapper layer — the part FastMCP actually runs — is invisible to the tests.

The bug lives in the seam between layers. The seam is exactly the thing the unit tests don’t exercise.

The fix

Convert the tool to async def and await the inner work directly:

@mcp.tool()
async def get_account_status() -> dict:
    return await _query_api()

That’s it. FastMCP supports async def tools natively — when invoked, the loop awaits them, no extra ceremony. The sync wrapper was the entire problem.

If your tool genuinely needs to call sync code (e.g., it wraps sqlite3, imaplib, or a CPU-bound library), def is fine. FastMCP runs sync tools in a thread executor without breaking the loop. The bug class is specifically def + asyncio.run() in combination — using sync to drive async work via a nested loop. Don’t do that. If the work is async, the tool is async.

When not to make tools async

This is a place where convention can mislead you. If every tool in your server does HTTP work, every tool should be async def — and a project-level test asserting inspect.iscoroutinefunction(tool) for each registered tool is a cheap guardrail.

But if your server wraps sqlite3 (sync), imaplib (sync), or pure-Python data manipulation (sync), making the tool async def for stylistic consistency forces you into one of two suboptimal places: either asyncio.to_thread(...) to push the sync work off the loop (real but verbose), or pretending it’s async when it isn’t (which causes loop blocking under concurrent load).
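When you do need an async def tool to drive sync work, asyncio.to_thread (Python 3.9+) is the non-loop-breaking bridge. A minimal sketch with sqlite3 as the blocking library:

```python
import asyncio
import sqlite3

async def lookup(sql: str):
    # Run blocking sqlite3 work on a worker thread instead of nesting loops.
    def blocking():
        conn = sqlite3.connect(":memory:")
        try:
            return conn.execute(sql).fetchone()
        finally:
            conn.close()

    return await asyncio.to_thread(blocking)

print(asyncio.run(lookup("SELECT 1 + 1")))  # (2,)
```

Note the direction: an already-running loop delegates sync work outward to a thread. The broken pattern goes the other way — sync code trying to conjure a loop inward.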

Sync @mcp.tool() is correct when the work itself is sync. The lint we’ll discuss in a moment doesn’t ban sync tools — it bans the anti-pattern of sync wrapping async work via asyncio.run().

Why mocked tests gave us false confidence

The 42 tests we shipped exercised the inner layer:

# tests/test_reader.py
import pytest

@pytest.mark.asyncio
async def test_get_account_status(httpx_mock):
    httpx_mock.add_response(...)
    result = await _query_api(...)
    assert result["handle"] == "expected"

Note await _query_api(...). The test imports the inner async function and awaits it directly, in a pytest-asyncio-managed loop. The @mcp.tool() decorator is never invoked. The asyncio.run() inside the wrapper never runs.

To the unit tests, the codebase looked like a working async client with a thin wrapper around it. The wrapper was opaque. The bug was in the opaque part.

The general lesson: mocked tests prove your code parses responses correctly. They don’t prove your code runs under the runtime that will actually run it. For MCP servers, the runtime is FastMCP’s event loop, with its specific reentrancy rules. Until your tests exercise that runtime, you don’t know it works.

How to test the wrapper layer

Two complementary approaches.

Static check: assert tool shape. For projects where every tool does async work, write a test that introspects each registered tool and asserts it’s a coroutine function:

# tests/test_server_wrappers.py
import inspect
import pytest
from mcp_twitter import server

TOOL_NAMES = ["check_account_status", "list_mentions", "read_tweet", "search_recent"]


@pytest.mark.parametrize("name", TOOL_NAMES)
def test_tool_is_coroutine_function(name):
    func = getattr(server, name)
    assert inspect.iscoroutinefunction(func), (
        f"server.{name} is not async def. FastMCP runs tools inside its "
        f"event loop; sync tools doing async work via asyncio.run() will "
        f"raise RuntimeError on first invocation."
    )

This catches the regression without ever invoking a tool — no live network, no API cost. If someone “fixes” a future tool by making it sync + asyncio.run(), CI fails before the buggy code can ship.

AST check: ban the specific anti-pattern. Even better, walk the AST of server.py and reject any @mcp.tool()-decorated function that contains asyncio.run():

import ast

def calls_asyncio_run(node):
    for sub in ast.walk(node):
        if (isinstance(sub, ast.Call)
            and isinstance(sub.func, ast.Attribute)
            and sub.func.attr == "run"
            and isinstance(sub.func.value, ast.Name)
            and sub.func.value.id == "asyncio"):
            return True
    return False


def test_no_asyncio_run_inside_mcp_tool():
    with open("src/my_server/server.py") as f:
        tree = ast.parse(f.read())
    offenders = []
    for node in ast.walk(tree):
        if not isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            continue
        if not any(isinstance(d, ast.Call)
                   and isinstance(d.func, ast.Attribute)
                   and d.func.attr == "tool"
                   for d in node.decorator_list):
            continue
        if calls_asyncio_run(node):
            offenders.append(node.name)
    assert not offenders, f"@mcp.tool() with asyncio.run(): {offenders}"

This is more surgical than the coroutine check — it catches the specific bug class regardless of whether the tool is def or async def (defense in depth: nested asyncio.run() is wrong even inside an async function).
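One gap worth knowing about: the matcher above only recognises the `asyncio.run(...)` spelling. `from asyncio import run` followed by a bare `run(...)` slips through. A hedged extension — note the naive assumption that any bare name in `bare_names` is suspect, which can false-positive on an unrelated function also called `run`:

```python
import ast

def calls_asyncio_run(node, bare_names=("run",)):
    # Catch both `asyncio.run(...)` and a bare `run(...)` alias.
    # Caveat: the bare-name check does not verify the import actually
    # came from asyncio, so it may flag unrelated `run()` helpers.
    for sub in ast.walk(node):
        if not isinstance(sub, ast.Call):
            continue
        f = sub.func
        if (isinstance(f, ast.Attribute) and f.attr == "run"
                and isinstance(f.value, ast.Name) and f.value.id == "asyncio"):
            return True
        if isinstance(f, ast.Name) and f.id in bare_names:
            return True
    return False

src = "from asyncio import run\ndef tool():\n    return run(work())\n"
fn = next(n for n in ast.walk(ast.parse(src))
          if isinstance(n, ast.FunctionDef))
print(calls_asyncio_run(fn))  # True
```

Tracking imports precisely (resolving aliases from `ast.ImportFrom` nodes) is the fully correct version; for a lint that errs toward flagging, the naive form is usually acceptable.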

The cross-project lint

We open-sourced the AST check as a standalone tool that scans any directory of FastMCP servers. It’s at:

https://github.com/Alienbushman/mcpdone-samples/tree/master/mcp-guardrails

Usage:

# Scan all *server*.py under the current directory
python check_mcp_tool_async.py

# Scan specific files (pre-commit hook style)
python check_mcp_tool_async.py path/to/server.py

# Demo: catch the bug in the included fixture
python check_mcp_tool_async.py test_fixtures/buggy_server.py
# → exit code 1, points to the offending line

It’s MIT-licensed, dependency-free (Python stdlib only — ast does the heavy lifting), and works against any FastMCP project. Wire it as a pre-commit hook or CI step and the bug class can’t reach master.

The repo also contains test_fixtures/buggy_server.py — a minimal FastMCP-shaped file with both the broken def + asyncio.run() form and the fixed async def + await form, side by side. Useful as a copy-paste reference.

The broader principle: test where the bugs live

Wrapper-layer bugs are a specific instance of a more general problem: bugs live in the seams between layers, and tests that cover only the layers themselves can miss every seam-bug at once.

For an MCP server, the seams that matter are the @mcp.tool() registration, the protocol-level invocation, and the event loop the server runs your code on.

Each seam needs at least one test that exercises it. Not via mocks of the inner layer — through the runtime. For FastMCP specifically, that means at least one test per tool that goes through mcp.tool() registration and gets invoked the way the protocol invokes it. A live smoke test against a known-good input is sufficient; you don’t need fancy property-based testing.

A good rule of thumb we’ve adopted: for every tool, write one test that goes all the way through the wrapper. It costs maybe 30 lines of test code per server, and it catches the entire class of seam-bugs that mocked unit tests will miss.
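What "through the wrapper" means can be shown without a real FastMCP server. The sketch below is a toy stand-in — `tool` and `invoke` are hypothetical simplifications of what the SDK does, not its actual API — but the seam it exercises is the real one: the registered callable is invoked from inside a running loop, the way the server would invoke it:

```python
import asyncio
import inspect

REGISTRY = {}

def tool(func):
    # Toy stand-in for @mcp.tool(): register the callable as-is.
    REGISTRY[func.__name__] = func
    return func

async def _query_api():
    return {"handle": "expected"}

@tool
def broken_tool():
    return asyncio.run(_query_api())  # the anti-pattern under test

@tool
async def fixed_tool():
    return await _query_api()

async def invoke(name):
    # Mimic the server loop: await async tools, run sync tools inline.
    fn = REGISTRY[name]
    return await fn() if inspect.iscoroutinefunction(fn) else fn()

async def main():
    print(await invoke("fixed_tool"))
    try:
        await invoke("broken_tool")
    except RuntimeError as exc:
        print("broken_tool failed:", exc)

asyncio.run(main())
```

Calling `_query_api()` directly in a test passes for both tools; only invocation through the registry exposes the broken one. That asymmetry is the entire argument of this post.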

What we changed in our SOP

This bug shipped despite our internal SOP requiring high test coverage. The SOP didn’t distinguish “test coverage of the inner layer” from “test coverage of the wrapper layer,” and the internal review pass — which we always do before delivery — didn’t think to look at the seam. After this incident, we added a shared principle to the service-delivery SOP that applies to every MCP server we ship:

MCP wrapper-layer is tested independently of the inner logic. Unit tests against mocked clients prove the inner functions parse responses correctly — they do NOT prove that the FastMCP @mcp.tool() decorations actually run under the protocol. Wrapper-layer bugs (sync def calling asyncio.run() is the classic) pass every unit test and break at first real call. Every shipped MCP server must have (a) a test asserting all @mcp.tool() functions are correctly shaped — for tools that do async work, that means inspect.iscoroutinefunction returns True — and (b) a live smoke run that invokes each tool through the FastMCP wrapper at least once.

We now also run the cross-project lint hook at delivery time on every customer MCP. It takes 200ms to run across four MCP servers; the cost-benefit is absurdly favourable.

The thing I wish I’d read 6 days ago

If you’re shipping a FastMCP server right now, three concrete moves:

  1. Run check_mcp_tool_async.py against your server.py. If you have the bug, it tells you exactly which line and how to fix it. 30 seconds.
  2. Add at least one test per tool that goes through the FastMCP wrapper. Not mocks of the inner function — through the decorator, through the registration. Catches the entire seam-bug class.
  3. Read your existing tests and ask: are these proving the inner logic, or are they proving the runtime? If the answer is “the inner logic,” the runtime is your blind spot. The fix is one parametrized test per tool that asserts shape; it’s cheap.

The bug we shipped cost us about an hour to diagnose and fix once we noticed. That hour was the cheap part. The four days the bug sat live before we noticed it — we hit it on the first real MCP call, right after a public launch announcement — were the expensive part.

Don’t ship that bug.


The lint hook and fixtures are MIT-licensed at github.com/Alienbushman/mcpdone-samples/tree/master/mcp-guardrails. The full mcp-twitter source — including the tests/test_server_wrappers.py shape we ended up with — is part of an internal repository, but we’re happy to talk through the patterns if you’re stuck on a similar problem: hello@mcpdone.com.

If your team wants someone to build production-shape MCP servers without you having to learn this kind of bug class the hard way — that’s literally the $499 Build tier. Money-back if the code doesn’t run in a clean environment.

Want something similar for your team? See the Build tier — custom MCP servers, shipped in 5 days, fixed price.