Documentation Index
Fetch the complete documentation index at: https://docs.trodo.ai/docs/llms.txt
Use this file to discover all available pages before exploring further.
track_mcp (Python) / trackMcp (Node) is the dedicated entry point for tracing an MCP server. It writes one runless span per tools/call — no parent agent run, no session/sweeper bookkeeping. Requires SDK version >= 2.3.0.
Use this for MCP servers only. If you’re tracing an agent that owns the conversation (chat, planner, RAG pipeline), use wrapAgent. For websocket-pinned chats and scheduled jobs that bridge multiple HTTP requests, use startRun / endRun.

Why MCP is different
The MCP server proxies tool calls but never sees the user’s prompt or the LLM’s final answer — those live inside the client (Claude.ai, Cursor, ChatGPT, Claude Desktop). A traditional Trodo Run wrapping an MCP session would have empty input / output and no signal for clustering. And MCP has no clean session-end signal in either transport (HTTP or stdio), so Runs end up stuck in running forever unless you bolt on a sweeper.
track_mcp bypasses all of that: each tool call is a self-contained span row, queryable by agent_name='MCP', distinct_id, and conversation_id (the Mcp-Session-Id).
Quick start
The call returns the span_id; keep it if you want it for cross-system correlation, otherwise ignore the return value.
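A minimal Python sketch of the call shape. The `track_mcp` stub below only mirrors the documented signature so the snippet is self-contained; the real function lives in the Trodo SDK, and the keyword names are taken from the parameter list further down:

```python
import time
import uuid

# Stand-in for the real SDK function (assumption: the real track_mcp
# accepts these keywords; see the Parameters section).
def track_mcp(*, tool, distinct_id, input, output, error=None,
              duration_ms=0, session_id=None, attributes=None):
    # The real SDK POSTs the span here and returns its id.
    return str(uuid.uuid4())

def handle_tools_call(tool_name, arguments, mcp_session_id, user_email):
    started = time.monotonic()
    error = None
    result = None
    try:
        result = {"content": [{"type": "text", "text": "42"}]}  # run the tool
    finally:
        span_id = track_mcp(
            tool=tool_name,
            distinct_id=user_email,     # required: spans without it get a 400
            input=arguments,
            output=result,              # full payload, not a summary
            error=error,
            duration_ms=int((time.monotonic() - started) * 1000),
            session_id=mcp_session_id,  # the Mcp-Session-Id from the client
        )  # keep span_id for cross-system correlation, or ignore it
    return result
```

One call per tools/call, after the tool has produced (or failed to produce) its result, is the whole integration.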
What auto-fills
| Field | Default behaviour |
|---|---|
| span_id | UUID v4, returned by the function |
| session_id / conversation_id | What you pass; if omitted, a fresh UUID per call |
| agent_name | "MCP" (override via agent_name= / agentName: for custom tags) |
| kind | "tool" |
| name | "tool.&lt;tool&gt;" |
| started_at | now() - duration_ms |
| ended_at | now() |
| status | "error" if error is set, else "ok" |
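Read as code, the defaults compose like this. This is an illustrative stand-in, not the SDK's actual implementation; field names come from the table above:

```python
import time
import uuid

def build_span_row(tool, *, session_id=None, agent_name="MCP",
                   error=None, duration_ms=0):
    """Illustrative stand-in showing how the documented defaults compose."""
    now_ms = int(time.time() * 1000)
    return {
        "span_id": str(uuid.uuid4()),                        # UUID v4, returned to caller
        "conversation_id": session_id or str(uuid.uuid4()),  # fresh UUID if omitted
        "agent_name": agent_name,                            # "MCP" unless overridden
        "kind": "tool",
        "name": f"tool.{tool}",
        "started_at": now_ms - duration_ms,  # start is back-dated from now
        "ended_at": now_ms,
        "status": "error" if error is not None else "ok",
    }
```

Note the timestamps: the SDK never asks when the tool started, it subtracts duration_ms from the time of the call.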
Parameters

- tool: The tool name. Becomes name = "tool.&lt;tool&gt;".
- distinct_id (Python) / distinctId (Node): End-user attribution. Use the user’s email if your MCP auth flow gives you one (OAuth introspection typically does), else a stable user id. Runless spans without this are rejected by the backend with a 400.
- input: The tool’s arguments. Truncated at 64 KB on the server side.
- output: The tool’s full result payload. Pass the whole structured result; don’t pre-summarise.
- error: If set, the span is marked status="error" and this string is stored as error_message.
- duration_ms: Tool wall-clock duration. Defaults to 0 if omitted.
- session_id: The Mcp-Session-Id from the MCP client. Stored as conversation_id for grouping. If omitted, the SDK mints a fresh UUID per call.
- Client label: Which MCP client called you ("anthropic", "cursor", "chatgpt", etc.). Merged into attributes as mcp_client_label.
- attributes: Free-form extra attributes. Use this for filterable scalars like counts, status flags, or summary strings.
- agent_name (Python) / agentName (Node): Override only if you need a custom tag (e.g. "MCP_internal" for a separate internal MCP). Defaults to "MCP".

Querying the data
In the dashboard, MCP traffic is grouped by agent_name='MCP' with run_id IS NULL. The hosted endpoint:
Latency considerations
By default, track_mcp waits for the span POST to complete before returning. That’s ~50–200 ms of HTTP RTT added to your MCP response. If MCP latency matters more than guaranteed delivery, fire-and-forget:
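One common fire-and-forget shape in Python is a daemon thread around the blocking call. The `track_mcp` below is a stand-in that simulates the span POST so the sketch is self-contained:

```python
import threading
import time

results = []

def track_mcp(**span):
    # Stand-in for the blocking SDK call (~50-200 ms of HTTP RTT).
    time.sleep(0.05)
    results.append(span)

def track_mcp_nowait(**span):
    # Daemon thread: the tool response returns immediately while the span
    # POST finishes in the background. Trade-off: spans are lost if the
    # process exits before the thread completes.
    threading.Thread(target=track_mcp, kwargs=span, daemon=True).start()

start = time.monotonic()
track_mcp_nowait(tool="search", distinct_id="user@example.com")
elapsed = time.monotonic() - start  # returns well under the simulated RTT
```

This is the latency/delivery trade made explicit: the caller stops paying the RTT, and in exchange a crash mid-flight drops the span.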
Output capture rules
The same three rules from wrapAgent apply:
- Await the full result before serialising. Never write a streaming handle into output.
- Output is the FULL payload, not a metadata summary. Pass the entire ToolResult; the SDK truncates at 64 KB.
- Use attributes for filterable scalars: counts, status flags, the human summary string.
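The first and third rules in code. The streaming shape here is an assumption for illustration; the point is that output gets a drained, concrete value and attributes gets the scalars:

```python
import asyncio

async def stream_tool_result():
    # Stand-in for a tool that streams its answer in chunks.
    for chunk in ("The answer ", "is ", "42."):
        yield chunk

async def run_tool():
    # Right: drain the stream into a concrete value first...
    chunks = [c async for c in stream_tool_result()]
    full_text = "".join(chunks)
    # ...then the full value is what belongs in output. Writing the async
    # generator itself would store a useless handle, not the payload.
    return {
        "output": full_text,
        "attributes": {"chunk_count": len(chunks)},  # filterable scalar
    }

result = asyncio.run(run_tool())
```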
Raw-HTTP fallback
If you can’t take an SDK dependency, post the span directly via the runless-spans mechanism (spans-external, linked under Related below).

Common pitfalls
| Pitfall | Symptom | Fix |
|---|---|---|
| Forgot distinct_id / distinctId | SDK throws before posting | Always populate it. Email > stable id > opaque token hash. |
| Used wrapAgent or startRun for the MCP path | Run rows stuck in running forever; empty input/output | Switch to track_mcp / trackMcp. |
| Pre-summarised output to save space | Dashboard shows {"count": 5} instead of the actual data | Pass the full payload. The SDK truncates at 64 KB on its own. |
| Awaited span POST in a high-throughput MCP server and felt latency | Each tool call adds ~50–200 ms RTT | Fire-and-forget (see above). |
Related
- wrapAgent: for agents that own the conversation.
- startRun / endRun: for websocket-pinned chats and scheduled jobs.
- spans-external: runless spans from any source (the underlying mechanism).