Wrap your agent function, and every execution becomes a trace — a structured tree of timing, inputs, outputs, LLM calls, and tool invocations you can search, filter, and debug in the Trodo dashboard.
Documentation Index
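The "structured tree" a trace produces can be pictured as plain data: nested spans, each with a kind, timing, and optional inputs and outputs. The type and field names below are illustrative, not the Trodo SDK's actual schema:

```typescript
// Illustrative shape of a trace: a tree of spans with timing, inputs, outputs.
// These names are hypothetical, not the Trodo SDK's real types.
interface Span {
  name: string;
  kind: "agent" | "llm" | "tool";
  startMs: number;
  endMs: number;
  input?: unknown;
  output?: unknown;
  children: Span[];
}

// Walk the tree and total the time spent inside LLM calls.
function llmTimeMs(span: Span): number {
  const own = span.kind === "llm" ? span.endMs - span.startMs : 0;
  return own + span.children.reduce((sum, c) => sum + llmTimeMs(c), 0);
}

const trace: Span = {
  name: "support-agent", kind: "agent", startMs: 0, endMs: 1200,
  children: [
    { name: "plan", kind: "llm", startMs: 10, endMs: 410, children: [] },
    {
      name: "search", kind: "tool", startMs: 420, endMs: 900,
      children: [
        { name: "summarise", kind: "llm", startMs: 620, endMs: 890, children: [] },
      ],
    },
  ],
};

console.log(llmTimeMs(trace)); // 400 + 270 = 670
```

Once runs have this shape, questions like "how long did each step take?" or "where did the error occur?" are just tree walks, which is what the dashboard's search and filters do for you.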
Fetch the complete documentation index at: https://docs.trodo.ai/docs/llms.txt
Use this file to discover all available pages before exploring further.
Quickstart
Send your first trace in under 2 minutes.
How it works
You add three things to your code:
- Import the Trodo package for your stack.
- Initialise with your site ID.
- Wrap your agent function with wrapAgent — the root span that tracks everything agent-related under it.

The SDK is available for Node.js and Python.
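In Node.js, the three steps above might look like the following. The `init` signature and `wrapAgent` behaviour are assumptions based on the names on this page, so they are stubbed locally here to keep the sketch self-contained; in a real project you would import them from the Trodo package instead:

```typescript
// Hypothetical Trodo-style setup, stubbed locally for illustration.
// Step 1 in practice: import { init, wrapAgent } from the Trodo package.
let siteId = "";

// Step 2: initialise with your site ID.
function init(opts: { siteId: string }): void {
  siteId = opts.siteId;
}

// Step 3: wrapAgent returns a wrapped function that records one root span per run.
function wrapAgent<A, R>(name: string, fn: (input: A) => R): (input: A) => R {
  return (input: A): R => {
    const start = Date.now();
    try {
      return fn(input);
    } finally {
      // A real SDK would ship a span to the backend; we just log the timing.
      console.log(`[${siteId}] ${name} finished in ${Date.now() - start}ms`);
    }
  };
}

init({ siteId: "my-site" });
const agent = wrapAgent("support-agent", (q: string) => `answered: ${q}`);
console.log(agent("reset my password")); // → "answered: reset my password"
```

The key design point is that the wrapper is transparent: the wrapped agent takes and returns exactly what the original function does, so adding tracing does not change any call sites.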
What you can do with Trodo
- Trace agents — see the full execution tree for every run: which LLM calls were made, what tools were invoked, how long each step took, and where errors occurred.
- Debug failures — click into any run to inspect inputs, outputs, and errors at every level of the span tree. Filter by user, session, environment, or custom attributes.
- Monitor production — set up monitors on metrics like latency, error rate, or tool-call success rate. When something breaks, Trodo creates an incident with root-cause analysis and notifies you.
- Connect your IDE — use the Trodo MCP server to query traces directly from Cursor, Claude Desktop, or Claude Code without switching to the dashboard.
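Filtering by user, session, or environment (as in the "Debug failures" point above) amounts to querying structured attributes attached to each run. A minimal sketch, with invented record and field names:

```typescript
// Hypothetical run records carrying the attributes mentioned above.
interface Run {
  id: string;
  error: boolean;
  attributes: { userId: string; sessionId: string; env: string };
}

const runs: Run[] = [
  { id: "r1", error: true,  attributes: { userId: "u1", sessionId: "s1", env: "prod" } },
  { id: "r2", error: false, attributes: { userId: "u1", sessionId: "s2", env: "prod" } },
  { id: "r3", error: true,  attributes: { userId: "u2", sessionId: "s3", env: "staging" } },
];

// "Failed production runs for user u1" — the dashboard filter, expressed in code.
const failedProdForU1 = runs.filter(
  (r) => r.error && r.attributes.env === "prod" && r.attributes.userId === "u1"
);

console.log(failedProdForU1.map((r) => r.id)); // ["r1"]
```

This is also why consistent attributes matter: filters can only slice on what you attach to each run.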
Choose your path
My framework has built-in OTel support
Start here — child spans emit automatically with no extra code.
I want per-call LLM visibility
Start here — get child spans with prompts, completions, and token counts for every LLM call.
I need full control over every span
Start here — use typed helpers or the raw OpenTelemetry API to instrument anything.
Just want to copy-paste and go?
Browse Recipes — complete working examples for common patterns.