What it is
Agent observability is how you understand what your AI agents actually do at runtime. When a user triggers an agent, a lot happens beneath the surface: prompts are assembled, LLM calls are made, tools are invoked, results are processed, and a response is returned. Any of those steps can be slow, expensive, or wrong. Without visibility into that execution, debugging is guesswork: you see the outcome but not the path. Trodo gives you the full path, for every run, in a searchable trace.

What a trace looks like
Every agent execution becomes a structured tree: the run is the root, and each LLM call, tool invocation, and retrieval step is recorded as a child span (see the sketch below).
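As an illustration only, a hypothetical customer-support run might produce a tree like this (span names, timings, and costs are invented for the example):

```
run: support-agent (3.2s, $0.0041)
├── llm: plan-response     gpt-4o-mini   1.1s    812 tokens
├── tool: lookup_order                   0.3s
├── retrieval: kb-search   top_k=5       0.6s
└── llm: draft-reply       gpt-4o-mini   1.2s  1,204 tokens
```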
What you get automatically

Wrap your agent once and everything below is captured with no further instrumentation (a minimal sketch of the wrapping follows the table):

| What | How |
|---|---|
| LLM calls | Model, prompt, completion, input/output tokens, cost, temperature |
| Tool invocations | Tool name, arguments, return value |
| Retrieval steps | Query, results, top-K |
| Timing | Duration of every span and the full run |
| Errors | Stack trace, error type, which span threw |
| Token rollups | Total tokens in/out and total cost, summed across all LLM spans |
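In Node, the wrapping might look something like the sketch below. The package name, option names, and exact `wrapAgent` signature are assumptions; see the Tracing reference for the real API.

```typescript
import OpenAI from "openai";
import { wrapAgent } from "@trodo/node"; // hypothetical package name

const openai = new OpenAI();

// wrapAgent starts a run (the trace root). Auto-instrumentation then records
// every OpenAI call made inside the wrapped function as a child LLM span,
// with prompt, completion, tokens, cost, and timing attached automatically.
const answerQuestion = wrapAgent(
  { name: "support-agent", distinctId: "user-123" }, // hypothetical options
  async (question: string) => {
    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: question }],
    });
    return completion.choices[0].message.content;
  },
);

await answerQuestion("Where is my order?");
```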
What you can answer
**Why did this run fail?** Click into any failed run and walk the span tree: see which LLM call threw, what the prompt was, and what the model returned before the error.

**Why is this agent slow?** The waterfall view shows every span stacked in time. The longest bar is your bottleneck: a slow retrieval step, a chained LLM call, or a tool that times out.

**How much does this agent cost to run?** Total token spend and USD cost are rolled up on every run. Filter by agent name, user, date range, or model to see cost trends.

**Are users happy with the output?** Attach explicit feedback to runs after the fact (thumbs up/down, a rating, or freeform notes) and correlate it with run characteristics; a sketch follows below.

**Which users are hitting errors?** Filter runs by `status = error` and `distinctId`: every run is tied to the user who triggered it, so you can open their full event timeline alongside the trace.
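As an illustration of attaching feedback to a run after the fact, a client call might look like the following. The `Trodo` client, method name, payload shape, and environment variable are all assumptions, not the documented API:

```typescript
import { Trodo } from "@trodo/node"; // hypothetical package and export

const trodo = new Trodo({ apiKey: process.env.TRODO_API_KEY });

// Attach explicit feedback to a finished run. The runId would come from
// the traced execution; field names here are assumed for illustration.
await trodo.feedback.create({
  runId: "run_abc123",   // hypothetical run identifier
  rating: "thumbs_down", // thumbs up/down, numeric rating, or notes
  comment: "Answer cited the wrong order number.",
});
```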
Supported frameworks
Auto-instrumentation works out of the box with:

- OpenAI SDK (Node + Python)
- Anthropic SDK (Node + Python)
- Vercel AI SDK
- LangChain (JS + Python)
- LlamaIndex
- Amazon Bedrock
- Google Gemini / Vertex AI
- Mistral
- Cohere
- Raw HTTP via `fetch`/`httpx`/`requests`
Multi-agent and cross-service tracing
Trodo propagates run context automatically within a process (via `AsyncLocalStorage` in Node and `contextvars` in Python). For cross-service scenarios, where agent A calls agent B over HTTP, attach `propagationHeaders()` to the outbound request; the receiving service's middleware stitches the spans together into one trace.
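On the calling side, that might look like the sketch below. `propagationHeaders()` is named above, but its import path and the exact headers it emits are assumptions:

```typescript
import { propagationHeaders } from "@trodo/node"; // hypothetical import path

// Agent A: forward the active run's context to agent B over HTTP.
// propagationHeaders() returns trace headers for the current run; agent B's
// Trodo middleware reads them and attaches its spans to the same trace.
const response = await fetch("https://agent-b.internal/run", {
  method: "POST",
  headers: {
    "content-type": "application/json",
    ...propagationHeaders(),
  },
  body: JSON.stringify({ task: "summarize the ticket" }),
});
```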
Next steps
- Quickstart: send your first trace in under 2 minutes.
- Concepts: runs, spans, kinds, propagation; the full mental model.
- Integrations: per-framework setup and auto-capture tables.
- Tracing reference: every option on `wrapAgent` and the span helpers.