What it is

Agent observability is how you understand what your AI agents actually do at runtime. When a user triggers an agent, a lot happens beneath the surface: prompts are assembled, LLM calls are made, tools are invoked, results are processed, and a response is returned. Any of those steps can be slow, expensive, or wrong. Without visibility into that execution, debugging is guesswork — you see the outcome but not the path. Trodo gives you the full path, for every run, in a searchable trace.

What a trace looks like

Every agent execution becomes a structured tree:
run: support-agent                                     kind: run
├─ llm openai.chat.completions                         kind: llm
│   prompt · completion · tokens · cost · model
├─ tool lookup_order                                   kind: tool
│   input · output · toolName
├─ retrieval vector-search                             kind: retrieval
│   query · topK · results
└─ llm openai.chat.completions                         kind: llm
    second call in the same run
The root is the run — one execution of your agent from start to finish. Everything inside it — LLM calls, tool invocations, retrieval steps — is a child span with its own timing, inputs, outputs, and status.
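
To make the tree concrete, here is one way to picture a span as data. This is a sketch for intuition only; the exact field names are assumptions, not Trodo's wire format.

// Illustrative shape only; field names are assumptions, not Trodo's actual schema.
interface Span {
  id: string;
  parentId: string | null;                     // null for the root run span
  kind: 'run' | 'llm' | 'tool' | 'retrieval';
  name: string;                                // e.g. 'openai.chat.completions', 'lookup_order'
  input?: unknown;                             // prompt, tool arguments, or query
  output?: unknown;                            // completion, return value, or results
  startedAt: number;                           // epoch ms
  endedAt?: number;                            // duration = endedAt - startedAt
  status: 'ok' | 'error';
}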

What you get automatically

Wrap your agent once:
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
// wrapAgent comes from the Trodo SDK; question, userId, and threadId are your own values.

const result = await wrapAgent('support-bot', async (run) => {
  run.setInput({ question });                       // recorded as the run's input
  const answer = await generateText({ model: openai('gpt-4o'), prompt: question });
  run.setOutput({ answer: answer.text });           // recorded as the run's output
  return answer.text;
}, { distinctId: userId, conversationId: threadId }); // ties the run to a user and conversation
And every execution captures automatically:
What               How
LLM calls          Model, prompt, completion, input/output tokens, cost, temperature
Tool invocations   Tool name, arguments, return value
Retrieval steps    Query, results, top-K
Timing             Duration of every span and the full run
Errors             Stack trace, error type, which span threw
Token rollups      Total tokens in/out and total cost, summed across all LLM spans
For supported providers — OpenAI, Anthropic, LangChain, LlamaIndex, Vercel AI SDK, Bedrock, Cohere, Gemini, Mistral — auto-instrumentation captures child spans with zero extra code.
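
To make "zero extra code" concrete, here is a hedged sketch using the Vercel AI SDK (v4-style tool helper). Run inside the wrapAgent callback above, the model call and the tool execution should each surface as child spans; fetchOrder and the tool definition are illustrative assumptions, not part of Trodo's API.

import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Called inside the wrapAgent callback above; `question` and `fetchOrder`
// stand in for your own values and data access.
const result = await generateText({
  model: openai('gpt-4o'),
  prompt: question,
  maxSteps: 2, // let the model call the tool, then answer
  tools: {
    lookup_order: tool({
      description: 'Look up an order by id',
      parameters: z.object({ orderId: z.string() }),
      execute: async ({ orderId }) => fetchOrder(orderId),
    }),
  },
});
// Trodo's auto-instrumentation should record each model call as an llm span
// and the lookup_order execution as a tool span (name, arguments, return
// value), with no Trodo-specific code in this snippet.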

What you can answer

Why did this run fail? Click into any failed run and walk the span tree. See which LLM call threw, what the prompt was, and what the model returned before the error.

Why is this agent slow? The waterfall view shows every span stacked in time. The long bar is your bottleneck: a slow retrieval step, a chained LLM call, a tool that times out.

How much does this agent cost to run? Total token spend and USD cost are rolled up on every run. Filter by agent name, user, date range, or model to see cost trends.

Are users happy with the output? Attach explicit feedback to runs after the fact (thumbs up/down, a rating, or freeform notes) and correlate it with run characteristics; a sketch of what that call might look like follows below.

Which users are hitting errors? Filter runs by status = error and distinctId. Every run is tied to the user who triggered it, so you can open their full event timeline alongside the trace.
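
If you want to picture the feedback call in code, here is a loudly hypothetical sketch: the trodo client, captureFeedback, and every field name are placeholders, not a confirmed Trodo API; check the tracing reference for the real helper.

// Hypothetical placeholders throughout: neither `trodo` nor `captureFeedback`
// is a documented Trodo API; only the concept (feedback attached to a run) is.
declare const trodo: {
  captureFeedback(f: { runId: string; thumbs?: 'up' | 'down'; rating?: number; comment?: string }): Promise<void>;
};

await trodo.captureFeedback({
  runId,                                   // the run to annotate, recorded earlier
  thumbs: 'down',
  comment: 'Cited the wrong order number',
});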

Supported frameworks

Auto-instrumentation works out of the box with:
  • OpenAI SDK (Node + Python)
  • Anthropic SDK (Node + Python)
  • Vercel AI SDK
  • LangChain (JS + Python)
  • LlamaIndex
  • Amazon Bedrock
  • Google Gemini / Vertex AI
  • Mistral
  • Cohere
  • Raw HTTP via fetch / httpx / requests

Multi-agent and cross-service tracing

Trodo propagates run context automatically within a process (via AsyncLocalStorage in Node, contextvars in Python). For cross-service scenarios — agent A calls agent B over HTTP — pass propagationHeaders() on the outbound request and the receiving service’s middleware stitches the spans together into one trace.
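
For example, an outbound call from agent A could look like the sketch below. Only propagationHeaders() comes from the paragraph above; the URL, the request body, and the assumption that it returns a plain header object are illustrative.

const res = await fetch('https://agent-b.internal/run', {
  method: 'POST',
  headers: {
    'content-type': 'application/json',
    ...propagationHeaders(), // carries the current run/span context (assumed to return a header object)
  },
  body: JSON.stringify({ task: 'summarize-ticket' }),
});
// Agent B's Trodo middleware reads those headers and parents its spans
// under the same trace as agent A's run.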

Next steps

Quickstart

Send your first trace in under 2 minutes.

Concepts

Runs, spans, kinds, propagation — the full mental model.

Integrations

Per-framework setup and auto-capture tables.

Tracing reference

Every option on wrapAgent and the span helpers.