Documentation Index

Fetch the complete documentation index at: https://docs.trodo.ai/docs/llms.txt

Use this file to discover all available pages before exploring further.

Every call to wrapAgent becomes a run in your dashboard. Every LLM call, tool invocation, retrieval, and nested step inside that run becomes a span. Most spans are captured for free from the frameworks you already use — OpenAI, Anthropic, LangChain, LlamaIndex, Bedrock, Cohere, Gemini, Vertex, Mistral, Vercel AI SDK. You only write tracing code when you want something the auto-instrument doesn’t know about (a custom tool, a DB query, a raw fetch to an LLM).

The three primitives

Primitive — Page

wrapAgent — records one run. Page: wrapAgent
withSpan — records a nested step inside the run. Page: Spans
joinRun — emits spans from code outside the wrapAgent callback (worker, queue, separate service). Page: Spans outside wrapAgent
Everything else — helpers (tool, llm, trace, retrieval), trackLlmCall, feedback, custom attributes, error semantics, token extraction — is a thin wrapper or sugar over those three. See Patterns.
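The relationship between the three primitives can be sketched with a tiny in-memory stand-in. The stub below is NOT trodo-node — it just pushes records to an array where the real SDK would flush to https://sdkapi.trodo.ai — but the call shapes (wrapAgent opens a run, withSpan nests inside it, joinRun re-attaches outside work) follow the table above; all record fields are illustrative.

```javascript
// In-memory stub, for illustration only. The real trodo-node SDK is async
// and ships data over the network; this keeps everything local and sync.
const spans = [];
const trodo = {
  currentRunId: null,
  wrapAgent(name, fn) {
    // open a run: everything recorded inside fn() shares this run_id
    trodo.currentRunId = `run_${spans.length}`;
    spans.push({ type: 'run', name, runId: trodo.currentRunId });
    try { return fn(); } finally { trodo.currentRunId = null; }
  },
  withSpan(name, fn) {
    // a nested step inside whatever run is currently open
    spans.push({ type: 'span', name, runId: trodo.currentRunId });
    return fn();
  },
  joinRun(runId, fn) {
    // attach work running outside the wrapAgent callback to an existing run
    trodo.currentRunId = runId;
    try { return fn(); } finally { trodo.currentRunId = null; }
  },
};

const result = trodo.wrapAgent('support-agent', () =>
  trodo.withSpan('lookup-order', () => ({ orderId: 42 }))
);

// later, e.g. from a worker process, join the same run by its id
trodo.joinRun(spans[0].runId, () => trodo.withSpan('send-email', () => null));

console.log(spans.map(s => `${s.type}:${s.name}:${s.runId}`));
// all three entries share the same run id
```

Note that joinRun needs the run id handed to it explicitly — that is exactly the hand-off the Spans outside wrapAgent page covers.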

Pipeline

wrapAgent(name, fn)
  ├─ generates run_id
  ├─ opens AsyncLocalStorage / contextvars context
  ├─ runs fn()
  │    ├─ OTel span from OpenAI/Anthropic/…   →  auto-captured llm span
  │    ├─ trodo.withSpan(...)                 →  manual span you add
  │    └─ fetch(url, { headers: propagationHeaders() })  →  span lands on the same run
  └─ flushes run + spans to https://sdkapi.trodo.ai

Once per process

import trodo from 'trodo-node';
trodo.init({ siteId: process.env.TRODO_SITE_ID });
After that, every wrapAgent anywhere in the process produces a run. Call trodo.init once at boot; do not re-initialize per request.