Documentation Index

Fetch the complete documentation index at: https://docs.trodo.ai/docs/llms.txt

Use this file to discover all available pages before exploring further.

Wrap your agent function, and every execution becomes a trace — a structured tree of timing, inputs, outputs, LLM calls, and tool invocations you can search, filter, and debug in the Trodo dashboard.

Quickstart

Send your first trace in under 2 minutes.

How it works

You add 3 things to your code:
  1. Import the Trodo package for your stack.
  2. Initialise with your site ID.
  3. Wrap your agent function in wrapAgent, the root span that tracks everything agent-related under it.
Everything inside the wrapped function — LLM calls, tool executions, retrieval steps — automatically nests as child spans under that root. The wrapper captures your return value as the run output and closes the span when the function returns.
import trodo, { wrapAgent } from 'trodo-node';
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

trodo.init({ siteId: process.env.TRODO_SITE_ID });

const myAgent = (userId: string, threadId: string, input: string) =>
  wrapAgent('support-bot', async (run) => {
    run.setInput({ question: input });
    const result = await generateText({ model: openai('gpt-4o'), prompt: input });
    run.setOutput({ answer: result.text });
    return result.text;
  }, { distinctId: userId, conversationId: threadId });

const { result, runId } = await myAgent('user-42', 'thread-abc', 'What is the capital of France?');
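To make the mechanics concrete, here is a simplified, stand-alone model of what a wrapper like wrapAgent does conceptually: time the function, record input and output, and hand back the result plus a run ID. This sketch is illustrative only — the type and helper names are made up, and the real SDK emits OpenTelemetry spans rather than pushing to an in-memory array.

```typescript
// Conceptual sketch only — NOT the Trodo SDK. All names below are illustrative.
type RunRecord = {
  runId: string;
  name: string;
  input?: unknown;
  output?: unknown;
  durationMs: number;
};

const runs: RunRecord[] = []; // the real SDK exports spans instead of storing them
let nextRun = 1;

async function sketchWrapAgent<T>(
  name: string,
  fn: (run: {
    setInput: (v: unknown) => void;
    setOutput: (v: unknown) => void;
  }) => Promise<T>,
): Promise<{ result: T; runId: string }> {
  const record: RunRecord = { runId: `run_${nextRun++}`, name, durationMs: 0 };
  const start = Date.now();
  const run = {
    setInput: (v: unknown) => { record.input = v; },
    setOutput: (v: unknown) => { record.output = v; },
  };
  try {
    const result = await fn(run);
    // The return value doubles as the run output if none was set explicitly.
    record.output ??= result;
    return { result, runId: record.runId };
  } finally {
    // The span closes when the function returns (or throws).
    record.durationMs = Date.now() - start;
    runs.push(record);
  }
}
```

This is why the quickstart destructures `{ result, runId }`: the wrapper passes your return value through while also recording it as the run output.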

What you can do with Trodo

  • Trace agents — see the full execution tree for every run: which LLM calls were made, what tools were invoked, how long each step took, and where errors occurred.
  • Debug failures — click into any run to inspect inputs, outputs, and errors at every level of the span tree. Filter by user, session, environment, or custom attributes.
  • Monitor production — set up monitors on metrics like latency, error rate, or tool-call success rate. When something breaks, Trodo creates an incident with root-cause analysis and notifies you.
  • Connect your IDE — use the Trodo MCP server to query traces directly from Cursor, Claude Desktop, or Claude Code without switching to the dashboard.

Choose your path

My framework has built-in OTel support

Start here — child spans are emitted automatically, with no extra code.

I want per-call LLM visibility

Start here — get child spans with prompts, completions, and token counts for every LLM call.
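The idea behind per-call visibility can be sketched in a few lines: intercept each LLM call, and record the prompt, completion, token usage, and latency as a child span. Everything below is a hypothetical stand-in — `tracedGenerate` and the span shape are invented for illustration; the real integrations hook into your LLM client for you.

```typescript
// Illustrative sketch — not the Trodo integration API.
type LlmSpan = {
  prompt: string;
  completion: string;
  promptTokens: number;
  completionTokens: number;
  ms: number;
};

const llmSpans: LlmSpan[] = []; // a real integration would emit spans, not store them

// Wraps any generate(prompt) -> { text, usage } function and records one span per call.
async function tracedGenerate(
  generate: (prompt: string) => Promise<{
    text: string;
    usage: { promptTokens: number; completionTokens: number };
  }>,
  prompt: string,
): Promise<string> {
  const start = Date.now();
  const res = await generate(prompt);
  llmSpans.push({
    prompt,
    completion: res.text,
    promptTokens: res.usage.promptTokens,
    completionTokens: res.usage.completionTokens,
    ms: Date.now() - start,
  });
  return res.text;
}
```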

I need full control over every span

Start here — use typed helpers or the raw OpenTelemetry API to instrument anything.
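Manual instrumentation rests on one principle: whichever span is active becomes the parent of any span started inside it. A minimal sketch of that nesting, using an invented `withSpan` helper and a plain stack (the raw OpenTelemetry API applies the same active-span idea, with real context propagation and exporters behind it):

```typescript
// Conceptual sketch of active-span nesting — not an OpenTelemetry implementation.
type Span = { name: string; children: Span[] };

// The bottom of the stack is the root; the top is the currently active span.
const stack: Span[] = [{ name: 'root', children: [] }];

function withSpan<T>(name: string, fn: () => T): T {
  const span: Span = { name, children: [] };
  stack[stack.length - 1].children.push(span); // nest under the active span
  stack.push(span); // this span is now active for anything fn() starts
  try {
    return fn();
  } finally {
    stack.pop(); // closing the span restores the previous active span
  }
}
```

For example, calling `withSpan('retrieve-docs', () => withSpan('embed-query', () => ...))` yields a tree where embed-query sits under retrieve-docs, which sits under the root — the same shape you see in the dashboard.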

Just want to copy-paste and go?

Browse Recipes — complete working examples for common patterns.