

Trodo accepts standard OTLP traces directly. If your app already emits OpenTelemetry (@vercel/otel, @opentelemetry/sdk-node, dd-trace forwarders, FastAPI auto-instrumentation, …), you can ship traces to Trodo with two env vars and zero Trodo SDK code. The Bearer token is your site_id — same value you’d pass to trodo.init({ siteId }). Get it from Integration Manager.

Pick your path

Path A — Zero-SDK

NextJS + Vercel AI SDK with @vercel/otel. Set two env vars, get full traces. No Trodo SDK install required.

Path B — Coexistence

Already shipping to Datadog/Jaeger/Honeycomb? Add Trodo as an additional destination via registerOTel({ mode: 'otlp' }).

Endpoint

POST https://sdkapi.trodo.ai/v1/traces
Both application/json and application/x-protobuf bodies are accepted. The legacy alias /api/sdk/otel/v1/traces still works for anything already pointed at it.

Auth (pick either):
  • Authorization: Bearer <site_id> (preferred, OTel-canonical)
  • X-Trodo-Site-Id: <site_id> (legacy)

Per-site rate limit: 600 req/min. A 512-span batch every 100 ms works out to exactly 600 req/min, right at the cap; the OTel batch processor's default 5-second flush interval stays well below it.
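To sanity-check auth and connectivity before wiring a full pipeline, you can hand-POST a minimal OTLP/JSON payload. A sketch, not part of any SDK: the service name and the trace/span IDs are placeholders.

// smoke-test.ts — one hand-built span to verify the endpoint and Bearer auth
const start = BigInt(Date.now()) * 1_000_000n; // OTLP wants Unix nanos
const end = start + 5_000_000n;

const res = await fetch('https://sdkapi.trodo.ai/v1/traces', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.TRODO_SITE_ID}`,
  },
  body: JSON.stringify({
    resourceSpans: [{
      resource: { attributes: [{ key: 'service.name', value: { stringValue: 'smoke-test' } }] },
      scopeSpans: [{
        scope: { name: 'smoke-test' },
        spans: [{
          traceId: '0123456789abcdef0123456789abcdef', // 16-byte hex, placeholder
          spanId: '0123456789abcdef',                  // 8-byte hex, placeholder
          name: 'ping',
          kind: 1, // SPAN_KIND_INTERNAL
          startTimeUnixNano: start.toString(),
          endTimeUnixNano: end.toString(),
        }],
      }],
    }],
  }),
});
console.log(res.status); // 2xx = accepted; 401 = the Bearer token isn't a valid site_id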

Path A — env-var only

For NextJS + Vercel AI SDK projects with @vercel/otel. No Trodo SDK install.

1. Install @vercel/otel

npm install @vercel/otel @opentelemetry/api

2. Wire instrumentation.ts

Create it at the project root (or in src/). NextJS picks it up automatically.
// instrumentation.ts
import { registerOTel } from '@vercel/otel';

export function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    registerOTel({ serviceName: 'support-bot' });
  }
}

3. Set the OTLP env vars

# .env.local
TRODO_SITE_ID=your-site-id-here
OTEL_EXPORTER_OTLP_ENDPOINT=https://sdkapi.trodo.ai
# NextJS expands ${TRODO_SITE_ID} in .env files via dotenv-expand
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer ${TRODO_SITE_ID}
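If you want to confirm the expansion actually happened, a temporary log at boot (e.g. inside register()) is enough:

// temporary sanity check, remove once verified
console.log(process.env.OTEL_EXPORTER_OTLP_HEADERS);
// expect "Authorization=Bearer your-site-id-here", not a literal "${TRODO_SITE_ID}"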

4. Pass metadata on every AI call

import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// `messages`, `session`, and `chatId` come from your request/session context
const result = await generateText({
  model: openai('gpt-4o'),
  messages,
  tools: {
    search_docs: tool({
      description: 'Search the help docs',
      parameters: z.object({ query: z.string() }),
      execute: async ({ query }) => ({ results: [] }),
    }),
  },
  experimental_telemetry: {
    isEnabled: true,
    metadata: {
      userId: session.user.id,        // → distinct_id
      sessionId: chatId,              // → conversation_id
      agentName: 'support_chat',      // → agent_name
      // Anything else flows into run.metadata JSONB
      experimentId: 'v3-prompt',
      tier: 'enterprise',
    },
  },
});
experimental_telemetry.isEnabled: true is required on every Vercel AI call — without it, no spans are emitted.

Attribute mapping

Vercel AI attribute → Trodo field:
  • ai.telemetry.metadata.userId → run.distinct_id
  • ai.telemetry.metadata.sessionId → run.conversation_id
  • ai.telemetry.metadata.agentName → run.agent_name
  • ai.telemetry.metadata.<custom> → run.metadata.<custom>
  • ai.usage.promptTokens / completionTokens → span.input_tokens / span.output_tokens
  • ai.model.id / ai.model.provider → span.model / span.provider
  • ai.prompt / ai.response.text → span.input / span.output
  • Span name ai.toolCall → span.kind = 'tool', with tool_name taken from ai.toolCall.name
  • Span name ai.generateText / ai.streamText / ai.embed / … → span.kind = 'llm'
For anyone setting OTel resource attributes directly, the equivalent trodo.* keys also work:
Resource attribute → Trodo field:
  • trodo.distinct_id → run.distinct_id
  • trodo.conversation_id → run.conversation_id
  • trodo.agent_name → run.agent_name
  • trodo.metadata.<custom> → run.metadata.<custom>
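As an illustration, here is a minimal plain @opentelemetry/sdk-node bootstrap carrying trodo.* resource attributes (no @vercel/otel, no trodo-node). It's a sketch: the attribute values are placeholders, and on @opentelemetry/resources 2.x you'd build the resource with resourceFromAttributes instead of the Resource constructor. Note that resource attributes are per-process, so a per-user key like trodo.distinct_id only fits here when one process serves one user.

// otel-bootstrap.ts
import { NodeSDK } from '@opentelemetry/sdk-node';
import { Resource } from '@opentelemetry/resources';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-proto';

const sdk = new NodeSDK({
  resource: new Resource({
    'service.name': 'support-bot',
    'trodo.agent_name': 'support_chat',          // → run.agent_name
    'trodo.metadata.experimentId': 'v3-prompt',  // → run.metadata.experimentId
  }),
  traceExporter: new OTLPTraceExporter({
    url: 'https://sdkapi.trodo.ai/v1/traces',
    headers: { Authorization: `Bearer ${process.env.TRODO_SITE_ID}` },
  }),
});

sdk.start();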

Path B — existing OTel pipeline

Already shipping traces to Datadog, Jaeger, Honeycomb, or any other OTel destination? Add Trodo as a side-by-side destination — no rip-and-replace. Requires trodo-node ≥ 2.4.0 (or trodo-python ≥ 2.4.0).
npm install trodo-node

# OTel peer dependencies required by mode: 'otlp'
npm install @opentelemetry/api @opentelemetry/sdk-node \
  @opentelemetry/sdk-trace-base @opentelemetry/exporter-trace-otlp-proto \
  @opentelemetry/resources
// instrumentation.ts (or app bootstrap, AFTER your existing OTel provider is registered)
import { registerOTel } from 'trodo-node';

registerOTel({
  siteId: process.env.TRODO_SITE_ID!,
  mode: 'otlp',           // attach Trodo OTLP exporter to existing provider
  serviceName: 'support-bot',
});
In mode: 'otlp':
  • wrapAgent / withSpan / tool / trace / llm / retrieval route through the OTel tracer, so auto-instrumented children (Anthropic, OpenAI, LangChain, Vercel AI) join the same OTel trace via context propagation. The backend OTLP controller groups the whole tree into one Trodo run.
  • trackMcp, feedback, startRun / endRun / joinRun continue to use their own HTTP API endpoints — they have nothing to gain from OTel routing.
If you want to keep wrapAgent going through Trodo’s HTTP API (default behavior, identical to 2.3.x), don’t change anything — mode: 'trodo' is the default and existing init() callers see zero behavior change.

What “one prompt = one run” looks like

OTel auto-creates a traceId per inbound request. Everything inside that request — generateText, the tool calls it spawns, sub-generateText calls inside tools, retrievals — shares the same traceId. Trodo groups by traceId, picks the parentless span as the run, and links the rest as children via parent_span_id.
trace abc123
└── ai.generateText                  ← run (parentless)
    ├── ai.toolCall: search_docs     ← span, parent = generateText
    │   └── ai.generateText          ← span, parent = toolCall
    ├── ai.toolCall: lookup_user
    └── ai.toolCall: book_meeting
All five become rows in one run. The parent_span_id chain is preserved exactly. Multi-turn chats (each user message is its own request) get their own traceId and become separate runs, linked by conversation_id — same as wrapAgent semantics.
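For reference, a sketch of the code shape that produces the nested ai.generateText in the tree above: a tool whose execute() itself calls generateText. The model choice and prompt here are illustrative, not prescriptive.

import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const search_docs = tool({
  description: 'Search the help docs',
  parameters: z.object({ query: z.string() }),
  execute: async ({ query }) => {
    // This inner call becomes the ai.generateText span nested under ai.toolCall
    const summary = await generateText({
      model: openai('gpt-4o-mini'),
      prompt: `Summarize the docs matching: ${query}`,
      experimental_telemetry: { isEnabled: true }, // still required on inner calls
    });
    return { results: [summary.text] };
  },
});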

Custom metadata

Anything in ai.telemetry.metadata.* (Vercel AI) or trodo.metadata.* (any OTel SDK) beyond the well-known three keys flows into run.metadata JSONB — full parity with wrapAgent({ metadata }).
metadata: {
  userId: 'user-42',           // first-class
  sessionId: 'chat-abc',       // first-class
  agentName: 'support_chat',   // first-class
  // Custom — visible in dashboard run drawer, queryable via filters
  experimentId: 'v3-prompt',
  tier: 'enterprise',
  featureFlag: 'new-tools-enabled',
}

Coexistence with Datadog / Jaeger / Honeycomb

mode: 'otlp' attaches the Trodo OTLP exporter to the existing TracerProvider rather than replacing it. Both your current backend AND Trodo receive every span. No data loss in your existing pipeline; no rip-and-replace.
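A sketch of that ordering, assuming your existing pipeline is a plain NodeSDK (the collector URL is a placeholder for whatever backend you ship to today):

// bootstrap.ts — existing pipeline first, Trodo attached second
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-proto';
import { registerOTel } from 'trodo-node';

const sdk = new NodeSDK({
  serviceName: 'support-bot',
  traceExporter: new OTLPTraceExporter({
    url: 'https://otel-collector.internal:4318/v1/traces', // placeholder: your current backend
  }),
});
sdk.start();

// Attaches the Trodo OTLP exporter to the provider registered above;
// both destinations now receive every span.
registerOTel({
  siteId: process.env.TRODO_SITE_ID!,
  mode: 'otlp',
  serviceName: 'support-bot',
});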

When NOT to use the OTLP path

  • MCP servers — use trackMcp / track_mcp instead. MCP servers proxy tool calls but never see the user’s prompt or the LLM’s final answer, so a “run” wrapping an MCP session has nothing meaningful in input / output. The runless-span path is the right primitive there. See Track MCP.
  • Greenfield projects with no OTel yet — the SDK’s default init() (mode 'trodo') is simpler. The OTLP path’s value is leveraging an OTel pipeline you already have.

Troubleshooting

Traces not showing up?
  1. Confirm experimental_telemetry: { isEnabled: true } is set on every Vercel AI call.
  2. Confirm OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_EXPORTER_OTLP_HEADERS are set in the runtime environment (not just .env). On Vercel, set them in the project’s Environment Variables UI.
  3. Check the OTel exporter logs — most exporters log to stderr on send failure. A 401 means the Bearer token isn’t a valid site_id.
  4. Send one request, then wait ~5 seconds (OTel batches spans before exporting). Refresh the dashboard.
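For step 3, OTel's diagnostic logger makes export failures loud. Enable it before the SDK starts:

// Surfaces exporter errors (HTTP status codes, network failures) that are otherwise quiet
import { diag, DiagConsoleLogger, DiagLogLevel } from '@opentelemetry/api';

diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);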
distinct_id showing as null? You're likely not passing metadata.userId (or the equivalent trodo.distinct_id resource attribute). Without it, the run has no user attribution.

Traces split across multiple runs? Multiple HTTP requests inside one logical conversation each get their own traceId, and therefore their own runs. Use metadata.sessionId (→ conversation_id) to link them in the dashboard.

Seeing "mode: 'otlp' requires …"? That's the friendly install hint thrown when the OTel peer dependencies are missing. See the install commands above for the package list.