Trodo accepts standard OTLP traces directly. If your app already emits OpenTelemetry (@vercel/otel, @opentelemetry/sdk-node, dd-trace forwarders, FastAPI auto-instrumentation, …), you can ship traces to Trodo with two env vars and zero Trodo SDK code.
The Bearer token is your site_id — same value you’d pass to trodo.init({ siteId }). Get it from Integration Manager.
Pick your path
Path A — Zero-SDK
Next.js + Vercel AI SDK with @vercel/otel. Set two env vars, get full traces. No Trodo SDK install required.
Path B — Coexistence
Already shipping to Datadog/Jaeger/Honeycomb? Add Trodo as an additional destination via registerOTel({ mode: 'otlp' }).
Endpoint
application/json and application/x-protobuf bodies are accepted. The legacy alias /api/sdk/otel/v1/traces still works for anything already pointed at it.
Auth — pick either:
- Authorization: Bearer <site_id> (preferred — OTel-canonical)
- X-Trodo-Site-Id: <site_id> (legacy)
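As a quick smoke test, you can POST an OTLP/JSON payload by hand. A sketch only: the host below is a placeholder (use your real Trodo endpoint), and otlp-traces.json stands for any OTLP/JSON trace export.

```bash
# Sketch: placeholder host, legacy alias path shown.
# otlp-traces.json is any OTLP/JSON ExportTraceServiceRequest payload.
curl -X POST "https://<your-trodo-host>/api/sdk/otel/v1/traces" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <site_id>" \
  --data @otlp-traces.json
```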
Path A — env-var only
For Next.js + Vercel AI SDK projects with @vercel/otel. No Trodo SDK install required.
1. Install @vercel/otel
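For example:

```bash
# Some @vercel/otel versions also ask for @opentelemetry peer packages;
# install whatever your package manager reports as missing.
npm install @vercel/otel @opentelemetry/api
```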
2. Wire instrumentation.ts
Create it at the project root (or src/). Next.js picks it up automatically.
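A minimal sketch (the serviceName value is an arbitrary example):

```ts
// instrumentation.ts (project root or src/)
import { registerOTel } from '@vercel/otel';

export function register() {
  // serviceName is an arbitrary example value
  registerOTel({ serviceName: 'my-next-app' });
}
```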
3. Set the OTLP env vars
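A sketch of the two variables. The host is a placeholder; standard OTLP exporters append /v1/traces to the base endpoint, so the base shown here is inferred from the legacy alias above and should be checked against your Integration Manager value.

```bash
# Placeholder host: copy the real endpoint for your site.
# OTLP exporters append /v1/traces to this base URL.
OTEL_EXPORTER_OTLP_ENDPOINT="https://<your-trodo-host>/api/sdk/otel"
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <site_id>"
```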
4. Pass metadata on every AI call
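A sketch with generateText; the model choice and metadata values are arbitrary examples, and the field mapping follows the attribute table below.

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'Summarize our refund policy.',
  experimental_telemetry: {
    isEnabled: true, // required; no spans are emitted without it
    metadata: {
      userId: 'user_123',       // -> run.distinct_id
      sessionId: 'conv_456',    // -> run.conversation_id
      agentName: 'support-bot', // -> run.agent_name
      plan: 'pro',              // any extra key -> run.metadata.plan
    },
  },
});
```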
experimental_telemetry.isEnabled: true is required on every Vercel AI call — without it, no spans are emitted.
Attribute mapping
| Vercel AI attribute | Trodo field |
|---|---|
| ai.telemetry.metadata.userId | run.distinct_id |
| ai.telemetry.metadata.sessionId | run.conversation_id |
| ai.telemetry.metadata.agentName | run.agent_name |
| ai.telemetry.metadata.<custom> | run.metadata.<custom> |
| ai.usage.promptTokens / completionTokens | span.input_tokens / output_tokens |
| ai.model.id / ai.model.provider | span.model / span.provider |
| ai.prompt / ai.response.text | span.input / span.output |
| Span name ai.toolCall | span.kind = 'tool' with tool_name from ai.toolCall.name |
| Span name ai.generateText / ai.streamText / ai.embed / … | span.kind = 'llm' |
trodo.* keys also work:
| Resource attribute | Trodo field |
|---|---|
| trodo.distinct_id | run.distinct_id |
| trodo.conversation_id | run.conversation_id |
| trodo.agent_name | run.agent_name |
| trodo.metadata.<custom> | run.metadata.<custom> |
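For a non-Vercel OTel SDK, a sketch of setting these as resource attributes. The API shape varies by @opentelemetry/resources version (newer releases use resourceFromAttributes instead of the Resource class), and the values are examples:

```ts
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';
import { Resource } from '@opentelemetry/resources';

const provider = new NodeTracerProvider({
  resource: new Resource({
    'trodo.distinct_id': 'user_123',
    'trodo.conversation_id': 'conv_456',
    'trodo.agent_name': 'support-bot',
    'trodo.metadata.plan': 'pro', // lands in run.metadata.plan
  }),
});
provider.register();
```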
Path B — existing OTel pipeline
Already shipping traces to Datadog, Jaeger, Honeycomb, or any other OTel destination? Add Trodo as a side-by-side destination — no rip-and-replace. Requires trodo-node ≥ 2.4.0 (or trodo-python ≥ 2.4.0).
Initialize the SDK in OTLP mode. Node.js is shown below; the Python setup with trodo-python is analogous.
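A minimal Node.js sketch. The import shape is an assumption — check the trodo-node README for the exact form; siteId and mode are the options named in this page.

```ts
// Assumed import shape for trodo-node >= 2.4.0.
import * as trodo from 'trodo-node';

trodo.init({
  siteId: process.env.TRODO_SITE_ID!,
  mode: 'otlp', // route wrapAgent/withSpan/... through the OTel tracer
});
```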
With mode: 'otlp':
- wrapAgent / withSpan / tool / trace / llm / retrieval route through the OTel tracer, so auto-instrumented children (Anthropic, OpenAI, LangChain, Vercel AI) join the same OTel trace via context propagation. The backend OTLP controller groups the whole tree into one Trodo run.
- trackMcp, feedback, and startRun/endRun/joinRun continue to use their own HTTP API endpoints — they have nothing to gain from OTel routing.
If you want to keep wrapAgent going through Trodo’s HTTP API (default behavior, identical to 2.3.x), don’t change anything — mode: 'trodo' is the default, and existing init() callers see zero behavior change.
What “one prompt = one run” looks like
OTel auto-creates a traceId per inbound request. Everything inside that request — generateText, the tool calls it spawns, sub-generateText calls inside tools, retrievals — shares the same traceId. Trodo groups by traceId, picks the parentless span as the run, and links the rest as children via parent_span_id.
The parent_span_id chain is preserved exactly.
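As an illustration, a sketch of one such tree (span names follow the Vercel AI conventions above; the IDs and the tool name are made up):

```
traceId 4f2a…                      → one Trodo run
└─ POST /api/chat (parentless)     → the run
   ├─ ai.generateText              → llm span
   │  └─ ai.toolCall lookupOrder   → tool span
   │     └─ ai.generateText        → llm span (child via parent_span_id)
   └─ retrieval                    → retrieval span
```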
Multi-turn chats (each user message is its own request) get their own traceId and become separate runs, linked by conversation_id — same as wrapAgent semantics.
Custom metadata
Anything in ai.telemetry.metadata.* (Vercel AI) or trodo.metadata.* (any OTel SDK) beyond the three well-known keys (userId, sessionId, agentName) flows into run.metadata JSONB — full parity with wrapAgent({ metadata }).
Coexistence with Datadog / Jaeger / Honeycomb
mode: 'otlp' attaches the Trodo OTLP exporter to the existing TracerProvider rather than replacing it. Both your current backend AND Trodo receive every span. No data loss in your existing pipeline; no rip-and-replace.
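Conceptually this is equivalent to attaching one more span processor to the provider you already run. A hand-rolled sketch under that assumption — the endpoint host is a placeholder, and ConsoleSpanExporter stands in for your existing backend's exporter:

```ts
import {
  NodeTracerProvider,
  BatchSpanProcessor,
  ConsoleSpanExporter,
} from '@opentelemetry/sdk-trace-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

const provider = new NodeTracerProvider({
  spanProcessors: [
    // stand-in for your existing Datadog/Jaeger/Honeycomb exporter
    new BatchSpanProcessor(new ConsoleSpanExporter()),
    // Trodo as an additional destination (placeholder host)
    new BatchSpanProcessor(
      new OTLPTraceExporter({
        url: 'https://<your-trodo-host>/api/sdk/otel/v1/traces',
        headers: { Authorization: `Bearer ${process.env.TRODO_SITE_ID}` },
      })
    ),
  ],
});
provider.register();
```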
When NOT to use the OTLP path
- MCP servers — use trackMcp / track_mcp instead. MCP servers proxy tool calls but never see the user’s prompt or the LLM’s final answer, so a “run” wrapping an MCP session has nothing meaningful in input/output. The runless-span path is the right primitive there. See Track MCP.
- Greenfield projects with no OTel yet — the SDK’s default init() (mode 'trodo') is simpler. The OTLP path’s value is leveraging an OTel pipeline you already have.
Troubleshooting
Traces not showing up?
- Confirm experimental_telemetry: { isEnabled: true } is set on every Vercel AI call.
- Confirm OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_EXPORTER_OTLP_HEADERS are set in the runtime environment (not just .env). On Vercel, set them in the project’s Environment Variables UI.
- Check the OTel exporter logs — most exporters log to stderr on send failure. A 401 means the Bearer token isn’t a valid site_id.
- Hit one request, then wait ~5 seconds (OTel batches spans). Refresh the dashboard.
distinct_id showing as null?
You’re likely not passing metadata.userId (or the equivalent trodo.distinct_id resource attribute). Without that the run has no user attribution.
Traces split across multiple runs?
Multiple HTTP requests inside one logical conversation each get their own traceId → their own runs. Use metadata.sessionId (→ conversation_id) to link them in the dashboard.
Seeing “mode: 'otlp' requires …”?
This is the friendly install hint emitted when peer dependencies are missing. See the install commands above for the package list.