On Next.js with `@vercel/otel`? You can skip the Trodo SDK install entirely and use the env-var-only OTLP path; see OpenTelemetry / OTLP. The Vercel AI metadata mapping (`userId` / `sessionId` / `agentName` plus custom keys) lands in the same dashboard fields as the SDK path. The SDK install below remains the right choice when you want `wrapAgent` boundaries, feedback, or `trackMcp`.

## What auto-instruments

The Vercel AI SDK does not use module-level monkey-patching; it emits OTel spans only when you explicitly enable telemetry per call. Every call inside a `wrapAgent` with telemetry enabled becomes a span.
| Call | Span kind | Auto-extracted |
|---|---|---|
| `generateText` | llm | model, provider, tokens, prompt, completion |
| `streamText` | llm | Same, accumulated |
| `generateObject` / `streamObject` | llm | model, tokens, schema, parsed object |
| `tool()` + tool-call loop | tool | tool name, input, output |
| `embed` / `embedMany` | llm | model, tokens |
This works with any first-party provider package (`openai`, `anthropic`, `google`, `mistral`, …).
## Install

The `ai` SDK includes its own OTel integration, so no separate instrumentation package is needed.
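For example (the package name `trodo` and the OpenAI provider package are assumptions here; check your dashboard's setup page for the exact install command):

```bash
npm install ai @ai-sdk/openai trodo
```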
## Minimum example

The `experimental_telemetry: { isEnabled: true }` flag is required; without it, no spans are emitted. The Trodo SDK already has an OTel tracer registered, so the AI SDK finds it and pipes spans through automatically.
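A minimal sketch: the exact `wrapAgent` signature and the `trodo` import path are assumptions, and the model is a placeholder.

```ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { wrapAgent } from "trodo"; // import path and signature assumed

// Every AI SDK call awaited inside the wrapped function is traced as one run.
const reply = await wrapAgent("support-agent", async () => {
  const { text } = await generateText({
    model: openai("gpt-4o-mini"),
    prompt: "Summarize the customer's last message.",
    // Without this flag the call still works, but emits no span.
    experimental_telemetry: { isEnabled: true },
  });
  return text;
});
```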
## Rich metadata

- `functionId` → span name
- `metadata` → merged into the span's attributes, searchable in the dashboard
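For example (besides `userId` / `sessionId` / `agentName`, the metadata keys below are made up):

```ts
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const { text } = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "Draft a follow-up email.",
  experimental_telemetry: {
    isEnabled: true,
    functionId: "draft-followup", // becomes the span name
    metadata: {
      userId: "u_123",            // mapped to dashboard fields
      sessionId: "s_456",
      agentName: "support-agent",
      plan: "enterprise",         // custom key, searchable
    },
  },
});
```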
## Tool calls

Every `tool({ execute })` becomes a tool span with `tool_name` = the key you used in the `tools` object. The waterfall shows the alternating pattern, with the LLM-returned `toolCalls` flowing into your `execute` handler.
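A sketch using AI SDK v4 option names (`parameters`, `maxSteps`; newer majors rename these), with an illustrative tool:

```ts
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const { text } = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "What's the weather in Berlin right now?",
  experimental_telemetry: { isEnabled: true },
  tools: {
    // Span: kind='tool', tool_name='getWeather' (the object key).
    getWeather: tool({
      description: "Look up current weather for a city",
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ city, tempC: 21 }),
    }),
  },
  maxSteps: 3, // let the model call the tool, then answer with the result
});
```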
## Streaming

With `streamText`, consume the stream and await the final `text` / `usage` promise; otherwise the span closes before tokens are recorded.
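For example:

```ts
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const result = streamText({
  model: openai("gpt-4o-mini"),
  prompt: "Write a haiku about tracing.",
  experimental_telemetry: { isEnabled: true },
});

// Consume the stream as usual.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

// Await the accumulated promise so token counts land on the span.
const usage = await result.usage;
```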
## Auto vs manual cheat-table

| Operation | Auto? | Notes |
|---|---|---|
| `generateText` | yes | Requires `experimental_telemetry: { isEnabled: true }` |
| `streamText` | yes | Same; await final `text` for token extraction |
| `generateObject` / `streamObject` | yes | Parsed object goes on `span.output` |
| `tool({ execute })` | yes | `kind='tool'`, name from object key |
| `embed` / `embedMany` | yes | — |
| Custom provider adapter | yes | As long as it returns standard usage |
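A structured-output sketch (the schema and prompt are illustrative):

```ts
import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const { object } = await generateObject({
  model: openai("gpt-4o-mini"),
  schema: z.object({
    sentiment: z.enum(["positive", "neutral", "negative"]),
    summary: z.string(),
  }),
  prompt: "Classify: 'My invoice is wrong again.'",
  experimental_telemetry: { isEnabled: true },
});
// The parsed `object` is also recorded as the span's output.
```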
## Gotchas

- If you forget `experimental_telemetry`, everything still works, except the span is just never emitted. Add a lint rule or wrap the call in a helper (see the sketch after this list).
- The AI SDK records `prompt` and `completion` by default. If these are sensitive, disable per call with `experimental_telemetry: { isEnabled: true, recordInputs: false, recordOutputs: false }`.
- Tool name collisions across multiple `generateText` calls in one run are fine; each produces its own span.
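A hypothetical helper along those lines (`generateTextTraced` is not part of either SDK):

```ts
import { generateText } from "ai";

type GenerateTextParams = Parameters<typeof generateText>[0];

// Defaults telemetry to on (and prompt/completion recording to off)
// so the flag can't be forgotten at call sites.
export function generateTextTraced(params: GenerateTextParams) {
  return generateText({
    ...params,
    experimental_telemetry: {
      isEnabled: true,
      recordInputs: false,
      recordOutputs: false,
      ...params.experimental_telemetry, // per-call overrides still win
    },
  });
}
```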