Documentation Index

Fetch the complete documentation index at: https://docs.trodo.ai/docs/llms.txt

Use this file to discover all available pages before exploring further.

On NextJS with @vercel/otel? You can skip the Trodo SDK install entirely and use the env-var-only OTLP path — see OpenTelemetry / OTLP. The Vercel AI metadata mapping (userId / sessionId / agentName plus custom keys) lands in the same dashboard fields as the SDK path. The SDK install below remains the right choice when you want wrapAgent boundaries, feedback, or trackMcp.
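For the env-var-only path, the standard OpenTelemetry SDK environment variables apply. A sketch with placeholder values; the variable names are part of the OTel spec, but the real endpoint and header values live on the OpenTelemetry / OTLP page:

```shell
# Standard OTel exporter env vars (names defined by the OpenTelemetry spec).
# Values below are placeholders; take the real ones from the
# OpenTelemetry / OTLP page.
export OTEL_EXPORTER_OTLP_ENDPOINT="<endpoint from the OpenTelemetry / OTLP page>"
export OTEL_EXPORTER_OTLP_HEADERS="<auth header from the OpenTelemetry / OTLP page>"
```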

What auto-instruments

The Vercel AI SDK does not use module-level monkey-patching; it emits OTel spans only when you explicitly enable telemetry per-call. Every call inside a wrapAgent with telemetry enabled becomes a span.
Call | Span kind | Auto-extracted
generateText | llm | model, provider, tokens, prompt, completion
streamText | llm | same, accumulated
generateObject / streamObject | llm | model, tokens, schema, parsed object
tool() + tool-call loop | tool | tool name, input, output
embed / embedMany | llm | model, tokens
The provider field is whichever provider adapter you use (openai, anthropic, google, mistral, …).

Install

npm install ai @ai-sdk/openai
No OTel package needed — the ai SDK includes its own OTel integration.

Minimum example

import trodo from 'trodo-node';
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

trodo.init({ siteId: process.env.TRODO_SITE_ID });

await trodo.wrapAgent('ai-sdk-bot', async (run) => {
  const { text, toolCalls } = await generateText({
    model: openai('gpt-4o-mini'),
    prompt: 'What is the weather in SF?',
    tools: {
      get_weather: tool({
        description: 'Current weather in a city',
        parameters: z.object({ city: z.string() }),
        execute: async ({ city }) => `Sunny in ${city}, 72F`,
      }),
    },
    experimental_telemetry: { isEnabled: true },  // required
  });
  run.setOutput(text);
});
The experimental_telemetry: { isEnabled: true } flag is required — without it, no spans are emitted. The Trodo SDK already has an OTel tracer registered, so the AI SDK finds it and pipes spans through automatically.

Rich metadata

await generateText({
  model: openai('gpt-4o-mini'),
  prompt,
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'summarize-support-ticket',
    metadata: { tenant: 'acme', plan: 'pro' },
  },
});
  • functionId → span name
  • metadata → merged into the span’s attributes, searchable in the dashboard.
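As a mental model, the AI SDK flattens these settings into span attributes roughly as follows. This is an illustrative sketch, not Trodo or ai SDK code; the `ai.telemetry.*` attribute names follow the Vercel AI SDK's convention, so verify the exact keys against your exporter:

```typescript
// Illustrative only: how functionId and metadata end up as span attributes.
// The `ai.telemetry.metadata.<key>` naming follows the Vercel AI SDK
// convention; check your span exporter for the exact keys.
function toSpanAttributes(
  functionId: string,
  metadata: Record<string, string | number | boolean>,
): Record<string, string | number | boolean> {
  const attrs: Record<string, string | number | boolean> = {
    'ai.telemetry.functionId': functionId,
  };
  for (const [key, value] of Object.entries(metadata)) {
    attrs[`ai.telemetry.metadata.${key}`] = value; // each key is searchable
  }
  return attrs;
}
```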

Tool calls

Every tool({ execute }) becomes a tool span with tool_name set to the key you used in the tools object. The trace waterfall shows the alternating llm → tool pattern, with the model's returned toolCalls flowing into your execute handler.

Streaming

const result = await streamText({
  model: openai('gpt-4o-mini'),
  prompt,
  experimental_telemetry: { isEnabled: true },
});

for await (const delta of result.textStream) process.stdout.write(delta);
await result.text; // await this so telemetry flushes
Make sure to await the final text / usage promise — otherwise the span closes before tokens are recorded.
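To see why, here is a minimal simulation of the accumulate-then-record pattern (fakeStream and drain are our illustrative names, not ai SDK APIs): the final text, and therefore the usage, only exists once the stream is fully drained.

```typescript
// Simulated token stream standing in for result.textStream.
async function* fakeStream(): AsyncGenerator<string> {
  yield 'Sunny ';
  yield 'in SF';
}

// The span must stay open until the stream is drained; only then can the
// final text (and token usage) be recorded on it.
async function drain(stream: AsyncIterable<string>): Promise<string> {
  let text = '';
  for await (const delta of stream) text += delta; // accumulate deltas
  return text; // complete output, now safe to record
}
```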

Auto vs manual cheat-table

Operation | Auto? | Notes
generateText | yes | Requires experimental_telemetry: { isEnabled: true }
streamText | yes | Same; await final text for token extraction
generateObject / streamObject | yes | Parsed object goes on span.output
tool({ execute }) | yes | kind='tool', name from object key
embed / embedMany | yes |
Custom provider adapter | yes | As long as it returns standard usage

Gotchas

  • If you forget experimental_telemetry, your code still runs normally; the span is simply never emitted. Add a lint rule or wrap calls in a helper.
  • The AI SDK records prompt and completion by default — if these are sensitive, disable per-call with experimental_telemetry: { isEnabled: true, recordInputs: false, recordOutputs: false }.
  • Tool name collisions across multiple generateText calls in one run are fine — each produces its own span.