
What auto-instruments

Install @opentelemetry/instrumentation-openai (Node) or opentelemetry-instrumentation-openai (Python) alongside the openai SDK, and every call becomes a span with model, provider, tokens, temperature, prompt, and completion.
| Call | Span kind | Auto-extracted |
| --- | --- | --- |
| `client.chat.completions.create` | `llm` | model, input/output tokens, temperature, messages, completion |
| `client.completions.create` (legacy) | `llm` | model, tokens, prompt, completion |
| `client.responses.create` | `llm` | model, tokens, input, output text |
| `client.embeddings.create` | `llm` | model, input tokens, input text |
| Streaming (`stream: true`) | `llm` | Everything above; tokens from the final chunk's `usage` |
The provider field is always openai. Against Azure OpenAI and OpenAI-compatible endpoints (Together, Groq, Llama API), the same instrumentation fires and still fills provider='openai'; set metadata.provider on the run if you want a distinct label, as in the sketch below.
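
For example, a minimal sketch of labeling a run that actually talks to Groq. It assumes the `run.setMetadata` API shown in the gotchas below; the `baseURL` is Groq's OpenAI-compatible endpoint and the model name is illustrative:

```ts
import trodo from 'trodo-node';
import OpenAI from 'openai';

trodo.init({ siteId: process.env.TRODO_SITE_ID });

// OpenAI-compatible endpoint: the auto-instrumentation still fires,
// but the span's provider field will read 'openai'.
const groq = new OpenAI({
  baseURL: 'https://api.groq.com/openai/v1',
  apiKey: process.env.GROQ_API_KEY,
});

await trodo.wrapAgent('faq-bot', async (run) => {
  // Label the run so dashboards can distinguish the real backend.
  run.setMetadata({ provider: 'groq' });
  const r = await groq.chat.completions.create({
    model: 'llama-3.1-8b-instant', // illustrative model name
    messages: [{ role: 'user', content: 'ping' }],
  });
  run.setOutput(r.choices[0].message.content);
});
```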

Install

```sh
# Node
npm install openai @opentelemetry/instrumentation-openai

# Python
pip install openai opentelemetry-instrumentation-openai
```

Minimum example

```ts
import trodo from 'trodo-node';
import OpenAI from 'openai';

trodo.init({ siteId: process.env.TRODO_SITE_ID });
const openai = new OpenAI();

await trodo.wrapAgent('faq-bot', async (run) => {
  run.setInput({ q: 'What is HNSW?' });
  const r = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'What is HNSW?' }],
  });
  run.setOutput(r.choices[0].message.content);
  // No additional code. The call above produced one kind='llm' span.
});
```

Tool use

The OpenAI tool-calling API produces one llm span per completion. Executing the tool is your own code, so wrap it as a tool span:

```ts
const billing = await trodo.withSpan(
  'fetch_billing',
  (s) => { s.setTool('fetch_billing'); return fetchBilling(args); },
  { kind: 'tool', input: args },
);
```

This gives the classic tool-use waterfall: llm → tool → llm → tool → llm (final). A full round trip is sketched below.
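
As a sketch of that waterfall end to end: both completions are auto-instrumented, and only the tool execution needs a manual span. `fetchBilling` and the tool schema are hypothetical stand-ins for your own code, not part of the SDK:

```ts
import trodo from 'trodo-node';
import OpenAI from 'openai';

trodo.init({ siteId: process.env.TRODO_SITE_ID });
const openai = new OpenAI();

// Hypothetical tool implementation -- stands in for your own code.
async function fetchBilling(args: { customerId: string }) {
  return { plan: 'pro', balance: 42 };
}

await trodo.wrapAgent('billing-bot', async (run) => {
  const messages: OpenAI.Chat.ChatCompletionMessageParam[] = [
    { role: 'user', content: 'What plan is customer 123 on?' },
  ];

  // First completion: auto-instrumented llm span; may request a tool call.
  const first = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages,
    tools: [{
      type: 'function',
      function: {
        name: 'fetch_billing',
        parameters: {
          type: 'object',
          properties: { customerId: { type: 'string' } },
          required: ['customerId'],
        },
      },
    }],
  });

  const call = first.choices[0].message.tool_calls?.[0];
  if (call?.type === 'function') {
    const args = JSON.parse(call.function.arguments);
    // Manual tool span around your own code.
    const billing = await trodo.withSpan(
      'fetch_billing',
      (s) => { s.setTool('fetch_billing'); return fetchBilling(args); },
      { kind: 'tool', input: args },
    );
    messages.push(first.choices[0].message);
    messages.push({
      role: 'tool',
      tool_call_id: call.id,
      content: JSON.stringify(billing),
    });
    // Second completion: another auto-instrumented llm span.
    const second = await openai.chat.completions.create({
      model: 'gpt-4o-mini',
      messages,
    });
    run.setOutput(second.choices[0].message.content);
  }
});
```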

Auto vs. manual cheat sheet

| Operation | Auto? | If manual, use |
| --- | --- | --- |
| `chat.completions.create` | yes | |
| `responses.create` | yes | |
| `embeddings.create` | yes | |
| `images.generate` | no | `withSpan({ kind: 'generic' })` + `setAttribute` for size/quality |
| `audio.transcriptions.create` | no | `withSpan({ kind: 'tool' })` with `setInput(file.name)` |
| `moderations.create` | no | `withSpan({ kind: 'tool' })` |
| Fine-tuning jobs | no | usually out of scope for runtime tracing; record via `track()` if needed |
| Assistants / threads (beta) | partial | thread run spans appear, but tool-call semantics differ; wrap your tool impls manually |
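
As an illustration of the manual rows, a sketch of wrapping `images.generate`. The model name is illustrative, and the exact `setAttribute` signature is assumed from the table above:

```ts
// Manual span for a call the instrumentation does not cover.
const img = await trodo.withSpan(
  'generate_image',
  async (s) => {
    const res = await openai.images.generate({
      model: 'gpt-image-1', // illustrative model name
      prompt: 'a hand-drawn map of a harbor town',
      size: '1024x1024',
    });
    // Record the attributes the auto-instrumentation would otherwise miss.
    s.setAttribute('size', '1024x1024');
    return res;
  },
  { kind: 'generic', input: 'a hand-drawn map of a harbor town' },
);
```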

Streaming

The instrumentation accumulates delta chunks and records token counts from the final chunk's usage field (OpenAI added this in 2024; make sure you're on openai-node ≥ 4.52 / openai-python ≥ 1.35). Older clients produce a span with no token counts. On current clients, pass stream_options: { include_usage: true } so the API emits that final usage chunk, as in the sketch below.
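
A minimal streaming sketch that forces the usage chunk. With include_usage, the last chunk arrives with an empty choices array, which the optional chaining below tolerates:

```ts
const stream = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Explain HNSW in one line.' }],
  stream: true,
  // Ask the API to send a final chunk carrying token usage, so the
  // instrumentation can record input/output tokens on the span.
  stream_options: { include_usage: true },
});

for await (const chunk of stream) {
  // The final usage-only chunk has no choices; `?.` skips it safely.
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
```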

Gotchas

  • The instrumentation patches the module prototype. If you import OpenAI before calling trodo.init, the patch can miss it: init first, then import (see the sketch after this list).
  • provider is openai even against Azure or OpenAI-compatible proxies. Use run.setMetadata({ actual_provider: 'azure' }) or record the base_url as a span attribute if you need to distinguish them.
  • Embeddings don't carry a temperature; the field simply doesn't render on the span.
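
A minimal sketch of the init-before-import ordering, assuming an ESM entry point with top-level await. The dynamic import guarantees the openai module loads after the patch is installed:

```ts
import trodo from 'trodo-node';

// Init first so the instrumentation patch is in place...
trodo.init({ siteId: process.env.TRODO_SITE_ID });

// ...then load the SDK, so the patched module is the one you use.
const { default: OpenAI } = await import('openai');
const openai = new OpenAI();
```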