
Documentation Index

Fetch the complete documentation index at: https://docs.trodo.ai/docs/llms.txt

Use this file to discover all available pages before exploring further.

What auto-instruments

Install @opentelemetry/instrumentation-http and/or @opentelemetry/instrumentation-fetch. Every outbound HTTP request inside wrapAgent becomes a generic span with method, URL, status, and duration.
| Layer | Span kind | Captured |
| --- | --- | --- |
| Node http / https | generic | method, host, path, status, duration |
| Node global fetch (18+) | generic | Same |
| Python requests / httpx | generic | Same (via opentelemetry-instrumentation-requests / -httpx) |
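Concretely, a generic span produced by these instrumentations carries attributes along these lines. The names follow OpenTelemetry's stable HTTP semantic conventions; the exact set varies by instrumentation version, and the values here are illustrative only:

```javascript
// Illustrative only: attributes a generic outbound HTTP span might carry,
// using OpenTelemetry HTTP semantic convention names. Values are made up.
const genericHttpSpanAttributes = {
  'http.request.method': 'POST',
  'server.address': 'ollama.internal',
  'url.path': '/api/chat',
  'http.response.status_code': 200,
};

// Duration is not an attribute; it is derived from the span's
// start and end timestamps.
```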
Outbound HTTP is noisy — these instrumentations cover every outbound call, not just LLM traffic. Most users disable them and rely on framework-specific instrumentations (which already wrap the LLM HTTP layer):
trodo.init({
  siteId,
  disableInstrumentations: ['http', 'fetch'],
});

When you actually want HTTP spans

  • Custom LLM provider with no upstream instrumentation (local Ollama, vLLM, a homegrown inference server).
  • Tools that hit external APIs you want visible (weather, billing, CRM).
  • Cross-service propagation — the outbound request that carries X-Trodo-Run-Id shows up as a span on the caller side automatically.
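If any of these apply, you can keep the HTTP layer on and disable only what you don't need. A sketch, reusing the disableInstrumentations option shown above:

```javascript
trodo.init({
  siteId,
  // Keep 'http' spans so the custom inference server stays visible,
  // but drop the noisier global-fetch layer.
  disableInstrumentations: ['fetch'],
});
```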

Example — manual LLM span over raw fetch

Auto HTTP spans record URL + status but no tokens. To get tokens/cost, call trodo.trackLlmCall after the request:
await trodo.wrapAgent('raw-llm', async () => {
  // stream: false makes Ollama return a single JSON body instead of a stream,
  // which is what the r.json() call below expects.
  const body = { model: 'llama3.1:70b', stream: false, messages: [...] };
  const resp = await fetch('http://ollama.internal/api/chat', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(body),
  }).then((r) => r.json());

  await trodo.trackLlmCall({
    model: resp.model,
    provider: 'ollama',
    inputTokens: resp.prompt_eval_count,
    outputTokens: resp.eval_count,
    prompt: body,
    completion: resp,
  });

  return resp.message.content;
});
You now have two spans for the same request: the generic HTTP span from the auto-instrumentation, and the llm span from trackLlmCall. Disable HTTP auto-instrumentation if you’d rather keep the waterfall clean.

Example — tool that hits an external API

const fetchBilling = trodo.tool('fetch_billing', async (userId) => {
  const r = await fetch(`${BILLING_URL}/users/${userId}`, {
    headers: trodo.propagationHeaders(), // carry run context downstream
  });
  return r.json();
});
With HTTP auto-instrumentation enabled, you’ll see both the tool span (wrapping) and the generic HTTP span (the underlying call). Without auto-instrumentation, you’ll just see the tool span, which is usually what you want.

Cross-service propagation

propagationHeaders() returns X-Trodo-Run-Id + X-Trodo-Parent-Span-Id for the current context. Attach them to outbound requests so the downstream service can joinRun under your run instead of opening its own:
await fetch(downstreamUrl, {
  method: 'POST',
  headers: { 'content-type': 'application/json', ...trodo.propagationHeaders() },
  body: JSON.stringify(payload),
});
See cross-service for the full recipe.
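The callee side isn't shown here, so the following is a minimal sketch only: a framework-agnostic helper that pulls the two propagation headers off an incoming request. The header names match what propagationHeaders() emits; the shape of the returned context object, and the joinRun call shape in the comment, are assumptions.

```javascript
// Sketch: extract Trodo propagation headers from an incoming request.
// Node lowercases incoming header names, so lowercase lookups are correct.
function extractRunContext(headers) {
  const runId = headers['x-trodo-run-id'];
  const parentSpanId = headers['x-trodo-parent-span-id'];
  if (!runId) return null; // no upstream run: this service opens its own
  return { runId, parentSpanId };
}

// Express-style usage (joinRun signature is an assumption):
// app.use((req, res, next) => {
//   const ctx = extractRunContext(req.headers);
//   if (ctx) trodo.joinRun(ctx.runId, { parentSpanId: ctx.parentSpanId });
//   next();
// });
```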

Auto vs manual cheat-table

| Scenario | Auto span? | Gets tokens? | Recommended |
| --- | --- | --- | --- |
| OpenAI / Anthropic / etc. (auto-instrumented) | yes | yes | Just install the framework’s OTel package |
| Ollama / vLLM / custom inference | yes (URL only) | no | trackLlmCall after the request |
| External tool API (weather, CRM) | yes | n/a | Wrap the caller in trodo.tool for a named span; disable HTTP noise |
| Cross-service RPC | yes | n/a | propagationHeaders() on caller, middleware on callee |

Gotchas

  • HTTP instrumentation captures authorization headers by default — configure the underlying OTel package to redact them if you ship spans off-box.
  • fetch instrumentation in Node 18+ requires a recent @opentelemetry/instrumentation-fetch that supports the undici-backed global fetch; older versions only patch browser fetch.
  • Disabling HTTP instrumentation does not disable cross-service propagation — propagationHeaders() works independently because it reads context directly.
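One way to handle the first gotcha, sketched without pulling in the OTel packages: a pure helper that masks sensitive header values, which you could call from the instrumentation's requestHook (@opentelemetry/instrumentation-http does expose a requestHook config option; the sensitive-name list and placeholder below are choices, not Trodo or OTel defaults).

```javascript
// Sketch: mask sensitive header values before they land on a span.
const SENSITIVE = new Set([
  'authorization',
  'proxy-authorization',
  'cookie',
  'x-api-key',
]);

function redactHeaders(headers) {
  const out = {};
  for (const [name, value] of Object.entries(headers)) {
    out[name] = SENSITIVE.has(name.toLowerCase()) ? '[REDACTED]' : value;
  }
  return out;
}

// Possible wiring (comment only; adapt to how you construct the instrumentation):
// new HttpInstrumentation({
//   requestHook: (span, request) => {
//     const safe = redactHeaders(request.getHeaders());
//     for (const [k, v] of Object.entries(safe)) {
//       span.setAttribute(`http.request.header.${k}`, v);
//     }
//   },
// });
```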