What gets auto-instrumented

Install @opentelemetry/instrumentation-anthropic (Node) or opentelemetry-instrumentation-anthropic (Python) and every Claude call inside wrapAgent becomes a span.
| Call | Span kind | Auto-extracted |
| --- | --- | --- |
| client.messages.create | llm | model, input/output tokens, stop_reason, messages, completion |
| client.messages.stream | llm | Same, accumulated across deltas |
| client.messages.count_tokens | llm | model, input tokens |
| Tool use (server-side) | llm | Tool definitions in prompt; actual tool runs are your code |
The provider field on these spans is anthropic.

Install

```bash
# Node
npm install @anthropic-ai/sdk @opentelemetry/instrumentation-anthropic

# Python
pip install anthropic opentelemetry-instrumentation-anthropic
```

Minimal example

```ts
import trodo from 'trodo-node';
import Anthropic from '@anthropic-ai/sdk';

trodo.init({ siteId: process.env.TRODO_SITE_ID });
const anthropic = new Anthropic();

const text = 'Long document to summarize...'; // the input for this run

await trodo.wrapAgent('summarizer', async (run) => {
  run.setInput({ text });
  const r = await anthropic.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1024,
    messages: [{ role: 'user', content: `Summarize:\n${text}` }],
  });
  // Content blocks can be text or tool_use; take the first text block
  run.setOutput(r.content[0].type === 'text' ? r.content[0].text : '');
});
```

Tool-use loop

The classic Anthropic tool-use pattern: call, inspect stop_reason, run tools, append results, call again. Each messages.create is its own llm span. Wrap your tool executions so they appear as tool spans between the LLM spans:
```ts
// Assumes model, messages, tools, and runTool come from your own setup
let finalAnswer;
while (true) {
  const r = await anthropic.messages.create({ model, messages, tools, max_tokens: 1024 });
  if (r.stop_reason !== 'tool_use') { finalAnswer = r; break; }

  // One tool_use block per turn for brevity; production code should handle several
  const toolUse = r.content.find((c) => c.type === 'tool_use');
  const result = await trodo.withSpan(
    toolUse.name,
    (s) => { s.setTool(toolUse.name); return runTool(toolUse); },
    { kind: 'tool', input: toolUse.input },
  );

  // Feed the assistant turn and the tool result back into the conversation
  messages.push({ role: 'assistant', content: r.content });
  messages.push({
    role: 'user',
    content: [{ type: 'tool_result', tool_use_id: toolUse.id, content: JSON.stringify(result) }],
  });
}
```
The resulting waterfall makes the loop trivial to debug: alternating llm and tool spans, with input/output visible at every step.

Streaming

messages.stream accumulates input_tokens from message_start and output_tokens from message_delta events. The span closes when the stream iterator finishes. If you consume the stream and throw partway, the span is still recorded (as error) with tokens seen so far — useful for diagnosing timeouts.
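
No extra instrumentation code is needed; a minimal sketch of the streaming path, reusing the anthropic client from the example above (on('text') and finalMessage() are the SDK's own MessageStream helpers):

```ts
const stream = anthropic.messages.stream({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Summarize this report in three sentences.' }],
});

stream.on('text', (delta) => process.stdout.write(delta)); // render deltas as they arrive

// The llm span closes here, with tokens accumulated from message_start/message_delta
const final = await stream.finalMessage();
```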

Auto vs manual cheat-table

| Operation | Auto? | If manual, use |
| --- | --- | --- |
| messages.create | yes | |
| messages.stream | yes | |
| messages.count_tokens | yes | |
| messages.batches.create | no | withSpan({ kind: 'generic' }) on submit + a second span when you poll results (see sketch below) |
| Tool executions | no (your code) | withSpan({ kind: 'tool' }) with setTool(name) |
| Prompt caching (cache_control) | partial (tokens include cache_creation + cache_read) | Read span.attributes['gen_ai.usage.cache_read_input_tokens'] in the detail drawer |
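
A minimal sketch of the batches row, assuming the withSpan signature shown in the tool-use loop (the span names batch-submit and batch-poll are illustrative, not a trodo convention):

```ts
// Submit: batches.create is not auto-instrumented, so record it as a generic span
const batch = await trodo.withSpan(
  'batch-submit',
  () => anthropic.messages.batches.create({
    requests: [{
      custom_id: 'req-1',
      params: {
        model: 'claude-3-5-sonnet-20241022',
        max_tokens: 1024,
        messages: [{ role: 'user', content: 'Summarize document 1.' }],
      },
    }],
  }),
  { kind: 'generic' },
);

// Later: a second span when you come back to poll for results
const status = await trodo.withSpan(
  'batch-poll',
  () => anthropic.messages.batches.retrieve(batch.id),
  { kind: 'generic', input: { batchId: batch.id } },
);
```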

Gotchas

  • Import order matters: run trodo.init before @anthropic-ai/sdk is loaded, or the instrumentation may miss the prototype patch. The safest pattern is a top-of-file import trodo from 'trodo-node' followed immediately by trodo.init(...), as sketched below.
  • Anthropic errors (rate limit, bad model) surface as status='error' on the span with error_type='APIError' and the full message. The surrounding run’s error_count rolls up.
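
A minimal sketch of that ordering, split into a bootstrap module so it holds even under ESM import hoisting (the file names are illustrative):

```ts
// instrumentation.ts: evaluated before anything that loads @anthropic-ai/sdk
import trodo from 'trodo-node';
trodo.init({ siteId: process.env.TRODO_SITE_ID });
```

```ts
// app.ts
import './instrumentation'; // runs first: the prototype patch is already in place
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic(); // calls on this client are now instrumented
```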