Documentation Index

Fetch the complete documentation index at: https://docs.trodo.ai/docs/llms.txt

Use this file to discover all available pages before exploring further.

What auto-instruments

Install @opentelemetry/instrumentation-bedrock (Node) or opentelemetry-instrumentation-bedrock (Python). Every Bedrock client call inside wrapAgent becomes a span.
| Call | Span kind | Auto-extracted |
|---|---|---|
| BedrockRuntimeClient.Converse | llm | model (incl. inference profile), input/output tokens, stop_reason, messages |
| BedrockRuntimeClient.ConverseStream | llm | Same, accumulated across stream events |
| BedrockRuntimeClient.InvokeModel | llm | model; body is opaque, so prompt/completion extraction is provider-specific |
| BedrockRuntimeClient.InvokeModelWithResponseStream | llm | Same, accumulated |
| BedrockAgentRuntimeClient.InvokeAgent | agent | agent id, session id, trace events |
The provider field is the model family (anthropic, meta, amazon, cohere, mistral, ai21), extracted from the modelId string.
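That extraction can be sketched as a small helper. This is a sketch of the idea, not the instrumentation's actual code, and the list of region qualifiers is an assumption:

```javascript
// Derive the provider (model family) from a Bedrock modelId.
// Cross-region inference profiles prefix the id with a region qualifier,
// which must be stripped before taking the family segment.
const REGION_QUALIFIERS = new Set(['us', 'eu', 'apac', 'global']); // assumed list

function providerFromModelId(modelId) {
  const parts = modelId.split('.');
  if (REGION_QUALIFIERS.has(parts[0])) parts.shift();
  return parts[0];
}
```

With this, both 'anthropic.claude-3-5-sonnet-20241022-v2:0' and 'us.anthropic.claude-3-5-sonnet-20241022-v2:0' resolve to anthropic, which keeps aggregation consistent across regions.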

Install

npm install @aws-sdk/client-bedrock-runtime @opentelemetry/instrumentation-bedrock
pip install boto3 opentelemetry-instrumentation-bedrock

Minimum example — Converse API

import trodo from 'trodo-node';
import { BedrockRuntimeClient, ConverseCommand } from '@aws-sdk/client-bedrock-runtime';

trodo.init({ siteId: process.env.TRODO_SITE_ID });
const client = new BedrockRuntimeClient({ region: 'us-east-1' });

await trodo.wrapAgent('bedrock-converse', async (run) => {
  const r = await client.send(new ConverseCommand({
    modelId: 'anthropic.claude-3-5-sonnet-20241022-v2:0',
    messages: [{ role: 'user', content: [{ text: 'Explain TimescaleDB in one sentence.' }] }],
  }));
  run.setOutput(r.output.message.content[0].text);
});
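ConverseStream works the same way: the instrumentation accumulates text deltas and usage across stream events. A minimal sketch of that accumulation, assuming the AWS SDK's ConverseStream event shapes (contentBlockDelta, messageStop, metadata):

```javascript
// Fold a ConverseStream event stream into final text, stop reason, and usage,
// mirroring what the instrumentation does for the span.
async function accumulateConverseStream(stream) {
  let text = '';
  let stopReason;
  let usage;
  for await (const event of stream) {
    if (event.contentBlockDelta?.delta?.text) text += event.contentBlockDelta.delta.text;
    if (event.messageStop) stopReason = event.messageStop.stopReason;
    if (event.metadata?.usage) usage = event.metadata.usage;
  }
  return { text, stopReason, usage };
}
```

Pass it r.stream from a ConverseStreamCommand response, then call run.setOutput(text) as in the non-streaming example above.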

InvokeModel (per-provider body format)

InvokeModel sends a provider-specific JSON blob. The instrumentation records modelId and, when the response envelope carries them, input/output token counts; the prompt/completion fields depend on the body shape. Prefer Converse: it is uniform and fully auto-captured. If you must use InvokeModel, set input and output on the span yourself:
import { InvokeModelCommand } from '@aws-sdk/client-bedrock-runtime';

await trodo.withSpan('bedrock.invoke', async (s) => {
  const r = await client.send(new InvokeModelCommand({ modelId, body }));
  const decoded = JSON.parse(new TextDecoder().decode(r.body));
  s.setLlm({
    model: modelId,
    provider: modelId.split('.')[0], // naive split; cross-region profiles need the region prefix stripped (see Gotchas)
    inputTokens: decoded.usage?.input_tokens,
    outputTokens: decoded.usage?.output_tokens,
  });
  s.setInput(JSON.parse(body)); // body is the JSON request string you built for this provider
  s.setOutput(decoded);
  return decoded;
}, { kind: 'llm' });
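The decode step above is where provider-specific handling lives, since each family's envelope differs. A hedged sketch for two common shapes; field names follow the publicly documented Anthropic Messages and Amazon Titan text bodies, so verify against the docs for your exact model:

```javascript
// Extract completion text and token counts from a decoded InvokeModel body.
// Only two families sketched here; extend per each provider's documented shape.
function extractInvokeModelOutput(provider, decoded) {
  switch (provider) {
    case 'anthropic': // Messages API body: content[], usage.input_tokens/output_tokens
      return {
        text: decoded.content?.[0]?.text,
        inputTokens: decoded.usage?.input_tokens,
        outputTokens: decoded.usage?.output_tokens,
      };
    case 'amazon': // Titan text body: results[].outputText, inputTextTokenCount
      return {
        text: decoded.results?.[0]?.outputText,
        inputTokens: decoded.inputTextTokenCount,
        outputTokens: decoded.results?.[0]?.tokenCount,
      };
    default:
      return { text: undefined, inputTokens: undefined, outputTokens: undefined };
  }
}
```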

Bedrock Agents

InvokeAgent returns a stream of trace events describing the agent’s reasoning steps. The instrumentation unpacks these into nested llm + tool spans so the waterfall shows orchestration at the Bedrock level. Tool executions handled by action groups still appear as tool spans with their input/output.
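On the client side you still assemble the agent's answer from the completion stream yourself; the instrumentation only consumes the trace events. A sketch, assuming the SDK's { chunk: { bytes } } / { trace } event shape:

```javascript
// Collect InvokeAgent completion chunks into the final answer text,
// skipping trace events (the instrumentation turns those into spans).
async function collectAgentCompletion(completion) {
  const decoder = new TextDecoder();
  let text = '';
  for await (const event of completion) {
    // stream: true handles multi-byte characters split across chunks
    if (event.chunk?.bytes) text += decoder.decode(event.chunk.bytes, { stream: true });
  }
  return text;
}
```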

Auto vs manual cheat-table

| Operation | Auto? | Notes |
|---|---|---|
| Converse / ConverseStream | yes | Uniform across models |
| InvokeModel / InvokeModelWithResponseStream | partial | Model + tokens captured; prompt/completion need manual setInput/setOutput |
| InvokeAgent | yes | Full trace tree |
| Knowledge Bases (Retrieve, RetrieveAndGenerate) | yes | kind='retrieval' + optional llm child |
| Guardrails (ApplyGuardrail) | no | Wrap in withSpan({ kind: 'tool' }) if you want it visible |

Gotchas

  • provider is derived from the modelId prefix — if you use a cross-region inference profile (us.anthropic....), the prefix is still anthropic, which is what you want for aggregation.
  • Bedrock errors (ValidationException, ThrottlingException) appear as status='error' with the full AWS error code in error_type.
  • For boto3 Python, ensure your AWS creds are set via env or IAM role — the instrumentation doesn’t touch creds, but a 403 still produces an error span that’s useful for debugging.
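The error mapping in the second bullet can be pictured as a tiny pure function; this is a sketch of the instrumentation's behavior using this page's field names, not its actual code, and error_message is an assumed field:

```javascript
// Map an AWS SDK error onto span fields: SDK errors carry the AWS error
// code in err.name (e.g. 'ThrottlingException', 'ValidationException').
function spanFieldsFromError(err) {
  return {
    status: 'error',
    error_type: err.name,
    error_message: err.message, // assumed field; check your span schema
  };
}
```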