## What auto-instruments
Install `@opentelemetry/instrumentation-bedrock` (Node) or `opentelemetry-instrumentation-bedrock` (Python). Every Bedrock client call inside `wrapAgent` becomes a span.
| Call | Span kind | Auto-extracted |
|---|---|---|
| `BedrockRuntimeClient.Converse` | `llm` | model (incl. inference profile), input/output tokens, stop_reason, messages |
| `BedrockRuntimeClient.ConverseStream` | `llm` | Same, accumulated |
| `BedrockRuntimeClient.InvokeModel` | `llm` | model; body is opaque — prompt/completion extraction is provider-specific |
| `BedrockRuntimeClient.InvokeModelWithResponseStream` | `llm` | Same, accumulated |
| `BedrockAgentRuntimeClient.InvokeAgent` | `agent` | agent id, session id, trace events |
The provider (`anthropic`, `meta`, `amazon`, `cohere`, `mistral`, `ai21`) is derived from the `modelId` string.
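A minimal sketch of that prefix-based detection, assuming the instrumentation strips a region qualifier (`us.`, `eu.`, …) before taking the first dot-separated segment — the helper name and qualifier set are hypothetical, not the instrumentation's actual source:

```python
# Hypothetical helper illustrating prefix-based provider detection.
_REGION_QUALIFIERS = {"us", "eu", "apac", "global"}  # assumed set

def provider_from_model_id(model_id: str) -> str:
    """Derive the provider from a Bedrock modelId or inference-profile id."""
    parts = model_id.split(".")
    # Cross-region inference profiles prepend a region qualifier,
    # e.g. "us.anthropic.claude-..." -> provider is still "anthropic".
    if parts[0] in _REGION_QUALIFIERS:
        parts = parts[1:]
    return parts[0]
```

So `provider_from_model_id("us.anthropic.claude-3-5-sonnet-20240620-v1:0")` and the plain `anthropic.`-prefixed id both map to `anthropic`, which keeps aggregation consistent.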
## Install
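Using the package names above, installation is one line per runtime:

```shell
# Node.js
npm install @opentelemetry/instrumentation-bedrock

# Python
pip install opentelemetry-instrumentation-bedrock
```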
## Minimum example — Converse API
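A minimal Python sketch of the Converse path using boto3; the model id is a placeholder, and the instrumentation's import path is omitted since only its pip package name is documented:

```python
# Placeholder model id; any Converse-capable model works.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def build_converse_request(prompt: str, max_tokens: int = 256) -> dict:
    # Converse uses one uniform message schema across providers,
    # which is why the instrumentation can auto-capture it fully.
    return {
        "modelId": MODEL_ID,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

def main() -> None:
    import boto3  # deferred so the request builder stays dependency-free
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    # With the instrumentation active, this call becomes an llm span with
    # model, tokens, stop_reason, and messages auto-extracted.
    resp = client.converse(**build_converse_request("Say hello in one word."))
    print(resp["output"]["message"]["content"][0]["text"])
```

Calling `main()` requires AWS credentials and network access; the request builder itself has no dependencies.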
## InvokeModel (per-provider body format)
InvokeModel sends a provider-specific JSON blob. The instrumentation records modelId, inputTokens, outputTokens if the response envelope carries them, but the prompt/completion fields depend on the body shape. Prefer Converse — it’s uniform and fully auto-captured.
If you must use InvokeModel, set input and output on the span yourself:
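For example, with an Anthropic-messages-format body. The extraction helper is ours, and the span calls are shown as comments because the exact Python span API is not documented here (Node uses `setInput`/`setOutput`):

```python
import json

def extract_anthropic_io(request_body: bytes, response_body: bytes):
    """Pull the prompt and completion out of an Anthropic-format
    InvokeModel payload so they can be attached to the span manually."""
    req = json.loads(request_body)
    resp = json.loads(response_body)
    prompt = req["messages"][-1]["content"]
    completion = resp["content"][0]["text"]
    return prompt, completion

# Inside your instrumented call (span method names are an assumption):
# prompt, completion = extract_anthropic_io(body, resp["body"].read())
# span.set_input(prompt)
# span.set_output(completion)
```

Other providers need their own extraction, since each InvokeModel body shape differs — which is the reason Converse is preferred.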
## Bedrock Agents
InvokeAgent returns a stream of trace events describing the agent’s reasoning steps. The instrumentation unpacks these into nested llm + tool spans so the waterfall shows orchestration at the Bedrock level. Tool executions handled by action groups still appear as tool spans with their input/output.
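A sketch of consuming that stream with boto3's `bedrock-agent-runtime` client; the chunk-accumulation helper is ours, not part of the instrumentation:

```python
def collect_completion(completion_stream) -> str:
    """Accumulate text chunks from an InvokeAgent response stream.
    'trace' events are the reasoning steps the instrumentation unpacks
    into nested llm/tool spans; we only gather the final text here."""
    parts = []
    for event in completion_stream:
        if "chunk" in event:
            parts.append(event["chunk"]["bytes"].decode("utf-8"))
    return "".join(parts)

def run_agent(agent_id: str, alias_id: str, session_id: str, text: str) -> str:
    import boto3  # deferred so collect_completion stays dependency-free
    client = boto3.client("bedrock-agent-runtime")
    resp = client.invoke_agent(
        agentId=agent_id,
        agentAliasId=alias_id,
        sessionId=session_id,
        inputText=text,
        enableTrace=True,  # emit the trace events the spans are built from
    )
    return collect_completion(resp["completion"])
```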
## Auto vs manual cheat-table
| Operation | Auto? | Notes |
|---|---|---|
| Converse / ConverseStream | yes | Uniform across models |
| InvokeModel / InvokeModelWithResponseStream | partial | Model + tokens captured; prompt/completion need manual setInput/setOutput |
| InvokeAgent | yes | Full trace tree |
| Knowledge Bases (Retrieve, RetrieveAndGenerate) | yes | `kind='retrieval'` + optional llm child |
| Guardrails (ApplyGuardrail) | no | Wrap in `withSpan({ kind: 'tool' })` if you want it visible |
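A sketch of wrapping ApplyGuardrail manually in Python. The request shape follows the bedrock-runtime `apply_guardrail` API (identifier and version are placeholders); the span wrapper is a hypothetical Python analogue of the Node `withSpan({ kind: 'tool' })`:

```python
def guardrail_request(text: str, source: str = "INPUT") -> dict:
    # Request shape for the bedrock-runtime ApplyGuardrail API;
    # guardrail id/version are placeholders.
    return {
        "guardrailIdentifier": "my-guardrail-id",
        "guardrailVersion": "1",
        "source": source,  # "INPUT" or "OUTPUT"
        "content": [{"text": {"text": text}}],
    }

# Hypothetical Python analogue of the Node withSpan({ kind: 'tool' }):
# with with_span(kind="tool", name="apply_guardrail") as span:
#     client = boto3.client("bedrock-runtime")
#     result = client.apply_guardrail(**guardrail_request(user_text))
#     span.set_output(result["action"])  # "GUARDRAIL_INTERVENED" or "NONE"
```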
## Gotchas
- `provider` is derived from the `modelId` prefix — if you use a cross-region inference profile (`us.anthropic....`), the derived provider is still `anthropic`, which is what you want for aggregation.
- Bedrock errors (`ValidationException`, `ThrottlingException`) appear as `status='error'` with the full AWS error code in `error_type`.
- For boto3 (Python), ensure your AWS credentials are set via environment variables or an IAM role — the instrumentation doesn't touch credentials, but a 403 still produces an error span that's useful for debugging.