## What auto-instruments

Install `@opentelemetry/instrumentation-langchain` (Node) or `opentelemetry-instrumentation-langchain` (Python). Every chain/agent/tool invocation inside `wrapAgent` becomes a span, with the underlying LLM calls nested below.
| Construct | Span kind | Nested children |
|---|---|---|
| `AgentExecutor.invoke` / `arun` | `agent` | `llm` (each step) + `tool` (each tool call) |
| `RunnableSequence` / LCEL pipe | `generic` | Whatever the pipe contains |
| `LLMChain` | `generic` | `llm` child |
| `@tool` / `StructuredTool` | `tool` | — |
| Retrievers (`VectorStoreRetriever`, BM25, etc.) | `retrieval` | — |
| Embeddings | `llm` | — |
## Install
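Using the package names given above (check the package registry for the exact current names):

```bash
pip install opentelemetry-instrumentation-langchain        # Python
npm install @opentelemetry/instrumentation-langchain       # Node
```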
## Minimum example — ReAct agent with tools
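A minimal Python sketch, assuming `OPENAI_API_KEY` is set and that the instrumentor exposes the standard OTel `LangchainInstrumentor().instrument()` entry point; confirm both against the package's own README.

```python
# Minimal ReAct sketch. LangchainInstrumentor is the standard OTel entry
# point for this instrumentation; the agent/tool code is stock LangChain.
from opentelemetry.instrumentation.langchain import LangchainInstrumentor
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

LangchainInstrumentor().instrument()  # every agent/tool/LLM call becomes a span

@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = hub.pull("hwchase17/react")  # the stock ReAct prompt
agent = create_react_agent(llm, [word_count], prompt)
executor = AgentExecutor(agent=agent, tools=[word_count])

# One `agent` span, with nested `llm` spans per step and a `tool` span per call.
executor.invoke({"input": "How many words are in 'spans all the way down'?"})
```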
## LCEL
LCEL chains (`prompt | llm | parser`) are auto-captured as a single `generic` span per pipe, with the LLM nested inside. If you want each stage visible, wrap the stages in explicit runnables or use `trodo.trace()`:
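A sketch of both shapes. Treating `trodo.trace()` as a context manager is an assumption here; check the tracing reference for the real signature.

```python
# Sketch: the pipe below traces as one generic span with the LLM nested.
# Treating trodo.trace() as a context manager is an assumption.
import trodo
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

chain = (
    ChatPromptTemplate.from_template("Summarize in one line: {text}")
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)

# Hypothetical manual span to make this stage visible on its own:
with trodo.trace("summarize"):
    chain.invoke({"text": "OTel spans nest chains, LLM calls, and tools."})
```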
## Retrievers
Any subclass of `BaseRetriever` emits a span with `kind='retrieval'`. The query is on `span.input`, the retrieved docs on `span.output`, and `metadata.doc_count` shows how many were returned.
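For instance, with a FAISS-backed retriever (an arbitrary choice for the sketch; any `BaseRetriever` subclass behaves the same, and OpenAI embeddings are assumed to be available):

```python
# Sketch: an in-memory FAISS retriever; assumes OPENAI_API_KEY is set.
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

store = FAISS.from_texts(
    ["spans nest under their parent", "retrievers emit retrieval spans"],
    OpenAIEmbeddings(),
)
retriever = store.as_retriever(search_kwargs={"k": 2})

# Emits one retrieval span: query -> span.input, docs -> span.output,
# metadata.doc_count == 2.
docs = retriever.invoke("what span kind do retrievers emit?")
```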
## Auto vs manual cheat-table
| Construct | Auto? | Notes |
|---|---|---|
| `AgentExecutor` | yes | Full waterfall |
| `create_react_agent` / `create_openai_tools_agent` | yes | Steps appear as child `llm` + `tool` spans |
| LCEL `\|` pipe | yes | One `generic` parent per pipe |
| `@tool` + `StructuredTool` | yes | `kind='tool'`, tool name auto-filled |
| Retrievers | yes | `kind='retrieval'` |
| Custom `Runnable` subclass | partial | Only if it emits `on_chain_start` / `on_chain_end` callbacks; wrap manually otherwise |
| LangGraph nodes | yes (Python) | Each node = one span; edges are invisible |
| Streaming (`.stream`, `.astream`) | yes | Token counts come from the final chunk |
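To illustrate the one `partial` row: a `Runnable` subclass that implements `invoke()` directly bypasses LangChain's callback machinery, so `on_chain_start` / `on_chain_end` never fire; routing the same logic through `RunnableLambda` restores the callbacks and therefore the span.

```python
# Why the custom-Runnable row says "partial": a bare invoke() override never
# triggers on_chain_start/on_chain_end, so no span is emitted for it.
from langchain_core.runnables import Runnable, RunnableLambda

class Shout(Runnable):
    def invoke(self, input, config=None, **kwargs):
        return str(input).upper()  # runs fine, but traces as nothing

# RunnableLambda routes through the callback machinery, so this one does span.
shout = RunnableLambda(lambda s: str(s).upper())
```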
## Gotchas
- LangChain’s tracer expects an active OTel context. `trodo.wrapAgent` opens one; running chains outside it silently produces no spans.
- Token counts come from the underlying LLM wrapper (e.g. `ChatOpenAI`). A custom LLM that doesn’t return `usage` in its response will record a span with no tokens; set `extractUsage` on a `trodo.llm` wrapper instead.
- Very long LCEL pipelines can produce hundreds of spans per run. The default cap is 500 spans per run; raise it via `trodo.init({ maxSpansPerRun: 2000 })` if you actually need all of them.
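A sketch of the first gotcha. `trodo.wrap_agent` is an assumed Python spelling of the `wrapAgent` wrapper, and its use as a context manager is also an assumption.

```python
# Hypothetical: wrap_agent is the assumed Python spelling of wrapAgent,
# used here as a context manager. `executor` is the agent from the
# minimal example above.
import trodo

with trodo.wrap_agent("support-bot"):       # opens an active OTel context
    executor.invoke({"input": "traced"})    # spans land under this run

executor.invoke({"input": "untraced"})      # outside the context: silently no spans
```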