What gets auto-instrumented

Install @opentelemetry/instrumentation-langchain (Node) or opentelemetry-instrumentation-langchain (Python). Every chain, agent, and tool invocation inside wrapAgent (wrap_agent in Python) becomes a span, with the underlying LLM calls nested beneath it.
Construct                                       Span kind   Nested children
AgentExecutor.invoke / arun                     agent       llm (each step) + tool (each tool call)
RunnableSequence / LCEL pipe                    generic     Whatever the pipe contains
LLMChain                                        generic     llm child
@tool / StructuredTool                          tool
Retrievers (VectorStoreRetriever, BM25, etc.)   retrieval
Embeddings                                      llm
LangChain’s callback manager is what produces the OTel events; the Trodo processor maps them to the schema.

Install

npm install langchain @langchain/core @opentelemetry/instrumentation-langchain
pip install langchain langchain-openai opentelemetry-instrumentation-langchain
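
If you manage the OpenTelemetry SDK yourself rather than letting trodo.init wire things up, the Python package follows the standard OTel instrumentor convention. A minimal sketch, assuming LangchainInstrumentor is the entry point your installed version exposes:

from opentelemetry.instrumentation.langchain import LangchainInstrumentor

# Hooks LangChain's callback manager so chain/agent/tool events become OTel spans.
LangchainInstrumentor().instrument()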

Minimal example: a ReAct agent with tools

import os

import trodo
from langchain.agents import create_react_agent, AgentExecutor
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain import hub

trodo.init(site_id=os.environ["TRODO_SITE_ID"])

@tool
def get_weather(city: str) -> str:
    """Return the current weather in a city."""
    return f"Sunny in {city}, 72F"

@tool
def search(q: str) -> str:
    """Search the web."""
    return f"Top result for {q}: ..."

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = hub.pull("hwchase17/react")
agent = create_react_agent(llm, [get_weather, search], prompt)
executor = AgentExecutor(agent=agent, tools=[get_weather, search])

with trodo.wrap_agent("langchain-react") as run:
    result = executor.invoke({"input": "What's the weather in SF, then search for coffee shops there?"})
    run.set_output(result["output"])
Resulting tree:
run (wrapAgent)
 └─ agent   AgentExecutor.invoke
      ├─ llm    ChatOpenAI (step 1: pick a tool)
      ├─ tool   get_weather
      ├─ llm    ChatOpenAI (step 2: pick a tool)
      ├─ tool   search
      └─ llm    ChatOpenAI (step 3: final answer)

LCEL

LCEL chains (prompt | llm | parser) are auto-captured as a single generic span per pipe, with the LLM call nested inside. If you want each stage visible, wrap stages in explicit runnables or use trodo.trace():
const prepare = trodo.trace('prepare', async (x) => buildContext(x));
const answer = trodo.llm('answer', async (ctx) => llm.invoke(ctx), { model: 'gpt-4o-mini' });
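
In Python, the same per-stage visibility comes from LangChain itself: anything wrapped in a RunnableLambda emits its own on_chain_start / on_chain_end callbacks, so each named stage gets its own span. A sketch, where build_context is a made-up helper and prompt / llm are whatever stages your pipe already has:

from langchain_core.runnables import RunnableLambda

def build_context(x: dict) -> dict:
    # Stand-in for your own pre-processing step.
    return {**x, "context": f"notes about {x['input']}"}

# Naming the stage makes the extra span easy to spot in the waterfall.
prepare = RunnableLambda(build_context).with_config(run_name="prepare")
chain = prepare | prompt | llm  # prompt and llm as in your existing pipe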

Retrievers

Any subclass of BaseRetriever emits a span with kind='retrieval'. The query is on span.input, the retrieved docs on span.output, and metadata.doc_count shows how many were returned.
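
For example, even a toy subclass is captured automatically; a sketch (StaticRetriever is made up for illustration, not part of LangChain):

import trodo
from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever

class StaticRetriever(BaseRetriever):
    """Returns canned documents, yet is instrumented like any BaseRetriever."""

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> list[Document]:
        return [Document(page_content=f"Notes about {query}")]

with trodo.wrap_agent("retrieval-demo"):
    # One span: kind='retrieval', input=query, output=docs, metadata.doc_count=1
    docs = StaticRetriever().invoke("coffee shops in SF")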

Auto vs manual cheat-table

Construct                                        Auto?         Notes
AgentExecutor                                    yes           Full waterfall
create_react_agent / create_openai_tools_agent   yes           Steps appear as child llm + tool
LCEL | pipe                                      yes           One generic parent per pipe
@tool + StructuredTool                           yes           kind='tool', tool name auto-filled
Retrievers                                       yes           kind='retrieval'
Custom Runnable subclass                         partial       Only if it emits on_chain_start / on_chain_end callbacks; wrap manually otherwise
LangGraph nodes                                  yes (Python)  Each node = one span; edges are invisible
Streaming (.stream, .astream)                    yes           Tokens from the final chunk
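
A quick illustration of the streaming row: usage is read from the final chunk, so streaming loses nothing as long as the provider reports usage there. With ChatOpenAI, stream_usage=True makes the last chunk carry token counts:

import trodo
from langchain_openai import ChatOpenAI

# stream_usage=True attaches token usage to the final streamed chunk.
llm = ChatOpenAI(model="gpt-4o-mini", stream_usage=True)

with trodo.wrap_agent("streaming-demo"):
    for chunk in llm.stream("One-line summary of OpenTelemetry"):
        print(chunk.content, end="", flush=True)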

Gotchas

  • LangChain’s tracer expects an active OTel context. trodo.wrapAgent opens one; running chains outside it silently produces no spans.
  • Token counts come from the underlying LLM wrapper (e.g. ChatOpenAI); a custom LLM that doesn’t return usage in its response will record a span with no tokens. Set extractUsage on a trodo.llm wrapper instead (see the sketch after this list).
  • Very long LCEL pipelines can produce hundreds of spans per run. The default cap is 500 spans per run; raise it via trodo.init({ maxSpansPerRun: 2000 }) if you actually need all of them.
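
A minimal sketch of that extractUsage escape hatch, assuming the Python SDK mirrors the Node option as an extract_usage keyword (hypothetical name, as is call_my_llm):

import trodo

def call_my_llm(prompt: str) -> dict:
    # Stand-in for a custom LLM client whose raw response carries usage
    # in its own field names rather than an OpenAI-style usage block.
    return {"text": "...", "usage": {"prompt": 12, "completion": 34}}

def my_extract_usage(response: dict) -> dict:
    # Map the custom shape onto the token fields recorded on the span.
    return {
        "input_tokens": response["usage"]["prompt"],
        "output_tokens": response["usage"]["completion"],
    }

traced_llm = trodo.llm("custom-llm", call_my_llm, extract_usage=my_extract_usage)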