## What auto-instruments
Install `@opentelemetry/instrumentation-cohere` (Node) or `opentelemetry-instrumentation-cohere` (Python).
| Call | Span kind | Auto-extracted |
|---|---|---|
| `client.chat` | `llm` | model, tokens (`meta.tokens`), message, response |
| `client.generate` | `llm` | model, tokens, prompt, completion |
| `client.embed` | `llm` | model, input tokens, input count |
| `client.rerank` | `retrieval` | query, doc count, top-N doc ids with scores |
| `client.classify` | `llm` | model, input tokens |
## Install
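The packages can be added with the usual package managers (package names as listed above):

```shell
# Node
npm install @opentelemetry/instrumentation-cohere

# Python
pip install opentelemetry-instrumentation-cohere
```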
## Minimum example: chat + rerank
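A minimal Python sketch, assuming the Python package above exposes the common `CohereInstrumentor` entry point and that `CO_API_KEY` is set in the environment; model names are placeholders, so adjust to your setup:

```python
import cohere
from opentelemetry.instrumentation.cohere import CohereInstrumentor

# Patch the Cohere client before first use.
CohereInstrumentor().instrument()

co = cohere.Client()  # reads CO_API_KEY from the environment

# chat -> one `llm` span with model, tokens, message, response
reply = co.chat(model="command-r", message="Summarize OpenTelemetry in one line.")

# rerank -> one `retrieval` span with query, doc count, ids + scores
docs = ["OTel is a telemetry standard.", "Cohere builds LLMs."]
ranked = co.rerank(
    model="rerank-english-v3.0",
    query="what is otel?",
    documents=docs,
    top_n=1,
)
print(reply.text, ranked.results[0].index)
```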
## Token extraction
Cohere returns usage as `response.meta.tokens.input_tokens` / `output_tokens`. The instrumentation maps these into the standard OTel `gen_ai.usage.*` attributes, which Trodo reads. No config needed.
If you’re on the new Cohere v2 client (`cohere.ClientV2`), the API shape changed, but the instrumentation covers both clients.
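The mapping itself is mechanical. A sketch of what the instrumentation does with the counters (attribute names follow the OTel GenAI semantic conventions; the `usage` dict mirrors `response.meta.tokens`):

```python
def usage_to_otel_attrs(usage: dict) -> dict:
    """Map Cohere's meta.tokens counters onto gen_ai.usage.* span attributes."""
    return {
        "gen_ai.usage.input_tokens": usage["input_tokens"],
        "gen_ai.usage.output_tokens": usage["output_tokens"],
    }

attrs = usage_to_otel_attrs({"input_tokens": 12, "output_tokens": 87})
print(attrs)  # {'gen_ai.usage.input_tokens': 12, 'gen_ai.usage.output_tokens': 87}
```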
## Auto vs manual cheat sheet
| Operation | Auto? | Notes |
|---|---|---|
| `chat` / `chatStream` | yes | — |
| `generate` / `generateStream` | yes | — |
| `embed` | yes | — |
| `rerank` | yes | `kind='retrieval'` |
| `classify` | yes | `kind='llm'` |
| `summarize` (deprecated) | partial | Works, but field labels may not match; wrap with `trodo.llm` for control |
| Datasets, fine-tuning, models list | no | Control plane — out of scope |
## Gotchas
- Cohere’s `rerank` doesn’t return the document text, only `index` + `relevance_score`. The span records the indices and scores; call `span.setOutput({...})` yourself if you want the resolved text in the detail drawer.
- `classify` doesn’t have a single “answer”; the span’s output is the full prediction list.
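Resolving the rerank indices back to text is a one-liner worth doing before attaching the output. A sketch with a hypothetical helper (`span.setOutput` is the Trodo call named above; the record shape here is an assumption):

```python
def resolve_rerank_output(documents, results):
    """Join each (index, relevance_score) rerank result with its source text."""
    return [
        {
            "index": r["index"],
            "relevance_score": r["relevance_score"],
            "document": documents[r["index"]],
        }
        for r in results
    ]

docs = ["OTel is a telemetry standard.", "Cohere builds LLMs."]
results = [{"index": 0, "relevance_score": 0.91}]
print(resolve_rerank_output(docs, results))
# then attach it: span.setOutput(resolve_rerank_output(docs, results))
```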