## Documentation Index

Fetch the complete documentation index at: https://docs.trodo.ai/docs/llms.txt
Use this file to discover all available pages before exploring further.
## What auto-instruments
Install `opentelemetry-instrumentation-mistralai` alongside the Python `mistralai` SDK. Node support is not available from the upstream contrib project — use `trodo.llm` with the default `extractUsage` (OpenAI-shaped tokens).
| Call | Span kind | Auto-extracted |
|---|---|---|
| `client.chat.complete` | `llm` | model, tokens, messages, completion |
| `client.chat.stream` | `llm` | same, accumulated across chunks |
| `client.embeddings.create` | `llm` | model, input tokens |
| `client.fim.complete` (code completion) | `llm` | model, tokens |
## Install
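A typical install of the instrumentation package named above alongside the SDK (assuming `pip`; pin versions as your project requires):

```shell
# Instrumentation package plus the Python Mistral SDK it patches
pip install opentelemetry-instrumentation-mistralai mistralai
```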
## Minimum example
## Node.js workaround
`extractUsage` handles Mistral’s OpenAI-shaped `usage.prompt_tokens` / `usage.completion_tokens` out of the box.
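A sketch under stated assumptions: the `trodo` import path and the `trodo.llm(name, fn)` call shape are illustrative, not confirmed API — check your SDK reference. Because the Node Mistral SDK returns OpenAI-shaped `usage`, the default `extractUsage` needs no configuration:

```typescript
import { Mistral } from "@mistralai/mistralai";
import { trodo } from "trodo"; // hypothetical import path

const client = new Mistral({ apiKey: process.env.MISTRAL_API_KEY });

// Manual `llm` span: the default extractUsage reads the OpenAI-shaped
// usage.prompt_tokens / usage.completion_tokens from the return value.
const reply = await trodo.llm("mistral.chat", () =>
  client.chat.complete({
    model: "mistral-small-latest",
    messages: [{ role: "user", content: "Hello" }],
  }),
);
```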
## Auto vs manual cheat-table
| Operation | Auto? (Python) | Auto? (Node) | Notes |
|---|---|---|---|
| `chat.complete` / `chat.stream` | yes | no | Use `trodo.llm` in Node |
| `embeddings.create` | yes | no | Same |
| `fim.complete` | yes | no | Same |
| agents (beta) | no | no | Wrap the orchestrator with `wrapAgent`, tools with `withSpan` |
| `files`, `fine_tuning`, `models` | no | no | Control plane |
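For the agents (beta) row, a hedged sketch: `wrapAgent` and `withSpan` are the names from the table, but their exact signatures here are assumptions — treat this as the shape, not an API reference:

```typescript
import { trodo } from "trodo"; // hypothetical import path

// Stand-ins for your own tool implementations.
declare function searchDocs(q: string): Promise<string[]>;
declare function answerFrom(docs: string[]): string;

// Agents are not auto-instrumented in either SDK: wrap the orchestrator so
// the whole run is one parent span, and each tool call is a child span.
const runAgent = trodo.wrapAgent("support-agent", async (question: string) => {
  const docs = await trodo.withSpan("tool.search_docs", () => searchDocs(question));
  return answerFrom(docs);
});
```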
## Gotchas
- Mistral’s tool-calling uses the OpenAI-compatible schema — the same tool-use pattern shown on the OpenAI page works verbatim.
- Streaming chat closes the span when the generator is exhausted. If your client disconnects early, you’ll see a partial span with `error_type='StreamAbort'`, and the run’s `error_count` increments.