What auto-instruments

Install opentelemetry-instrumentation-mistralai alongside the Python mistralai SDK. The upstream contrib project has no Node instrumentation for Mistral, so in Node you wrap calls with trodo.llm and rely on the default extractUsage (Mistral returns OpenAI-shaped token counts).
| Call | Span kind | Auto-extracted |
|---|---|---|
| client.chat.complete | llm | model, tokens, messages, completion |
| client.chat.stream | llm | Same, accumulated |
| client.embeddings.create | llm | model, input tokens |
| client.fim.complete (code completion) | llm | model, tokens |
The provider field is mistralai.

Install

pip install mistralai opentelemetry-instrumentation-mistralai
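
For the Node.js workaround below, install the two packages its example imports (npm assumed as the package manager):

npm install trodo-node @mistralai/mistralai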

Minimal example

import os

import trodo
from mistralai import Mistral

# With opentelemetry-instrumentation-mistralai installed, Mistral calls
# are traced automatically once trodo.init() runs.
trodo.init(site_id=os.environ["TRODO_SITE_ID"])
client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# chat.complete is captured as an llm span inside this run.
with trodo.wrap_agent("mistral-bot") as run:
    r = client.chat.complete(
        model="mistral-large-latest",
        messages=[{"role": "user", "content": "Summarise HNSW."}],
    )
    run.set_output(r.choices[0].message.content)
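
chat.stream is auto-instrumented too; its span stays open until the stream is drained (see Gotchas). A minimal streaming sketch, assuming the mistralai v1 SDK's event shape (event.data.choices[0].delta.content); adjust if your SDK version differs:

with trodo.wrap_agent("mistral-bot") as run:
    chunks = []
    # The llm span closes only when the generator is exhausted.
    for event in client.chat.stream(
        model="mistral-large-latest",
        messages=[{"role": "user", "content": "Summarise HNSW."}],
    ):
        chunks.append(event.data.choices[0].delta.content or "")
    run.set_output("".join(chunks))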

Node.js workaround

import trodo from 'trodo-node';
import { Mistral } from '@mistralai/mistralai';

trodo.init({ siteId: process.env.TRODO_SITE_ID });
const client = new Mistral({ apiKey: process.env.MISTRAL_API_KEY });

// trodo.llm gives Node the llm spans Python gets automatically; the
// metadata object sets the model and provider fields on each span.
const chat = trodo.llm(
  'mistral.chat',
  async (messages) => client.chat.complete({ model: 'mistral-large-latest', messages }),
  { model: 'mistral-large-latest', provider: 'mistralai' },
);

await trodo.wrapAgent('mistral-bot', async () => {
  const r = await chat([{ role: 'user', content: 'Summarise HNSW.' }]);
  return r.choices[0].message.content;
});

The default extractUsage handles Mistral’s OpenAI-shaped usage.prompt_tokens / completion_tokens out of the box.
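
The same wrapper pattern covers the other calls that are manual in Node. A sketch for embeddings (the span name is illustrative; the inputs/model arguments follow the v1 Node SDK, so verify against your version):

const embed = trodo.llm(
  'mistral.embed',
  async (inputs) => client.embeddings.create({ model: 'mistral-embed', inputs }),
  { model: 'mistral-embed', provider: 'mistralai' },
);

const res = await embed(['HNSW is a graph index.']);
// vectors are on res.data[n].embedding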

Auto vs manual cheat sheet

| Operation | Auto? (Python) | Auto? (Node) | Notes |
|---|---|---|---|
| chat.complete / chat.stream | yes | no | Use trodo.llm in Node |
| embeddings.create | yes | no | Same |
| fim.complete | yes | no | Same |
| agents (beta) | no | no | Wrap the orchestrator as wrapAgent, tools as withSpan (sketch below) |
| files, fine_tuning, models | no | no | Control plane |
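
For the agents beta, a hypothetical layout, assuming trodo.withSpan takes the same (name, async fn) pair as wrapAgent; check the trodo span reference for the exact signature. searchIndex and query are placeholders:

await trodo.wrapAgent('mistral-agent', async () => {
  // Each tool runs in its own child span under the agent run.
  const docs = await trodo.withSpan('tool.search', async () => searchIndex(query));
  const r = await chat([{ role: 'user', content: `Answer using: ${docs}` }]);
  return r.choices[0].message.content;
});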

Gotchas

  • Mistral’s tool-calling uses the OpenAI-compatible schema, so the same tool-use pattern shown on the OpenAI page works verbatim (see the sketch after this list).
  • Streaming chat closes the span when the generator is exhausted. If your client disconnects early, you’ll see a partial span with error_type='StreamAbort' and the run’s error_count increments.
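
A minimal tool declaration in that OpenAI-compatible schema (the get_weather tool and its parameters are illustrative, not part of the SDK):

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]
r = client.chat.complete(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Weather in Paris?"}],
    tools=tools,
)
# Any tool invocations arrive on r.choices[0].message.tool_calls.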