Use this when the provider isn’t auto-instrumented (local Ollama, vLLM, a self-hosted inference server, a partner API). You emit every span yourself.
import trodo from 'trodo-node';

trodo.init({ siteId: process.env.TRODO_SITE_ID });

const LLM_URL = 'http://ollama.internal/api/chat';

export async function answer(userId, question) {
  const { result } = await trodo.wrapAgent('raw-http-agent', async (run) => {
    run.setInput({ question });

    const body = {
      model: 'llama3.1:70b',
      messages: [{ role: 'user', content: question }],
    };

    // Two equivalent ways to record the LLM call are shown below; pick one.
    // Option A: withSpan + setLlm. Explicit control over span timing.
    const respA = await trodo.withSpan({ kind: 'llm', name: 'ollama.chat' }, async (span) => {
      span.setInput(body);
      const r = await fetch(LLM_URL, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(body),
      }).then((x) => x.json());
      span.setLlm({
        model: r.model,
        provider: 'ollama',
        inputTokens: r.prompt_eval_count,
        outputTokens: r.eval_count,
      });
      span.setOutput(r);
      return r;
    });

    // Option B: trackLlmCall. One-shot, less code.
    const r = await fetch(LLM_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body),
    }).then((x) => x.json());
    await trodo.trackLlmCall({
      model: r.model,
      provider: 'ollama',
      inputTokens: r.prompt_eval_count,
      outputTokens: r.eval_count,
      prompt: body,
      completion: r,
      metadata: { endpoint: '/api/chat' },
    });

    run.setOutput({ answer: r.message?.content });
    return r.message?.content;
  }, { distinctId: userId });

  return result;
}
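
Calling the wrapped function is then ordinary async code; the user ID and question below are placeholder values:

const reply = await answer('user-123', 'What models do we run in-house?');
console.log(reply);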

Option A vs Option B

Both land identical spans in the database. Pick by style:
|            | withSpan + setLlm                                  | trackLlmCall                                        |
| ---------- | -------------------------------------------------- | --------------------------------------------------- |
| Shape      | Wrapper around the call                            | Call-then-record                                     |
| Errors     | Thrown exceptions set status='error' automatically | You have to catch + record yourself (sketch below)   |
| Prefer for | The call is the span's unit of work                | You already have the response object                 |
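
To make the Errors row concrete, here is a minimal catch-and-record sketch for the trackLlmCall path, reusing LLM_URL and body from the example above. Recording the failure under metadata is an assumption: the snippet above only documents free-form metadata, so check the API reference for a dedicated error field on trackLlmCall.

let r;
try {
  r = await fetch(LLM_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  }).then((x) => x.json());
} catch (err) {
  // The call never completed, so there is no response to record.
  // Stashing the failure under metadata is an assumption; the API may
  // offer a dedicated error field instead.
  await trodo.trackLlmCall({
    model: body.model,
    provider: 'ollama',
    inputTokens: 0,
    outputTokens: 0,
    prompt: body,
    metadata: { endpoint: '/api/chat', status: 'error', message: String(err) },
  });
  throw err;
}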

Custom pricing

If the pricing table doesn’t cover your model — self-hosted, zero-cost, negotiated rate — pass cost explicitly:
await trodo.trackLlmCall({
  model: 'llama3.1:70b',
  provider: 'ollama-self-hosted',
  inputTokens: 1200,
  outputTokens: 320,
  cost: 0,   // self-hosted, don't bill
});
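
For a negotiated rate, the same field can carry a computed value instead of 0. A minimal sketch: the per-token rates are made-up placeholders, and we assume cost uses the same currency unit as the built-in pricing table.

// Negotiated per-token rates. Placeholder numbers, not real prices.
const INPUT_RATE = 0.0000008;   // currency units per input token
const OUTPUT_RATE = 0.0000024;  // currency units per output token

const inputTokens = 1200;
const outputTokens = 320;

await trodo.trackLlmCall({
  model: 'llama3.1:70b',
  provider: 'ollama-self-hosted',
  inputTokens,
  outputTokens,
  // Assumes cost is in the same currency unit as the pricing table.
  cost: inputTokens * INPUT_RATE + outputTokens * OUTPUT_RATE,
});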

See also