
Streaming works identically to non-streaming: wrapAgent wraps the call, the auto-instrumenter captures the final usage from the stream, and tokens roll up into the run.
import trodo from 'trodo-node';
import Anthropic from '@anthropic-ai/sdk';

trodo.init({ siteId: process.env.TRODO_SITE_ID });
const anthropic = new Anthropic();

export async function summarise(userId, text) {
  const { result } = await trodo.wrapAgent(
    'summariser',
    async (run) => {
      run.setInput({ text });

      const stream = anthropic.messages.stream({
        model: 'claude-3-5-sonnet-latest',
        max_tokens: 512,
        messages: [{ role: 'user', content: `Summarise:\n\n${text}` }],
      });

      let full = '';
      for await (const event of stream) {
        if (event.type === 'content_block_delta' && event.delta.type === 'text_delta') {
          full += event.delta.text;
          process.stdout.write(event.delta.text);   // stream to your UI
        }
      }
      await stream.finalMessage();   // ensures usage is captured

      run.setOutput({ summary: full });
      return full;
    },
    { distinctId: userId },
  );

  return result;
}

Notes

  • Anthropic streaming — stream.finalMessage() is what emits the usage event Trodo reads. Call it before returning.
  • OpenAI streaming — pass stream_options: { include_usage: true } on the request. Without it the final usage chunk is suppressed and tokens stay zero.
  • Errors mid-stream — the span closes with status='error' and whatever output_tokens were seen so far. Useful for diagnosing timeout vs. hard failure.
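The OpenAI note above is easy to get wrong: with stream_options: { include_usage: true }, the stream ends with one extra chunk whose choices array is empty and whose usage field is set, so indexing choices[0] unguarded throws on the last iteration. Below is a minimal sketch of the accumulation loop; it uses hand-written mock chunks in place of a live openai.chat.completions.create({ stream: true, ... }) call, and the function names and chunk shapes are illustrative assumptions, not part of the Trodo API.

```javascript
// Accumulate text deltas from a stream of chat-completion chunks.
// With include_usage, the final chunk has an empty `choices` array and
// carries only `usage`, so guard with optional chaining.
async function collectText(stream) {
  let full = '';
  for await (const chunk of stream) {
    full += chunk.choices[0]?.delta?.content ?? '';
  }
  return full;
}

// Mock stream standing in for the real API call: two text deltas,
// then the usage-only final chunk that include_usage adds.
async function* mockStream() {
  yield { choices: [{ delta: { content: 'Hel' } }] };
  yield { choices: [{ delta: { content: 'lo' } }] };
  yield { choices: [], usage: { prompt_tokens: 12, completion_tokens: 2 } };
}

collectText(mockStream()).then((text) => {
  console.log(text); // "Hello"
});
```

The `?? ''` fallback is what keeps the loop from throwing on the usage-only chunk; drop include_usage and the guard becomes unnecessary, but then Trodo sees zero tokens.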

See also