Documentation Index

Fetch the complete documentation index at: https://docs.trodo.ai/docs/llms.txt

Use this file to discover all available pages before exploring further.

72 tools in total. All are read-only unless otherwise noted, and all are scoped to the team bound to your token. Clients discover them automatically via tools/list — this page is for human reference.

Catalog & discovery (call these first)

These tools return the names of events, properties, agents, and spans that exist for your team. Always call them before passing names to analytical tools — guessing names returns empty results.
| Tool | Scope | Returns |
| --- | --- | --- |
| `list_event_names` | `mcp:events` | Distinct `event_name` values tracked by the team. |
| `list_event_properties` | `mcp:events` | Property names available on a given event. |
| `list_property_values` | `mcp:events` | Distinct values for one event × property (confirm exact spellings before filtering). |
| `list_agent_names` | `mcp:agent_runs` | Distinct `agent_name` values in agent runs. |
| `list_agent_run_dimensions` | `mcp:agent_runs` | Tool names, LLM model names, and retrieval span names used inside runs. |
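The discover-then-query order above can be sketched as JSON-RPC `tools/call` payloads. This is a hypothetical sketch: the envelope shape follows the MCP spec, but the analytical tool's argument keys (`event`, `window`) are assumptions — fetch the real parameter names from `tools/list` before calling.

```python
import json

def tool_call(req_id: int, name: str, arguments: dict) -> dict:
    """Build a JSON-RPC 2.0 tools/call request envelope."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# 1) Discover the exact event names first -- never guess them.
discover = tool_call(1, "list_event_names", {})

# 2) Only then pass a confirmed name to an analytical tool.
#    (The argument keys below are illustrative, not the real schema.)
query = tool_call(2, "run_insights_query", {"event": "page_view", "window": "30d"})

print(json.dumps(discover))
```

Guessed names return empty results rather than errors, which is why the catalog call always comes first.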

Events analytics — mcp:events

| Tool | Returns |
| --- | --- |
| `run_insights_query` | Time-series counts, uniques, sums, averages, or other aggregations for a single event over a chosen window, with optional breakdown and filters. |
| `run_funnel_query` | Multi-step conversion funnel for an ordered event sequence: per-step user counts and conversion rates. |
| `run_retention_query` | Cohort retention matrix for a starting + return event over day/week/month intervals. |
| `run_flow_query` | Sankey flow diagram around an anchor event: the N events before and M events after it, showing where users come from and go next. Returns the query DSL for use in `create_report` flow chart cards. |
| `get_segment_comparison` | Two user segments compared across one or more event metrics. |
| `get_utm_attribution_analysis` | Events aggregated by UTM source, medium, campaign, term, content. |
| `get_top_users` | Users ranked by event count, distinct sessions, or active days. |
| `get_session_analysis` | Session totals, unique users, p50/p95 duration, bounce rate, sessions-per-user histogram. |
| `get_property_distribution` | Top values + cardinality + null % for a property across events or users. |
| `compare_periods` | Same metric across two non-overlapping windows in one call. |
| `get_event_correlation` | P(B given A) within a session or per user, plus baseline + lift. |
| `get_anomaly_detection` | Daily z-scored series for a metric; flags points exceeding a threshold. |
| `get_cohort_compare` | Two predicate-defined cohorts compared on a chosen KPI. |
| `get_path_analysis` | Most common events before / after / surrounding a target event in the same session. |
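As a rough illustration of what `get_anomaly_detection` describes (a daily z-scored series with threshold flagging), here is a minimal sketch; the scoring Trodo actually uses may differ, and the data is made up:

```python
from statistics import mean, pstdev

def zscore_flags(daily_counts: list, threshold: float = 3.0) -> list:
    """Z-score each point against the series mean/std and flag outliers.

    Returns (value, z, flagged) triples -- a toy version of the tool's output.
    """
    mu = mean(daily_counts)
    sigma = pstdev(daily_counts)
    if sigma == 0:
        # A perfectly flat series has no outliers.
        return [(x, 0.0, False) for x in daily_counts]
    return [(x, (x - mu) / sigma, abs(x - mu) / sigma > threshold)
            for x in daily_counts]

series = [100, 102, 98, 101, 99, 100, 400]  # one obvious spike on the last day
flagged = [x for x, z, hit in zscore_flags(series, threshold=2.0) if hit]
print(flagged)  # [400]
```

Note the spike inflates both the mean and the standard deviation, so short series need a lower threshold than long ones.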

UX & technical health (also mcp:events)

| Tool | Returns |
| --- | --- |
| `get_rage_click_analysis` | Rage-click counts per page or element, affected user counts, top targets. |
| `get_scroll_depth_analysis` | Scroll depth per page or page-element, reach %, average max depth. |
| `get_exit_intent_analysis` | Exit-intent rates per page over a window. |
| `get_form_abandonment_analysis` | Form completion rate, drop-off rate, drop-off field per page. |
| `get_js_error_analysis` | JavaScript exception counts grouped by class + message, affected users. |
| `get_error_impact_on_funnel` | Per-step error rates joined with funnel definition, impact on conversion. |
| `get_page_performance_analysis` | Core Web Vitals (LCP, CLS, INP) + load-time percentiles per page. |
| `get_network_error_analysis` | Network/HTTP error counts grouped by status, host, path. |
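The metrics `get_form_abandonment_analysis` names (completion rate, drop-off rate, drop-off field) can be sketched from raw form sessions like this; the session shape here is illustrative, not Trodo's real payload:

```python
from collections import Counter

def form_abandonment(sessions: list) -> dict:
    """Compute completion rate, drop-off rate, and the most common
    last-touched field among abandoned sessions.

    Each session dict is an assumed shape: {"completed": bool, "last_field": str}.
    """
    total = len(sessions)
    completed = sum(1 for s in sessions if s["completed"])
    dropped = [s["last_field"] for s in sessions if not s["completed"]]
    top_field = Counter(dropped).most_common(1)[0][0] if dropped else None
    return {
        "completion_rate": completed / total if total else 0.0,
        "drop_off_rate": 1 - completed / total if total else 0.0,
        "top_drop_off_field": top_field,
    }

stats = form_abandonment([
    {"completed": True,  "last_field": "submit"},
    {"completed": False, "last_field": "email"},
    {"completed": False, "last_field": "email"},
    {"completed": False, "last_field": "phone"},
])
print(stats)  # completion_rate 0.25, top_drop_off_field "email"
```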

Agent runs & traces — mcp:agent_runs

| Tool | Returns |
| --- | --- |
| `list_agent_runs` | Paginated list of agent runs, filterable by agent, status, time, signals, cluster. |
| `get_agent_run` | One run + full span tree (LLM/tool/agent spans), feedback, conversation siblings. |
| `search_agent_runs` | Runs whose recorded inputs/outputs semantically match a free-text query. |
| `get_failed_user_attempts` | Runs that semantically match a query AND show failure signals (errored, high rage, negative feedback). |
| `get_run_metrics` | Total runs, error rate, p50/p95 duration, total cost, total tokens. |
| `get_token_cost_breakdown` | Token usage + cost grouped by agent, model, provider, or tool. |
| `get_tool_call_analysis` | Per-tool span metrics: call count, error rate, p50/p95 duration, avg cost. |
| `get_top_failure_modes` | Most frequent error classes from agent spans. |
| `get_agent_feedback_summary` | Positive/negative counts, avg rating, sample of recent comments. |
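The group-and-sum shape that `get_token_cost_breakdown` describes can be sketched over raw LLM spans; the span fields below are assumptions for illustration, not Trodo's real schema:

```python
from collections import defaultdict

def cost_breakdown(spans: list, group_by: str = "model") -> dict:
    """Group LLM spans by a dimension (agent, model, provider, or tool)
    and sum tokens and cost per group."""
    totals = defaultdict(lambda: {"tokens": 0, "cost": 0.0})
    for span in spans:
        key = span[group_by]
        totals[key]["tokens"] += span["tokens"]
        totals[key]["cost"] += span["cost"]
    return dict(totals)

spans = [
    {"model": "gpt-x", "tokens": 1200, "cost": 0.012},
    {"model": "gpt-x", "tokens": 800,  "cost": 0.008},
    {"model": "small", "tokens": 500,  "cost": 0.001},
]
print(cost_breakdown(spans))  # gpt-x: 2000 tokens, ~0.02 cost
```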

Clusters — mcp:cluster

| Tool | Returns |
| --- | --- |
| `list_use_case_clusters` | Use-case clusters with label, run count, error rate, scores, analysis JSON. |
| `get_cluster_runs` | Member runs of one cluster, optionally grouped by conversation. |
| `get_cluster_summary` | Cluster row + top tools used by members + top error types. |

Issues (v2) — mcp:cluster

Typed multi-source issue detection. Replaces the deprecated `list_issue_clusters`.

| Tool | Returns |
| --- | --- |
| `list_issues` | All detected issues for the team: type, severity, title, affected runs/users. |
| `get_issue_details` | Full details for one issue including root-cause analysis. |
| `get_issue_timeline` | Time-series of when an issue first appeared, worsened, or resolved. |
| `get_issue_members` | Paginated list of run IDs, span IDs, or signal IDs linked to a specific issue. Use after `get_issue_details` to page through the full affected set. |
| `get_top_failing_tools` | Tools with the highest error rates and their associated issue clusters. |
| `get_ux_rage_hotspots` | Pages/elements with elevated rage-click rates linked to detected issues. |
| `set_issue_status` | Write — updates an issue's status to `acknowledged`, `resolved`, or `muted`. |
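For the one write tool in this group, a `tools/call` payload might look like the sketch below. The valid statuses come from the table above, but the argument key names are assumptions; confirm the real schema via `tools/list`:

```python
VALID_STATUSES = {"acknowledged", "resolved", "muted"}

def set_issue_status_payload(issue_id: str, status: str) -> dict:
    """Build a hypothetical tools/call request for set_issue_status.
    Argument names (issue_id, status) are illustrative, not the real schema."""
    if status not in VALID_STATUSES:
        raise ValueError(f"status must be one of {sorted(VALID_STATUSES)}")
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "set_issue_status",
            "arguments": {"issue_id": issue_id, "status": status},
        },
    }

payload = set_issue_status_payload("issue_123", "acknowledged")
```

Validating the status client-side avoids a round trip for the only destructive-ish call in the issues group.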

Evaluations — mcp:evals

| Tool | Annotations | Returns |
| --- | --- | --- |
| `list_evaluators` | — | All evaluators for the team: name, kind, enabled status, target kind, scoring config. |
| `get_evaluator` | — | Full config for one evaluator by name or ID. |
| `get_eval_results` | — | Paginated eval result rows: run/span id, score, passed, evaluator, timestamps. Filterable by evaluator, pass/fail, time range, and target kind. |
| `get_eval_results_for_run` | — | All eval results attached to a specific agent run. |
| `get_eval_results_for_span` | — | All eval results attached to a specific span. |
| `list_pending_human_evals` | — | Human eval queue items awaiting review, filterable by evaluator and status. |
| `submit_human_eval_grade` | Write | Submit a pass/fail grade and optional comment for a pending human eval item. |
| `skip_human_eval` | Write | Skip a pending human eval item without grading it (moves it out of the queue). |
| `create_evaluator` | Write | Create a new evaluator (kinds: llm_judge, python, typescript, human, composite). |
| `update_evaluator` | Write | Update any fields of an existing evaluator. Only provided fields are changed. Bumps version when rubric fields change. |
| `toggle_evaluator` | Write | Enable or disable an evaluator without bumping its version. |
| `delete_evaluator` | Write | Permanently delete an evaluator and all its results. Cannot be undone. |
| `test_evaluator` | Write | One-shot test run against a specific run or span — bypasses sampling and filters. Use to validate a rubric before going live. |
| `backfill_evaluator` | Write | Run an evaluator against historical runs/spans. Skips already-evaluated events. Works for all evaluator kinds; human kind enqueues grading tasks. |
| `get_users_with_eval_score` | Requires `mcp:evals` + `mcp:user:read_pii` | Users ranked by their average eval score for a given evaluator. |
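The per-row shape `get_eval_results` describes (a score plus a pass/fail flag) aggregates naturally into the summary numbers you would eyeball first. A minimal sketch, assuming an illustrative row shape rather than the real response:

```python
def summarize_eval_results(rows: list) -> dict:
    """Aggregate eval result rows into a pass rate and mean score.
    Each row is an assumed shape: {"score": float, "passed": bool}."""
    if not rows:
        return {"pass_rate": None, "avg_score": None, "count": 0}
    passed = sum(1 for r in rows if r["passed"])
    return {
        "pass_rate": passed / len(rows),
        "avg_score": sum(r["score"] for r in rows) / len(rows),
        "count": len(rows),
    }

rows = [
    {"score": 1.0, "passed": True},
    {"score": 0.4, "passed": False},
    {"score": 0.9, "passed": True},
]
print(summarize_eval_results(rows))  # pass_rate ~0.667, avg_score ~0.767
```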

Identity (PII) — mcp:user:read_pii

| Tool | Returns |
| --- | --- |
| `get_user_profile` | One user identity record by id/email/wallet — first/last visit, sessions, identifying fields, web3 metadata. |
| `get_user_journey` | Chronological event timeline for one user. |
| `find_users` | Up to N user records whose distinct id, user id, wallet, or custom properties contain a fuzzy substring. |
| `find_users_by_wallet` | User records whose primary wallet address matches a full or partial address. Returns `distinct_id` and wallet metadata for use with other tools. |
| `get_users_for_issue` | User identifiers and profile snippets for everyone affected by a specific issue. Combine with `get_issue_details` to investigate who is impacted. |
| `get_users_for_event` | Users who fired a specific event within an optional time window and filter set. |
| `list_groups` | Paginated list of group identities (companies, accounts, teams) with member counts and custom properties. |
| `get_group_profile` | Full profile of one group: custom properties (plan, MRR, region, etc.) and list of member user identifiers. |

Reports — mcp:events

| Tool | Annotations | Returns |
| --- | --- | --- |
| `create_report` | Write (destructiveHint: false, creates new resource) | Creates a Trodo dashboard board and returns a shareable public URL. Sections can be text cards (markdown with tables, metrics, interpretation) or live chart cards (funnel, insights, retention, flow) — chart cards store the `queryDsl` and re-execute it live when the report is viewed. Mix both types in one report. |
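Mixing the two card types might look like the arguments sketch below. The field names (`sections`, `type`, `chart`, `queryDsl`, `markdown`) are assumptions for illustration only; fetch the actual JSON Schema from `tools/list` before calling `create_report`:

```python
# Hypothetical create_report arguments mixing a text card and a chart card.
# Only "queryDsl" is named by the docs; every other key here is a guess.
report_args = {
    "title": "Signup health",
    "sections": [
        {
            "type": "text",  # static markdown: tables, metrics, interpretation
            "markdown": "## Summary\nConversion dipped after the last release.",
        },
        {
            "type": "chart",   # live card: re-executes its query on view
            "chart": "funnel",
            "queryDsl": {"steps": ["signup_started", "signup_completed"]},
        },
    ],
}
```

A chart card stores the query, not a snapshot of results, so the report stays current every time the public URL is opened.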

Combined-scope

| Tool | Required scopes |
| --- | --- |
| `get_user_agent_runs` | `mcp:agent_runs` + `mcp:user:read_pii` (joins agent runs with a user identifier) |

Automation — mcp:cluster

| Tool | Annotations | Returns |
| --- | --- | --- |
| `report_heal_branch` | Write — agent callback | Records the fresh branch the Heal agent pushed (status → `branch_ready`). The agent must never open a PR — the user opens the PR from the Trodo UI. |

Discovering inputs

Every tool’s input schema is exposed via `tools/list` — call it from your client to get the JSON Schema for every parameter. The MCP Inspector (`npx @modelcontextprotocol/inspector`) is the easiest way to browse them interactively.