Requirements
Organization Admin permissions are required to access the Token Tracking dashboard.
What are tokens?
Tokens are the unit of measurement for AI model usage. Every interaction with an AI feature in Omni — such as asking a question, generating a summary, asking AI to make data model changes, or using the MCP Server — consumes tokens. Token usage varies depending on several factors:
- Task complexity — More complex questions, multi-step analysis, or larger datasets use more tokens than simple lookups. Each message in a session carries prior context, so long-running sessions can compound usage.
- Data model and context size — Omni sends context from the data model to the LLM to improve answer accuracy (field descriptions, ai_context, synonyms, etc.). Larger data models use more tokens.
- LLM model — More capable models (e.g., Sonnet-class) consume more tokens per request than lighter models (e.g., Haiku-class). See AI model settings to configure tiers.
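The compounding effect of session context described above can be sketched with some simple arithmetic. This is an illustrative model only, not Omni's actual token accounting; the function name and the per-message token counts are assumptions made up for the example:

```python
def session_token_totals(message_tokens):
    """Return the approximate tokens consumed by each request in a session,
    assuming every request resends all prior messages as context."""
    totals = []
    context = 0
    for tokens in message_tokens:
        request_cost = context + tokens  # prior context + the new message
        totals.append(request_cost)
        context += tokens  # the new message becomes part of future context
    return totals

# Five messages of roughly equal size: later requests cost far more than
# the first, even though each new message is the same length.
print(session_token_totals([200, 200, 200, 200, 200]))
# → [200, 400, 600, 800, 1000]
```

This is why a long-running session can consume many more tokens than the same questions asked in separate, short sessions.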
Using the Token Tracking dashboard
To access the Token Tracking dashboard, click Analytics in the left navigation of the Omni app and select the Token Tracking dashboard. The dashboard displays token consumption over time and can break down usage in various ways, such as by feature and user.
Seeing a lot of blank prompts with token usage? These are "tool calls": the AI determining which tool to call next to correctly answer the question.
Optimizing token usage
- Adjust the LLM model tier. See AI model settings.
- Trim unused fields, descriptions, or ai_context from the data model. Reducing context may lower per-call token counts, but can increase the number of turns needed to get a good answer. The goal is optimizing overall token efficiency.
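As a concrete illustration of the trimming tip above, here is a hypothetical sketch of field-level model metadata. It is loosely modeled on Omni's YAML-style modeling language, but the field names, values, and exact parameter placement are assumptions for illustration, not taken from a real model:

```yaml
# Hypothetical example: keep descriptions and ai_context short, and hide
# or remove fields the AI never needs, so less context is sent per call.
dimensions:
  order_status:
    description: Current fulfillment state of the order
    ai_context: Use for filtering; values are pending, shipped, delivered
  legacy_internal_code:
    hidden: true   # unused field, excluded rather than described at length
```

The trade-off noted above still applies: a leaner model sends fewer tokens per call, but stripping too much context may force the AI to take extra turns to find the right field.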
Next steps
- AI settings — Configure AI features and model providers
- AI data privacy — Learn how Omni’s AI features process data
- Usage analytics — Explore other pre-built analytics dashboards
- Deliveries and alerts — Set up alerts to be notified about token usage