Omni’s AI features consume LLM tokens each time they process a request. The Credit Tracking dashboard, available in the Analytics section of Omni, helps you understand how credits are being used across your organization.

Requirements

Organization Admin permissions are required to access the Credit Tracking dashboard.

What are credits?

Credits are the unit of measurement for AI usage. Every interaction with an AI feature in Omni — such as asking a question, generating a summary, asking AI to make data model changes, or using the MCP Server — consumes tokens from an LLM provider. Omni rolls those up into credits. Credit usage varies depending on several factors:
  • Task complexity — More complex questions, multi-step analysis, or larger datasets use more credits than simple lookups. Each message in a session carries prior context, so long-running sessions can compound credit usage.
  • Data model and context size — Omni sends context from the data model to the LLM to improve answer accuracy (field descriptions, ai_context, synonyms, etc.). Larger data models send more context per request and therefore use more credits.
  • LLM model — More capable models (e.g., Sonnet-class) consume more credits per request than lighter models (e.g., Haiku-class). See AI model settings to configure tiers.
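To make these factors concrete, here is a rough, hypothetical cost model. Omni's actual credit formula is not documented here; the function name, multipliers, and tokens-per-credit ratio below are illustrative assumptions only, showing how the three factors above could combine:

```python
# Hypothetical illustration only: these multipliers and the
# tokens-per-credit ratio are invented for demonstration and are
# not Omni's actual formula.

MODEL_MULTIPLIER = {"haiku-class": 1.0, "sonnet-class": 5.0}
TOKENS_PER_CREDIT = 1000  # assumed conversion for this sketch

def estimate_credits(prompt_tokens: int, context_tokens: int,
                     prior_turn_tokens: int, model: str) -> float:
    """Estimate credits for one request: every token sent (the prompt,
    data-model context, and prior conversation turns) counts, scaled
    by the model tier's cost multiplier."""
    total_tokens = prompt_tokens + context_tokens + prior_turn_tokens
    return total_tokens * MODEL_MULTIPLIER[model] / TOKENS_PER_CREDIT

# A long-running session compounds: each new message re-sends the
# earlier turns as context, so the same question costs more later on.
first_turn = estimate_credits(200, 3000, 0, "sonnet-class")
tenth_turn = estimate_credits(200, 3000, 15000, "sonnet-class")
```

Under these made-up numbers, the identical question costs several times more on the tenth turn than on the first, purely because of accumulated conversation context.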

Using the Credit Tracking dashboard

To access the Credit Tracking dashboard, click Analytics in the left navigation of the Omni app and select the Credit Tracking dashboard. The dashboard displays credit consumption over time and can break down usage in various ways, such as by feature and user.

Optimizing credit usage

  • Adjust the LLM model tier. See AI model settings.
  • Trim unused fields, descriptions, or ai_context from the data model. Reducing context may lower per-call credit counts, but can increase the number of turns needed to get a good answer — aim to optimize overall efficiency, not just per-call size.
  • Set up alerts to be notified when nearing your credit limit. See Usage analytics for more information.
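As a minimal sketch of the kind of threshold check a credit alert might perform, assume you can pull month-to-date credit totals (for example, from the Credit Tracking dashboard); the function name, threshold, and data shape below are placeholders, not an Omni API:

```python
# Sketch of a credit-limit alert check. Assumes credit usage is
# available as a list of per-day totals; the 80% threshold is an
# arbitrary example.

def check_credit_alert(daily_credits: list[float], monthly_limit: float,
                       threshold: float = 0.8) -> bool:
    """Return True when month-to-date usage crosses the alert threshold."""
    used = sum(daily_credits)
    return used >= monthly_limit * threshold

if check_credit_alert([1200.0, 950.0, 2100.0], monthly_limit=5000.0):
    print("Warning: approaching monthly credit limit")
```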

FAQ

Why does a single prompt appear as multiple rows in the tracking data?

Omni’s AI is an agentic system. For a given prompt or request, it may fire off multiple tool calls to get the answer or complete the task. For example, asking “show me revenue by region” might involve the AI searching the data model for the right fields, building the query, creating a visualization, and summarizing results — each as a separate step. Each of these steps is logged individually in the tracking data, which is why a single user prompt can appear as several rows.
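To see how those per-step rows relate to a single prompt, here is a small sketch that rolls tool-call rows up into one total per user prompt. The field names (prompt_id, step, credits) are illustrative assumptions, not Omni's actual tracking schema:

```python
# Sketch: rolling up per-tool-call rows into one total per user prompt.
# Field names are illustrative, not Omni's schema.
from collections import defaultdict

rows = [  # one agentic request logged as several tool-call steps
    {"prompt_id": "p1", "step": "search_data_model", "credits": 2.0},
    {"prompt_id": "p1", "step": "build_query",       "credits": 3.5},
    {"prompt_id": "p1", "step": "visualize",         "credits": 1.5},
    {"prompt_id": "p1", "step": "summarize",         "credits": 2.0},
]

totals = defaultdict(float)
for row in rows:
    totals[row["prompt_id"]] += row["credits"]

print(totals["p1"])  # total credits for the single prompt: 9.0
```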
What are the rows that have no user-facing prompt?

These are “tool calls” — intermediate steps where the AI determines which action to take next. They don’t have a user-facing prompt because they are part of the AI’s internal reasoning as it works through a request. See the question above for more context on why these occur.
Why does the same prompt sometimes consume a different number of credits?

Credit usage can vary between identical prompts due to differences in conversation context (earlier messages in the session) or the AI choosing a different path to arrive at the answer.

Next steps