Use this parameter to:
Expand query scope — Let the AI query any view in the model, not just views organized into topics, for less constrained analysis
Enable analysis validation — Have the AI automatically check its work for errors before returning results
Manage conversation context — Control how aggressively Claude prunes conversation history to balance cost optimization with context preservation
Tune model and thinking levels — Supported for Claude models only. Control the tradeoff between speed and accuracy for different task types (analysis, model building, summarization)
For curated, self-service analytics, we recommend organizing your views into topics and keeping query_all_views_and_fields disabled. Topics provide better structure, improved performance, and a more guided experience for users.
When enabled, the Omni Agent and Workbook Agent can generate queries on any view in the model, not just views that are organized within topics. This is useful when you have ad-hoc tables that users want to query without creating a dedicated topic for each one.
Accepted values:
enabled - The AI can query any view in the model
disabled - The AI can only query views organized within topics
default - Uses Omni’s default behavior, which depends on when your instance was created
If your instance was created on or before March 5, 2026, the default behavior is disabled. Instances created after this date are enabled by default.
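As a sketch, the setting might be applied like this. The setting name comes from this doc; its placement under an `ai:` block in the model file is an illustrative assumption, not confirmed syntax:

```yaml
# Hypothetical model-file snippet. The key name query_all_views_and_fields
# is from this doc; nesting it under an `ai:` block is an assumption.
ai:
  query_all_views_and_fields: enabled  # enabled | disabled | default
```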
When this setting is enabled:
The AI can use any view in the model as a starting point for queries
The AI can discover and join other views that are related to the selected view
Users don’t need to specify a topic when asking the AI to generate queries
When using the Generate a query API, this setting must be enabled and the queryAllViews parameter must be set to true to allow querying views outside of topics. The Omni Agent and Workbook Agent in the UI apply this behavior automatically when the setting is enabled.
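A request body for the Generate a query API might then look like the following. Only the queryAllViews parameter is taken from this doc; the prompt and model identifier fields are illustrative assumptions, not the API's confirmed schema:

```json
{
  "prompt": "Total sales by region for the last 30 days",
  "modelId": "your-model-id",
  "queryAllViews": true
}
```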
When enabled, the Omni Agent and Workbook Agent automatically validate their analysis before returning results to users. The AI checks for critical errors such as degenerate pivot results, incorrect date ranges, or hallucinated calculations, and self-corrects issues before the user sees them.
Accepted values:
enabled - Validation is always performed after the AI completes analytical work
disabled - Validation is never performed
default - Uses the default behavior (disabled)
When validation is enabled, the AI will:
Review its most recent queries and calculations for critical errors
Check for common issues such as:
Wrong topics selected
Incorrect date filters (e.g., period-over-period date ranges)
Degenerate calculation results (e.g., all zeros)
Hallucinated numbers not supported by the data
Malformed pivots or aggregations
Automatically fix issues if critical errors are detected
Display a validation indicator in the chat interface
Validation runs automatically at the end of each analytical turn. To prevent infinite loops, validation is limited to a maximum of 2-3 attempts per turn.
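The bounded self-correction loop described above can be sketched as follows. All function names and the error checks are hypothetical illustrations of the behavior, not Omni's implementation; the cap on attempts is what prevents an infinite loop:

```python
# Sketch of a bounded validation loop: validate the finished analysis,
# attempt a fix when critical errors are found, and cap the number of
# attempts so a persistent error cannot loop forever.
# All function names here are hypothetical illustrations.

MAX_VALIDATION_ATTEMPTS = 3

def validate(result):
    """Return a list of critical errors (empty means the result is clean)."""
    errors = []
    if all(v == 0 for v in result.get("values", [])):
        errors.append("degenerate result: all zeros")
    if not result.get("date_filter_ok", True):
        errors.append("incorrect date range")
    return errors

def fix(result, errors):
    """Stand-in for self-correction; a real agent would re-run the analysis."""
    corrected = dict(result)
    corrected["values"] = [v or 1 for v in corrected["values"]]
    corrected["date_filter_ok"] = True
    return corrected

def run_with_validation(result):
    """Validate, self-correct, and stop after a fixed number of attempts."""
    for attempt in range(MAX_VALIDATION_ATTEMPTS):
        errors = validate(result)
        if not errors:
            return result, attempt
        result = fix(result, errors)
    return result, MAX_VALIDATION_ATTEMPTS
```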
Supported for Claude models only. Controls how aggressively Claude manages conversation context by pruning conversation history when LLM token limits are reached. This setting allows you to balance cost optimization against context preservation for longer conversations.
Accepted values:
| Value | Pruning triggers at | Description |
| --- | --- | --- |
| short | 75,000 tokens | Aggressive pruning, cost-optimized |
| medium | 125,000 tokens | Balanced approach |
| long | 175,000 tokens | Default. Preserves most context. |
| max | 175,000 tokens | Reserved for future expansion to >200k context sizes |
When conversation context reaches the specified token threshold, Claude automatically prunes older messages from the conversation history to stay within limits while preserving the most recent and relevant context.
The max value currently matches long (175,000 LLM tokens) but is reserved to allow future expansion to context sizes above 200k. Choosing max ensures you’ll automatically benefit from increased context limits as they become available, while long will remain at 175,000 LLM tokens.
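The pruning behavior can be sketched as follows. The per-message token counts and oldest-first drop order are simplified assumptions for illustration, not Omni's implementation; real systems would count tokens with a tokenizer:

```python
# Sketch of threshold-based context pruning: when the running token
# count exceeds the tier's threshold, drop the oldest messages first
# until the conversation fits again. Thresholds are from this doc;
# everything else is a simplifying assumption.

THRESHOLDS = {"short": 75_000, "medium": 125_000, "long": 175_000, "max": 175_000}

def prune(messages, tier):
    """messages: list of (text, token_count) pairs, oldest first.
    Keep the most recent messages whose combined tokens fit the tier."""
    limit = THRESHOLDS[tier]
    total = sum(tokens for _, tokens in messages)
    kept = list(messages)
    while total > limit and len(kept) > 1:
        _, dropped_tokens = kept.pop(0)  # drop the oldest message first
        total -= dropped_tokens
    return kept
```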
Controls the model and thinking level for complex analytical tasks, including query generation, visualization generation, formulas, calculations, AI summary visualization, and the agentic coordinator.
The model tier to use. Accepted values: smartest, standard, fastest (provider-agnostic) or opus, sonnet, haiku (Anthropic-specific, supported for backwards compatibility).
The thinking level, which controls how much reasoning the model performs before responding. Higher levels use more tokens but can improve accuracy on complex tasks. Accepted values: none, low, medium, high.
Controls the model and thinking level for model-building tasks, including the modeling subagent, learn from conversation, and topic metadata generation.
The model tier to use. Accepted values: smartest, standard, fastest (provider-agnostic) or opus, sonnet, haiku (Anthropic-specific, supported for backwards compatibility).
The thinking level, which controls how much reasoning the model performs before responding. Higher levels use more tokens but can improve accuracy on complex tasks. Accepted values: none, low, medium, high.
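Putting the two task types together, a configuration sketch might look like this. The accepted values come from this doc; the key names and nesting are illustrative assumptions, not confirmed syntax:

```yaml
# Hypothetical settings snippet. Key names are assumptions;
# accepted values are from this document.
ai:
  analysis:
    model: smartest         # smartest | standard | fastest (or opus | sonnet | haiku)
    thinking_level: medium  # none | low | medium | high
  model_building:
    model: standard
    thinking_level: low
```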