- Expand query scope — Let the AI query any view in the model, not just views organized within topics, for more unconstrained analysis
- Enable analysis validation — Have the AI automatically check its work for errors before returning results
- Manage conversation context — Control how aggressively Claude prunes conversation history to balance cost optimization with context preservation
- Tune model and thinking levels — Supported for Claude models only. Control the tradeoff between speed and accuracy for different task types (analysis, model building, summarization)
For curated, self-service analytics, we recommend organizing your views into topics and keeping `query_all_views_and_fields` disabled. Topics provide better structure, improved performance, and a more guided experience for users.

Syntax
```yaml
ai_settings:
  query_all_views_and_fields: <enabled | disabled | default>
  validate_analysis: <enabled | disabled | default>
  conversation_prune_length: <short | medium | long | max>
  analyze_configuration:
    model: <smartest | standard | fastest | opus | sonnet | haiku>
    thinking: <none | low | medium | high>
  build_configuration:
    model: <smartest | standard | fastest | opus | sonnet | haiku>
    thinking: <none | low | medium | high>
  simple_summarize_configuration:
    model: <smartest | standard | fastest | opus | sonnet | haiku>
    thinking: <none | low | medium | high>
```
Properties
An object that contains various AI settings.
When enabled, the Omni Agent and Workbook Agent can generate queries on any view in the model, not just views that are organized within topics. This is useful when you have ad-hoc tables that users want to query without creating a dedicated topic for each one.

Accepted values:
- `enabled` - The AI can query any view in the model
- `disabled` - The AI can only query views organized within topics
- `default` - Uses Omni's default behavior (`enabled`)

If your instance was created on or before March 5, 2026, the default behavior is `disabled`. Instances created after this date are `enabled` by default.

When this setting is enabled:
- The AI can use any view in the model as a starting point for queries
- The AI can discover and join other views that are related to the selected view
- Users don’t need to specify a topic when asking the AI to generate queries
When using the Generate a query API, this setting must be enabled and the `queryAllViews` parameter must be set to `true` to allow querying views outside of topics. The Omni Agent and Workbook Agent in the UI apply this behavior automatically when the setting is enabled.

When enabled, the Omni Agent and Workbook Agent automatically validate their analysis before returning results to users. The AI checks for critical errors such as degenerate pivot results, incorrect date ranges, or hallucinated calculations, and self-corrects issues before the user sees them.

Accepted values:

- `enabled` - Validation is always performed after the AI completes analytical work
- `disabled` - Validation is never performed
- `default` - Uses the default behavior (`disabled`)

When validation runs, the AI will:
- Review its most recent queries and calculations for critical errors
- Check for common issues such as:
- Wrong topics selected
- Incorrect date filters (e.g., period-over-period date ranges)
- Degenerate calculation results (e.g., all zeros)
- Hallucinated numbers not supported by the data
- Malformed pivots or aggregations
- Automatically fix issues if critical errors are detected
- Display a validation indicator in the chat interface
Validation runs automatically at the end of each analytical turn. To prevent infinite loops, validation is limited to a maximum of 2-3 attempts per turn.
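Broad query scope and validation complement each other: the more freely the AI can query, the more useful an automatic self-check becomes. A sketch combining the two settings described above (both keys and values are taken from the accepted values listed in this document):

```yaml
ai_settings:
  # Allow queries against any view in the model, not just views in topics
  query_all_views_and_fields: enabled
  # Have the AI check its analysis for critical errors before returning results
  validate_analysis: enabled
```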
Supported for Claude models only. Controls how aggressively Claude manages conversation context by pruning conversation history when LLM token limits are reached. This setting allows you to balance cost optimization against context preservation for longer conversations.

Accepted values:
When conversation context reaches the specified token threshold, Claude automatically prunes older messages from the conversation history to stay within limits while preserving the most recent and relevant context.
| Value | Pruning triggers at | Description |
|---|---|---|
| `short` | 75,000 tokens | Aggressive pruning, cost-optimized |
| `medium` | 125,000 tokens | Balanced approach |
| `long` | 175,000 tokens | Default. Preserves most context. |
| `max` | 175,000 tokens | Reserved for future expansion to >200k context sizes |
The `max` value currently matches `long` (175k LLM tokens) but is reserved to allow future expansion to >200k context sizes. Using `max` ensures you'll automatically benefit from increased context limits as they become available, while `long` will remain at 175k LLM tokens.

Controls the model and thinking level for complex analytical tasks, including query generation, visualization generation, formulas, calculations, AI summary visualization, and the agentic coordinator.
The model tier to use. Accepted values: `smartest`, `standard`, `fastest` (provider-agnostic) or `opus`, `sonnet`, `haiku` (Anthropic-specific, supported for backwards compatibility).

The thinking level, which controls how much reasoning the model performs before responding. Higher levels use more tokens but can improve accuracy on complex tasks. Accepted values: `none`, `low`, `medium`, `high`.

Controls the model and thinking level for model-building tasks, including the modeling subagent, learn from conversation, and topic metadata generation.
The model tier to use. Accepted values: `smartest`, `standard`, `fastest` (provider-agnostic) or `opus`, `sonnet`, `haiku` (Anthropic-specific, supported for backwards compatibility).

The thinking level, which controls how much reasoning the model performs before responding. Accepted values: `none`, `low`, `medium`, `high`.

Controls the model and thinking level for lightweight tasks, including subtitle and description summaries, documentation search, and field metadata.
The model tier to use. Accepted values: `smartest`, `standard`, `fastest` (provider-agnostic) or `opus`, `sonnet`, `haiku` (Anthropic-specific, supported for backwards compatibility).

The thinking level, which controls how much reasoning the model performs before responding. Accepted values: `none`, `low`, `medium`, `high`.

Examples
Enable querying all views
```yaml
ai_settings:
  query_all_views_and_fields: enabled
```
Restrict AI to topic views only
```yaml
ai_settings:
  query_all_views_and_fields: disabled
```
Enable AI analysis validation
```yaml
ai_settings:
  validate_analysis: enabled
```
Disable AI analysis validation
```yaml
ai_settings:
  validate_analysis: disabled
```
Override all model categories (Anthropic-specific)
```yaml
ai_settings:
  analyze_configuration:
    model: opus
    thinking: medium
  build_configuration:
    thinking: high
  simple_summarize_configuration:
    thinking: low
```
Override model categories (provider-agnostic)
```yaml
ai_settings:
  analyze_configuration:
    model: smartest
    thinking: medium
  build_configuration:
    model: standard
    thinking: high
  simple_summarize_configuration:
    model: fastest
    thinking: low
```
Claude - Optimize for cost with aggressive context pruning
```yaml
ai_settings:
  conversation_prune_length: short
```
Claude - Balance cost and context preservation
```yaml
ai_settings:
  conversation_prune_length: medium
```
Claude - Preserve maximum context
```yaml
ai_settings:
  conversation_prune_length: max
```
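The examples above each set one option in isolation. For reference, here is one possible end-to-end configuration combining them; it is illustrative only, and every key and value is drawn from the accepted values documented above:

```yaml
ai_settings:
  # Query scope and self-checking
  query_all_views_and_fields: enabled
  validate_analysis: enabled
  # Claude only: prune conversation history at the 175k-token threshold
  conversation_prune_length: long
  # Favor accuracy for complex analytical tasks
  analyze_configuration:
    model: smartest
    thinking: high
  # Balanced settings for model-building tasks
  build_configuration:
    model: standard
    thinking: medium
  # Favor speed for lightweight summarization tasks
  simple_summarize_configuration:
    model: fastest
    thinking: low
```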