Organization Admin permissions are required to access and modify AI settings.
The Settings > AI tab controls which AI features are enabled in your organization. All of Omni’s AI features are enabled by default, but you can use this tab to selectively enable or disable features.

General

An icon next to a setting indicates that the related feature uses an LLM to process query results. Refer to the AI data privacy documentation for information on how Omni's AI features process data.
Enables AI features in the instance. Must be enabled to individually enable or disable the other AI features in this tab.
Enables the Query helper in workbooks and the standalone AI Assistant page. There are additional subsettings that can be enabled:
  • Chat - When enabled, the AI Assistant page is accessible outside of workbooks. This also enables embedding the AI Assistant into external applications.
  • Blank workbook topic picking - Enables the Ask AI option in blank workbook tabs.
  • Read result data - Allows the AI to read query results. Used to respond to summarization prompts, detect anomalies, and highlight insights in the chat interface. Must be enabled to use the Dashboard Assistant.
  • File uploads - Allows users to upload files and images in chat conversations for analysis and context.
Controls whether users can authorize external AI tools (like Claude Desktop, ChatGPT, or Cursor) to connect to Omni via the MCP Server. When disabled:
  • New OAuth authorization requests return a 403 error
  • Existing MCP grants are blocked from making tool calls
This setting defaults to enabled. To configure Personal Access Tokens, navigate to Settings > API access.
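The gating behavior described above can be sketched as a simple state check. This is a minimal illustration, not Omni's actual implementation, and the function names are hypothetical:

```python
# Minimal sketch of the MCP setting's gating behavior.
# Not Omni's actual code; names are hypothetical illustrations.

def authorize_mcp_oauth(mcp_enabled: bool) -> int:
    """Return the HTTP status for a new OAuth authorization request."""
    # When the setting is disabled, new authorization requests get a 403.
    return 200 if mcp_enabled else 403

def can_make_tool_call(mcp_enabled: bool, has_existing_grant: bool) -> bool:
    """Existing grants are also blocked from tool calls while disabled."""
    return mcp_enabled and has_existing_grant

print(authorize_mcp_oauth(mcp_enabled=False))  # 403
```

The key point is that disabling the setting affects both new authorizations and grants that were approved while it was enabled.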

Branding

The AI > Branding tab is used to customize the appearance of the AI chat interface, including the Query helper, AI Assistant, and Dashboard Assistant. These settings affect every AI chat experience, including assistants embedded in external applications. To see how your changes will look, click Save.
  • The name of the AI assistant, used in chat conversations.
  • A publicly accessible image, used as the AI assistant's icon. A square PNG (128x128 pixels) will produce the best results.
  • An optional greeting, displayed at the beginning of a chat session.
  • Optional additional text, displayed at the beginning of a chat session.
  • Placeholder text to display in the end user's chat box.

Alternative model providers and custom models

To enable custom AI model configuration for your organization, contact Omni support. Once enabled, Organization Admin permissions are required to access and modify AI model settings.
The Settings > AI > Model tab allows you to use your own LLM API key to configure a custom AI model provider for your organization. By default, Omni uses AWS Bedrock, and you can control the default Claude model used for various AI features by adjusting the ai_settings parameter in the Omni model. If you prefer a different provider or your own API key, Omni also supports Anthropic Direct, OpenAI, Snowflake Cortex, and Grok (xAI).

Supported providers

Provider | API key required | Base URL supported | Description
AWS Bedrock | No | No | Default provider. Uses Omni's managed AWS credentials and model selection.
Anthropic Direct | Yes | No | Direct API access to Anthropic's Claude models. Allows you to select which model is used.
OpenAI | Yes | Yes | Access to GPT models. Base URL support allows integration with OpenAI-compatible APIs like Azure OpenAI and Ollama. Supports custom model identifiers for bring-your-own-model (BYOM) configurations.
Snowflake Cortex | Yes | Yes | Access to Anthropic's Claude models through Snowflake Cortex. Requires a Snowflake Programmatic Access Token (PAT) and base URL.
Grok (xAI) | Yes | No | Access to xAI's Grok models.

Configuration options

Omni has not thoroughly tested all model options. Performance may vary depending on the model selected.
  • Provider - The AI model provider to use for your organization. The provider determines which models are available and how API requests are handled.
  • Query model - The model used for complex reasoning tasks, such as generating queries and answering detailed questions. Choose a more capable model (such as Sonnet-class for Anthropic or GPT-4o for OpenAI) for best results.
  • Text model - The model used for simpler tasks, such as text completion and basic assistance. A smaller, faster model (such as Haiku-class for Anthropic or GPT-4o Mini for OpenAI) is recommended.
  • Custom model - Applicable only to OpenAI. Override the default model selection with a custom model ID. Use this option to specify a model not listed in the default options, such as a fine-tuned model or a newer release. See Using custom model identifiers with OpenAI for more information.
  • API key - Your API key for the selected provider. API keys are securely stored and not displayed after saving. Required for:
    • Anthropic Direct
    • OpenAI
    • Snowflake Cortex - Use a Snowflake Programmatic Access Token (PAT)
    • Grok
  • Base URL - The base URL for API requests. Available for:
    • OpenAI - Use this to connect to OpenAI-compatible APIs like Azure OpenAI or self-hosted Ollama instances
    • Snowflake Cortex - Provide your Snowflake account’s base URL
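To illustrate how a base URL override works with OpenAI-compatible APIs: the base URL replaces the default https://api.openai.com/v1 prefix, and the same request paths are appended to it. This is a sketch of the convention, not Omni's internals:

```python
# Sketch: how a base URL override composes with OpenAI-compatible API paths.
# The default OpenAI base is https://api.openai.com/v1; a compatible server
# (e.g. a local Ollama instance) exposes the same paths under its own base.

def chat_completions_url(base_url: str) -> str:
    """Append the standard chat-completions path to any base URL."""
    return base_url.rstrip("/") + "/chat/completions"

print(chat_completions_url("https://api.openai.com/v1"))
print(chat_completions_url("http://localhost:11434/v1"))  # typical local Ollama base
```

Ollama's OpenAI-compatible API is served under the /v1 path (http://localhost:11434/v1 by default), which is the value you would supply as the Base URL.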

Using custom model identifiers with OpenAI

Custom models default to a 400,000 token context window.
When using OpenAI as your provider, you can specify custom model identifiers to use models not listed in the default dropdowns. This is useful for:
  • Using locally-hosted models via Ollama (e.g., llama3.1:70b)
  • Using Azure OpenAI deployments with custom names
  • Using newly-released models before they’re added to the default list
  • Using fine-tuned or specialized models
To use a custom model identifier:
  1. Select OpenAI as the Provider.
  2. In the Query model or Text model dropdown, select Custom model identifier.
  3. Enter your model identifier in the Custom model field that appears (e.g., llama3.1:70b, gpt-4-turbo-2024-04-09, or your Azure deployment name).
  4. If your endpoint uses a different base URL (such as for Azure OpenAI or Ollama), configure the Base URL field.
  5. Click Save.
Your custom model identifier will be saved and automatically loaded when you return to the settings page.
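The override behavior in steps 2 and 3 amounts to a simple fallback rule, sketched below with a hypothetical helper (not Omni's actual code):

```python
# Sketch of the model-selection rule the steps above describe:
# choosing "Custom model identifier" in the dropdown makes the value
# of the Custom model field override the stock model list.
# Hypothetical helper, not Omni's actual code.

def resolve_model(dropdown_choice: str, custom_identifier: str = "") -> str:
    if dropdown_choice == "Custom model identifier":
        if not custom_identifier:
            raise ValueError("Enter a model ID in the Custom model field")
        # e.g. "llama3.1:70b" or an Azure OpenAI deployment name
        return custom_identifier
    return dropdown_choice

print(resolve_model("Custom model identifier", "llama3.1:70b"))  # llama3.1:70b
```

Any dropdown selection other than Custom model identifier is used as-is, so the Custom model field only matters when that option is chosen.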