To enable custom AI model configuration for your organization, contact Omni support. Once enabled, Organization Admin permissions are required to access and modify AI model settings.
These settings can also be managed in code via the ai_settings parameter in the Omni model.
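As a hypothetical sketch of what such a configuration could look like (the exact key names and accepted values depend on your Omni version; the provider and tier keys below are assumptions mirroring the settings described on this page, and the model names are illustrative):

```yaml
# Hypothetical ai_settings block -- key names are illustrative,
# mirroring the provider and model-tier options described on this page.
ai_settings:
  provider: openai
  smartest_model: gpt-4o       # complex reasoning tasks
  standard_model: gpt-4o-mini  # typical query generation
  fastest_model: gpt-4o-mini   # high-volume, simple tasks
```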
By default, Omni routes AI requests through AWS Bedrock using Omni-managed credentials. If you prefer a different provider or want to use your own API key, Omni also supports the following providers: Anthropic Direct, OpenAI, Snowflake Cortex, Databricks Genie, and Grok (xAI). Provider settings are configured in Settings > AI > Model.
Supported providers
| Provider | Description |
|---|---|
| AWS Bedrock | Default provider. Uses Omni’s managed AWS credentials and model selection. |
| Anthropic Direct | Direct API access to Anthropic’s Claude models. Allows you to select which model is used. |
| OpenAI | Access to GPT models. Base URL support allows integration with OpenAI-compatible APIs like Azure OpenAI and Ollama. Supports custom model identifiers for bring-your-own-model (BYOM) configurations. |
| Snowflake Cortex | Access to Anthropic’s Claude models through Snowflake Cortex. |
| Databricks Genie | Access to models via Databricks serving endpoints. See Databricks Genie setup for details. |
| Grok (xAI) | Access to xAI’s Grok models. |
Configuration options
Omni uses a three-tier model system to balance performance and cost across different AI features:

- Smartest model - The most capable model, used for the most complex reasoning tasks requiring advanced capabilities.
- Standard model - A balanced model for typical query generation and detailed questions.
- Fastest model - An optimized model for simpler, high-volume tasks like text completion and basic assistance.
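To make the tier system concrete, here is a minimal sketch (not Omni’s internal implementation; the task names and routing table are hypothetical) of how workloads might be mapped to the three tiers:

```python
from enum import Enum


class ModelTier(Enum):
    SMARTEST = "smartest"  # complex reasoning tasks
    STANDARD = "standard"  # typical query generation
    FASTEST = "fastest"    # text completion, basic assistance


# Illustrative routing table -- the task names are hypothetical,
# chosen to mirror the examples given for each tier above.
TASK_TIERS = {
    "complex_reasoning": ModelTier.SMARTEST,
    "query_generation": ModelTier.STANDARD,
    "text_completion": ModelTier.FASTEST,
}


def tier_for_task(task: str) -> ModelTier:
    """Return the tier for a task, defaulting to the balanced Standard tier."""
    return TASK_TIERS.get(task, ModelTier.STANDARD)
```

The key design idea is that simple, high-volume tasks never pay for the most capable (and most expensive) model.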
Available settings
- Provider - The AI model provider to use for your organization. The provider determines which models are available and how API requests are handled.
- Smartest model - The most capable model in your provider’s lineup, used for complex reasoning tasks. Not available for AWS Bedrock or Databricks Genie, which use region-based model selection.
- Standard model - The model used for typical query generation and analytical tasks. Choose a balanced model for good performance across use cases.
- Fastest model - The model used for simpler, high-volume tasks like text completion and basic assistance. A smaller, faster model is recommended to optimize performance and cost.
- Custom model - Applicable only to OpenAI. Override the default model selection with a custom model ID. See Custom model identifiers with OpenAI for more information.
- API key - Your API key for the selected provider. API keys are securely stored and not displayed after saving. Required for:
  - Anthropic Direct
  - OpenAI - Applicable to both built-in models and custom models
  - Snowflake Cortex - Use a Snowflake Programmatic Access Token (PAT)
  - Databricks Genie - Use a Databricks Personal Access Token (PAT)
  - Grok
- Base URL - The base URL for API requests. Available for:
  - OpenAI - Use this to connect to OpenAI-compatible APIs like Azure OpenAI or self-hosted Ollama instances
  - Snowflake Cortex - Provide your Snowflake account’s base URL
  - Databricks Genie - Provide your Databricks workspace URL (e.g., https://your-workspace.cloud.databricks.com)
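To illustrate what the base URL controls, here is a small sketch (not part of Omni; the helper function is hypothetical) showing how an OpenAI-compatible chat completions endpoint is typically derived from a configured base URL, such as an Azure OpenAI deployment or a self-hosted Ollama instance:

```python
from urllib.parse import urljoin


def chat_completions_url(base_url: str) -> str:
    """Join a provider base URL with the OpenAI-compatible chat
    completions path, tolerating a missing trailing slash."""
    if not base_url.endswith("/"):
        base_url += "/"
    return urljoin(base_url, "v1/chat/completions")
```

For example, a self-hosted Ollama base URL of http://localhost:11434 would yield http://localhost:11434/v1/chat/completions, which is the path Ollama exposes for its OpenAI-compatible API.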