To enable custom AI model configuration for your organization, contact Omni support. Once enabled, Organization Admin permissions are required to access and modify AI model settings.
By default, Omni uses AWS Bedrock; you can control the default Claude model used for AI features by adjusting the ai_settings parameter in the Omni model. If you prefer a different provider or want to use your own API key, Omni also supports Anthropic Direct, OpenAI, Snowflake Cortex, Databricks Genie, and Grok (xAI). Provider settings are configured in Settings > AI > Model.
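As a rough illustration, an ai_settings override in the Omni model might look like the fragment below. The key name and model identifier shown are illustrative assumptions only, not Omni's documented schema; consult the Omni model file reference for the exact syntax.

```yaml
# Hypothetical sketch only: key and value are assumptions, not Omni's schema.
ai_settings:
  default_model: claude_sonnet_4
```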

Supported providers

  • AWS Bedrock - Default provider. Uses Omni’s managed AWS credentials and model selection.
  • Anthropic Direct - Direct API access to Anthropic’s Claude models. Allows you to select which model is used.
  • OpenAI - Access to GPT models. Base URL support allows integration with OpenAI-compatible APIs like Azure OpenAI and Ollama. Supports custom model identifiers for bring-your-own-model (BYOM) configurations.
  • Snowflake Cortex - Access to Anthropic’s Claude models through Snowflake Cortex.
  • Databricks Genie - Access to models via Databricks serving endpoints. See Databricks Genie setup for details.
  • Grok (xAI) - Access to xAI’s Grok models.

Configuration options

Omni has not thoroughly tested all model options. Performance may vary depending on the model selected.
Omni uses a three-tier model system to balance performance and cost across different AI features:
  • Smartest model - The most capable model, used for the most complex reasoning tasks requiring advanced capabilities.
  • Standard model - A balanced model for typical query generation and detailed questions.
  • Fastest model - An optimized model for simpler, high-volume tasks like text completion and basic assistance.
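The three-tier split above can be pictured as a simple routing table. This is an illustrative sketch only: the tier names mirror the list, but the task labels and dispatch logic are assumptions, not Omni's internals.

```python
# Illustrative sketch of three-tier model routing; the task labels and
# dispatch logic are assumptions, not Omni's actual implementation.
TIER_BY_TASK = {
    "complex_reasoning": "smartest",  # advanced, multi-step reasoning
    "query_generation": "standard",   # typical analytical questions
    "text_completion": "fastest",     # simple, high-volume assistance
}

def pick_tier(task: str) -> str:
    """Route a task type to a model tier, defaulting to the balanced tier."""
    return TIER_BY_TASK.get(task, "standard")

print(pick_tier("text_completion"))  # fastest
```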

Available settings

  • Provider - The AI model provider to use for your organization. The provider determines which models are available and how API requests are handled.
  • Smartest model - The most capable model in your provider’s lineup, used for complex reasoning tasks. Not available for AWS Bedrock or Databricks Genie, which use region-based model selection.
  • Standard model - The model used for typical query generation and analytical tasks. Choose a balanced model for good performance across use cases.
  • Fastest model - The model used for simpler, high-volume tasks like text completion and basic assistance. A smaller, faster model is recommended to optimize performance and cost.
  • Custom model - Applicable only to OpenAI. Override the default model selection with a custom model ID. See Custom model identifiers with OpenAI for more information.
  • API key - Your API key for the selected provider. API keys are securely stored and not displayed after saving. Required for:
    • Anthropic Direct
    • OpenAI - Applicable to built-in models and custom models
    • Snowflake Cortex - Use a Snowflake Programmatic Access Token (PAT)
    • Databricks Genie - Use a Databricks Personal Access Token (PAT)
    • Grok
  • Base URL - The base URL for API requests. Available for:
    • OpenAI - Use this to connect to OpenAI-compatible APIs like Azure OpenAI or self-hosted Ollama instances
    • Snowflake Cortex - Provide your Snowflake account’s base URL
    • Databricks Genie - Provide your Databricks workspace URL (e.g., https://your-workspace.cloud.databricks.com)
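To see why a Base URL override is enough to target any OpenAI-compatible server, note that clients append fixed paths such as /chat/completions to whatever base URL is configured. The sketch below (using only the Python standard library) shows the resulting request URLs; the local Ollama address is a hypothetical example.

```python
from urllib.parse import urljoin

def chat_completions_url(base_url: str) -> str:
    """Build the request URL an OpenAI-compatible client would call.

    The client appends a fixed path (here, chat/completions) to the
    configured base URL, so swapping the base URL retargets every request.
    """
    # Ensure a trailing slash so urljoin appends rather than replaces.
    return urljoin(base_url.rstrip("/") + "/", "chat/completions")

# Default OpenAI endpoint
print(chat_completions_url("https://api.openai.com/v1"))
# A hypothetical self-hosted Ollama instance exposing the same API
print(chat_completions_url("http://localhost:11434/v1"))
```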