When using OpenAI as your AI model provider, you can specify custom model identifiers to use models not listed in the default model dropdowns. This is useful for:
  • Using locally-hosted models through Ollama (e.g., llama3.1:70b)
  • Using Azure OpenAI deployments with custom names
  • Using newly-released models before they’re added to the default list
  • Using fine-tuned or specialized models
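All of these cases work because the server speaks the same OpenAI chat-completions API: the request payload keeps the same shape, and only the model string (and the base URL the request is sent to) changes. A minimal sketch of that payload; the Azure deployment name below is a hypothetical example:

```python
import json

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload. The shape is
    identical across providers; only the model identifier differs."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Same payload shape, different model strings:
ollama_req = build_chat_request("llama3.1:70b", "Hello")       # local Ollama model
azure_req = build_chat_request("my-gpt4-deployment", "Hello")  # hypothetical Azure deployment name
print(json.dumps(ollama_req, indent=2))
```

Whatever identifier you enter in Omni ends up as the `model` field of requests like these.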

Requirements

To follow the steps in this guide, you’ll need:
  • Organization Admin permissions in Omni
  • An OpenAI API key

Configuration

Custom models default to a 400,000-token context window.
1. In Omni, navigate to Settings > AI > Model.
2. Select OpenAI as the Provider.
3. In the Query model or Text model dropdown, select Custom model identifier.
4. Enter your model identifier in the Custom model field that appears (e.g., llama3.1:70b, gpt-4-turbo-2024-04-09, or your Azure deployment name).
5. If your endpoint uses a different base URL (such as for Azure OpenAI or Ollama), configure the Base URL field.
6. Enter your OpenAI API key in the API key field.
7. Click Save.
Once setup is complete, Omni will use the custom model for AI features. Ask the AI Assistant a few questions to verify the setup.
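If the assistant returns errors after saving, a malformed base URL is a common culprit. As a rough local sanity check, you can verify that the URL parses and points at an OpenAI-compatible `/v1` path; the Ollama default URL below is an assumption, so confirm it for your install:

```python
from urllib.parse import urlparse

def check_base_url(url: str) -> bool:
    """Loosely validate an OpenAI-compatible base URL: it should
    have an http(s) scheme, a host, and conventionally end in /v1."""
    parsed = urlparse(url)
    return bool(
        parsed.scheme in ("http", "https")
        and parsed.netloc
        and parsed.path.rstrip("/").endswith("/v1")
    )

print(check_base_url("http://localhost:11434/v1"))  # assumed Ollama default
print(check_base_url("localhost:11434"))            # missing scheme and /v1 path
```

This only checks the URL's shape, not that the server is actually reachable or serving the model you configured.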