- Using locally-hosted models through Ollama (e.g., llama3.1:70b)
- Using Azure OpenAI deployments with custom names
- Using newly-released models before they’re added to the default list
- Using fine-tuned or specialized models
Requirements
To follow the steps in this guide, you’ll need:
- Organization Admin permissions in Omni
- An OpenAI API key
Configuration
Custom models default to a 400,000 token context window.
Enter your model identifier in the Custom model field that appears (e.g., llama3.1:70b, gpt-4-turbo-2024-04-09, or your Azure deployment name). If your endpoint uses a different base URL (such as for Azure OpenAI or Ollama), configure the Base URL field.
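To see how the model identifier and base URL fit together, here is a minimal sketch of the request shape an OpenAI-compatible endpoint expects. The base URL (`http://localhost:11434/v1`, Ollama's default) and helper name are illustrative assumptions, not values Omni requires:

```python
# Hypothetical sketch: how a model identifier and base URL combine into an
# OpenAI-compatible chat completion request. The localhost URL below is
# Ollama's default OpenAI-compatible endpoint; substitute your own.

def build_chat_request(base_url: str, model: str, prompt: str) -> tuple[str, dict]:
    """Return the full endpoint URL and JSON body for a chat completion call."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = {
        "model": model,  # custom identifier, e.g. "llama3.1:70b" or an Azure deployment name
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, body

url, body = build_chat_request("http://localhost:11434/v1", "llama3.1:70b", "Hello")
print(url)   # http://localhost:11434/v1/chat/completions
```

For Azure OpenAI, the base URL would instead point at your Azure resource, and the model value is your deployment name rather than an upstream model ID.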