If you’re building a chat agent that serves your users, you can connect it to Omni’s AI so that each user’s queries are scoped to their Omni permissions. Omni provides two integration paths:
  • REST API (recommended) — Submit prompts, poll for status, and retrieve results using Omni’s AI Jobs API. This gives you full control over the request lifecycle and is the easiest path to integrate into most backend architectures.
  • MCP Server — Connect an MCP-compatible agent to Omni’s MCP endpoint. This is a good fit if your agent framework already supports the Model Context Protocol, though managing per-user authentication can be trickier.
Both approaches support the userId parameter, which embedded deployments must set so that queries run with the correct user’s permissions.

Prerequisites

  • An Omni API key with organization-level access
  • Embed users provisioned in your Omni instance
  • A model ID for the model your chat agent will query against
  • For MCP: the MCP server AI setting enabled for your organization

Get the embed user ID

Each embed user in Omni has a unique user ID. Your chat agent needs this ID to run queries with that user’s permissions.

The List embed users endpoint depends on the embed user already existing in Omni. By default, embed users are created when they first authenticate into an iframe, but your chat agent may need to look up a user before they’ve ever loaded an iframe. To avoid this dependency, use the 2-step SSO login flow to pre-create embed users. The generate-session endpoint upserts the embed user, creating them if they don’t already exist:
curl -X POST "https://<YOUR_OMNI_INSTANCE>/api/unstable/embed/sso/generate-session" \
  -H "Authorization: Bearer <OMNI_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "contentPath": "/my",
    "externalId": "<EXTERNAL_ID>",
    "name": "<USER_NAME>"
  }'
This ensures the embed user exists in Omni. You can then retrieve their Omni user ID using the List embed users endpoint:
curl -X GET "https://<YOUR_OMNI_INSTANCE>/api/v0/users/embed" \
  -H "Authorization: Bearer <OMNI_API_KEY>"
The response includes the id for each embed user. You’ll pass this value as the userId query parameter (REST API) or the X-MCP-User-ID header (MCP Server).
Do not hardcode a single user ID. In a production embed setup, pre-create the embed user via 2-step SSO, and then retrieve the appropriate embed user ID at the start of each session using the List embed users endpoint.
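For example, a backend helper might resolve the Omni user ID from your application’s external ID. This is a sketch only: the bare-list-vs-`records` container handling and the `externalId`/`id` field names are assumptions about the response shape, so verify them against the actual List embed users payload your instance returns.

```python
OMNI_BASE_URL = "https://<YOUR_OMNI_INSTANCE>"
HEADERS = {"Authorization": "Bearer <OMNI_API_KEY>"}

def resolve_embed_user_id(external_id, http_get=None):
    """Resolve an embed user's Omni user ID from your external ID.

    `http_get` defaults to requests.get; inject a stub for testing.
    The record field names (`id`, `externalId`) and the `records`
    container key are assumptions about the response shape.
    """
    if http_get is None:
        import requests  # deferred so the module loads without requests installed
        http_get = requests.get

    resp = http_get(f"{OMNI_BASE_URL}/api/v0/users/embed", headers=HEADERS)
    resp.raise_for_status()
    payload = resp.json()

    # The list may arrive bare or wrapped in a container key.
    users = payload if isinstance(payload, list) else payload.get("records", [])
    for user in users:
        if user.get("externalId") == external_id:
            return user["id"]
    raise LookupError(f"No embed user with externalId {external_id!r}")
```

Cache the resolved ID for the duration of the user’s chat session rather than re-listing users on every prompt.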

Option 1: REST API (recommended)

The AI Jobs API lets you submit a natural language prompt, poll for completion, and retrieve the full result, all through standard HTTP requests. This is the recommended approach because it integrates into any backend without requiring MCP client support.

How it works

  1. Create a job — Submit a prompt with modelId and the embed user’s userId
  2. Poll for status — Check the job status every 2–5 seconds until it reaches a terminal state, or configure a webhook to be notified on completion
  3. Stream results — Retrieve the full result, including any queries the AI ran and the final answer

Step 1: Create an AI job

Use the Create AI job endpoint. Pass the embed user’s ID as the userId query parameter so the job runs with that user’s permissions:
curl -X POST "https://<YOUR_OMNI_INSTANCE>/api/v1/ai/jobs?userId=<EMBED_USER_ID>" \
  -H "Authorization: Bearer <OMNI_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "modelId": "<MODEL_ID>",
    "prompt": "What were total sales last quarter?"
  }'
The userId parameter only works with Organization API keys. It cannot be used with Personal Access Tokens (PATs).
The response includes a jobId and conversationId:
{
  "jobId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "conversationId": "b2c3d4e5-f6a7-8901-bcde-f12345678901",
  "omniChatUrl": "https://myorg.omni.co/chat/b2c3d4e5-f6a7-8901-bcde-f12345678901"
}
To send follow-up messages in the same conversation, pass the conversationId in subsequent requests:
curl -X POST "https://<YOUR_OMNI_INSTANCE>/api/v1/ai/jobs?userId=<EMBED_USER_ID>" \
  -H "Authorization: Bearer <OMNI_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "modelId": "<MODEL_ID>",
    "prompt": "Break that down by region",
    "conversationId": "b2c3d4e5-f6a7-8901-bcde-f12345678901"
  }'

Optional parameters

| Parameter | Description |
| --- | --- |
| topicName | Scope query generation to a specific topic. If omitted, the AI selects the best topic automatically. |
| branchId | Query against an in-progress model branch instead of the shared model. |
| conversationId | Continue an existing conversation thread. |
| webhookUrl | Receive a POST notification when the job completes instead of polling. |
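A sketch of assembling a request with these options, assuming the optional parameters travel in the JSON body alongside modelId and prompt while userId stays a query parameter:

```python
def build_job_request(model_id, prompt, user_id, *, topic_name=None,
                      branch_id=None, conversation_id=None, webhook_url=None):
    """Assemble query params and JSON body for a Create AI job request.

    Assumes the optional parameters belong in the request body; only
    fields you actually pass are included.
    """
    params = {"userId": user_id}
    body = {"modelId": model_id, "prompt": prompt}
    optional = {
        "topicName": topic_name,
        "branchId": branch_id,
        "conversationId": conversation_id,
        "webhookUrl": webhook_url,
    }
    # Drop unset options so the request stays minimal.
    body.update({k: v for k, v in optional.items() if v is not None})
    return params, body
```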

Step 2: Poll for job status

Use the Get AI job status endpoint to check progress. Poll every 2–5 seconds until the job reaches a terminal state (COMPLETE, FAILED, or CANCELLED):
curl -X GET "https://<YOUR_OMNI_INSTANCE>/api/v1/ai/jobs/<JOB_ID>" \
  -H "Authorization: Bearer <OMNI_API_KEY>"
A completed job response includes a resultSummary:
{
  "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "state": "COMPLETE",
  "resultSummary": "Total sales last quarter were $1.2M, up 15% from the previous quarter.",
  "conversationId": "b2c3d4e5-f6a7-8901-bcde-f12345678901",
  "userId": "9e8719d9-276a-4964-9395-a493189a247c",
  "omniChatUrl": "https://myorg.omni.co/chat/b2c3d4e5-f6a7-8901-bcde-f12345678901"
}
The resultSummary is a short markdown summary. For the full result including query details and data, use the Stream AI job results endpoint in the next step.
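The polling guidance above can be implemented as a bounded loop. This sketch takes any `get_status` callable (for example, a thin wrapper around the Get AI job status request) and backs off from 2 toward 5 seconds, with an overall timeout so a stuck job can’t hang your request handler:

```python
import time

TERMINAL_STATES = {"COMPLETE", "FAILED", "CANCELLED"}

def poll_job(get_status, timeout=120.0, initial_delay=2.0, max_delay=5.0,
             sleep=time.sleep):
    """Poll until the job reaches a terminal state or the timeout passes.

    `get_status` is any callable returning the parsed job-status JSON
    (a dict with a `state` field); `sleep` is injectable for testing.
    """
    deadline = time.monotonic() + timeout
    delay = initial_delay
    while True:
        status = get_status()
        if status["state"] in TERMINAL_STATES:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"AI job did not finish within {timeout:.0f}s")
        sleep(delay)
        # Back off toward the ceiling, staying in the 2-5s guidance.
        delay = min(delay * 1.5, max_delay)
```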

Step 3: Stream the full result

Once the job is complete, use the Stream AI job results endpoint to retrieve the full response, including all queries the AI ran and the final answer:
curl -X GET "https://<YOUR_OMNI_INSTANCE>/api/v1/ai/jobs/<JOB_ID>/result" \
  -H "Authorization: Bearer <OMNI_API_KEY>"
The response includes the AI’s actions and final message:
{
  "actions": [
    {
      "type": "generate_query",
      "message": "I'll query total sales for last quarter.",
      "timestamp": "2025-01-15T10:00:10.000Z",
      "result": {
        "queryName": "Total Sales Last Quarter",
        "status": "success",
        "query": { "fields": ["orders.quarter", "orders.total_sales"] },
        "csvResult": "Quarter,Total Sales\nQ4 2024,\"1,200,000.00\"",
        "totalRowCount": 1,
        "hasResults": true,
        "csvResultWasTruncated": false
      }
    }
  ],
  "message": "Total sales last quarter were $1.2M, up 15% from the previous quarter.",
  "topic": "order_items",
  "omniChatUrl": "https://myorg.omni.co/chat/b2c3d4e5-f6a7-8901-bcde-f12345678901"
}
The message field contains the AI’s final answer in markdown, which you can render directly in your chat interface. The actions array provides the underlying query details if you want to show the data behind the answer.
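If you do want to show the data behind the answer, the csvResult strings can be parsed with the standard csv module. This sketch walks the actions array in the result payload shown above:

```python
import csv
import io

def extract_query_rows(result):
    """Pull tabular data out of a streamed AI job result.

    Walks the actions array and parses each generate_query action's
    csvResult into a header row plus data rows.
    """
    tables = []
    for action in result.get("actions", []):
        if action.get("type") != "generate_query":
            continue
        query_result = action.get("result") or {}
        if not query_result.get("hasResults"):
            continue
        rows = list(csv.reader(io.StringIO(query_result["csvResult"])))
        tables.append({
            "name": query_result.get("queryName"),
            "header": rows[0],
            "rows": rows[1:],
        })
    return tables
```

Remember that csvResultWasTruncated may be true for large result sets, so treat the parsed rows as a preview rather than the complete data.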

Cancel a job

If a user cancels a request in your chat interface, use the Cancel AI job endpoint to stop the job:
curl -X POST "https://<YOUR_OMNI_INSTANCE>/api/v1/ai/jobs/<JOB_ID>/cancel" \
  -H "Authorization: Bearer <OMNI_API_KEY>"

Using webhooks instead of polling

Instead of polling for status, you can pass a webhookUrl when creating the job. Omni will send a POST request to your URL when the job reaches a terminal state:
curl -X POST "https://<YOUR_OMNI_INSTANCE>/api/v1/ai/jobs?userId=<EMBED_USER_ID>" \
  -H "Authorization: Bearer <OMNI_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "modelId": "<MODEL_ID>",
    "prompt": "What were total sales last quarter?",
    "webhookUrl": "https://your-app.com/webhooks/omni",
    "webhookSigningSecret": "<YOUR_SIGNING_SECRET>"
  }'
The webhook payload includes the job status and a result summary:
{
  "event_id": "evt_550e8400_COMPLETE_1706123456",
  "jobId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "status": "COMPLETE",
  "result_summary": "Total sales last quarter were $1.2M",
  "completed_at": "2025-01-24T10:30:00Z"
}
To retrieve the full result with query data, call the Stream AI job results endpoint after receiving the webhook.
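A minimal webhook handler might look like the sketch below. It deduplicates on event_id (on the assumption that deliveries can be retried, as is common for webhooks) and hands completed jobs to a `fetch_result` callable you supply. Verifying the payload against your webhookSigningSecret is out of scope here; check Omni’s webhook documentation for the exact signing scheme.

```python
_seen_events = set()

def handle_omni_webhook(payload, fetch_result):
    """Process an Omni AI job webhook delivery.

    `fetch_result` is a callable taking a job ID and returning the full
    result (e.g. a GET to /api/v1/ai/jobs/<jobId>/result). Duplicate
    deliveries are dropped by event_id; non-COMPLETE jobs return None
    so you can surface the failure to the user instead.
    """
    event_id = payload.get("event_id")
    if event_id in _seen_events:
        return None  # duplicate delivery, already handled
    _seen_events.add(event_id)

    if payload.get("status") != "COMPLETE":
        return None  # FAILED or CANCELLED: nothing to fetch
    return fetch_result(payload["jobId"])
```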

Example: end-to-end integration

Here’s a simplified example of how to wire this together in a backend service:
import requests
import time

OMNI_BASE_URL = "https://<YOUR_OMNI_INSTANCE>"
API_KEY = "<OMNI_API_KEY>"
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

def ask_omni(prompt, model_id, embed_user_id, conversation_id=None):
    """Submit a prompt to Omni AI on behalf of an embed user."""

    # Step 1: Create the job with the embed user's ID
    body = {"modelId": model_id, "prompt": prompt}
    if conversation_id:
        body["conversationId"] = conversation_id

    resp = requests.post(
        f"{OMNI_BASE_URL}/api/v1/ai/jobs",
        headers=HEADERS,
        params={"userId": embed_user_id},
        json=body,
    )
    resp.raise_for_status()
    job = resp.json()
    job_id = job["jobId"]

    # Step 2: Poll until the job completes
    while True:
        status_resp = requests.get(
            f"{OMNI_BASE_URL}/api/v1/ai/jobs/{job_id}",
            headers=HEADERS,
        )
        status_resp.raise_for_status()
        status = status_resp.json()

        if status["state"] in ("COMPLETE", "FAILED", "CANCELLED"):
            break
        time.sleep(3)

    if status["state"] != "COMPLETE":
        raise Exception(f"Job {status['state']}: {status.get('error', {}).get('message')}")

    # Step 3: Get the full result
    result_resp = requests.get(
        f"{OMNI_BASE_URL}/api/v1/ai/jobs/{job_id}/result",
        headers=HEADERS,
    )
    result_resp.raise_for_status()
    result = result_resp.json()

    return {
        "answer": result["message"],
        "conversation_id": job["conversationId"],
        "actions": result.get("actions", []),
    }

Option 2: MCP Server

If your agent framework supports the Model Context Protocol, you can connect it directly to Omni’s MCP endpoint. The MCP Server exposes tools that let your agent pick models, select topics, and query data through natural language.

Configure the MCP Server

Point your chat agent’s MCP client at your Omni instance. Set the X-MCP-User-ID header to the embed user’s ID so that all queries run with that user’s permissions:
MCP configuration
{
  "mcpServers": {
    "Omni": {
      "url": "https://<YOUR_OMNI_INSTANCE>/mcp/https",
      "headers": {
        "Authorization": "Bearer <OMNI_API_KEY>",
        "X-MCP-User-ID": "<EMBED_USER_ID>",
        "X-MCP-User-Required": "true",
        "X-MCP-Model-ID": "<OPTIONAL_MODEL_ID>",
        "X-MCP-Topic-Name": "<OPTIONAL_TOPIC_NAME>"
      }
    }
  }
}

Header reference

| Header | Required | Description |
| --- | --- | --- |
| Authorization | Yes | Your Omni API key, prefixed with Bearer |
| X-MCP-User-ID | Yes | The embed user’s Omni user ID. Queries run with this user’s permissions. |
| X-MCP-User-Required | Yes | Set to true to enforce that a user ID is provided for every query |
| X-MCP-Model-ID | No | Scope the MCP Server to a specific model |
| X-MCP-Topic-Name | No | Scope the MCP Server to a specific topic |

Dynamic user ID assignment

In a production embed setup, retrieve the embed user ID at the start of each session rather than hardcoding it:
  1. A user starts a chat session in your embedded application
  2. Your application identifies the user (from a session token or external ID)
  3. Your backend calls the List embed users endpoint to resolve the user’s Omni user ID
  4. Your backend passes the user ID in the X-MCP-User-ID header when connecting to the MCP Server
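Steps 3 and 4 amount to building the header set per session. A sketch, using the headers from the reference above:

```python
def build_mcp_headers(api_key, embed_user_id, model_id=None, topic_name=None):
    """Build per-session headers for the Omni MCP Server connection.

    Resolve embed_user_id from your own session or external ID first,
    then pass these headers when your MCP client connects.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "X-MCP-User-ID": embed_user_id,
        "X-MCP-User-Required": "true",
    }
    if model_id:
        headers["X-MCP-Model-ID"] = model_id
    if topic_name:
        headers["X-MCP-Topic-Name"] = topic_name
    return headers
```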

User permissions in embed

Both integration paths enforce the embed user’s Omni permissions on every query. This includes:
  • Access grants — Users only see data they’re authorized to access
  • Row-level security — Query results are filtered to the user’s allowed rows
  • Topic-level restrictions — Users can only query topics they have access to
This means you can expose a single chat interface to all embed users, and Omni handles permission scoping automatically based on the userId you provide.