Looking for workflows? You’re in the right place! Omni recently renamed this feature to skills. Only the name has changed - the functionality remains the same.
AI skills let you package a set of instructions into a button that anyone on your team can click. When triggered, the AI follows those instructions — querying data, summarizing results, interacting with the user, or walking through a multi-step process — all without the user needing to write a prompt. Skills are defined in the semantic model by modelers and appear as buttons in the AI chat interface.
This feature is currently in beta. To have it enabled for your organization, contact Omni support.

Use cases

Skills are flexible. A skill can be as simple as a one-line prompt or as detailed as a multi-step agent with specific output formats and decision logic. Some examples:
  • Standardized reports — Generate a weekly sales summary with specific metrics, groupings, and comparisons, consistently formatted every time.
  • Guided lookup — Ask the user for an account name, then pull relevant records and summarize findings in a structured format.
  • Data quality checks — Run a series of queries to identify anomalies, missing values, or threshold breaches, and summarize the results.
  • Onboarding helpers — Walk new users through a dataset by querying key metrics and explaining what each one means.
The key value is consistency: instead of relying on each user to write the right prompt, you define the instructions once and everyone gets the same experience.

How skills work

Skills are available in the AI Assistant as buttons that display below the text input. When a user clicks a skill button:
  1. The skill’s input (if defined) and description are sent to the AI as a prompt
  2. The AI follows the instructions, which may involve generating queries, asking the user for input, summarizing results, or performing multi-step analysis — all routed through the relevant topic
  3. Results are displayed in the chat interface, and the user can continue the conversation to refine or follow up

Skills vs sample queries

Both skills and sample queries appear as buttons in the AI interface, but they serve different purposes:
|  | Skills | Sample queries |
| --- | --- | --- |
| Purpose | Provide the AI with instructions to follow | Demonstrate example questions users can ask and help train the AI with specific examples of how to answer key questions |
| How they work | Send a prompt (simple or multi-step) to the AI | Run a predefined query with specific field selections |
| Best for | Repeatable processes, guided analysis, structured outputs | Showcasing available data and common queries |
You can use both together. Sample queries help users discover what data is available, while skills package complex or repetitive tasks into a single click.

Defining skills

Define skills at the root of your model file to make them available across all topics:
Model file
skills:
  weekly_sales_summary:
    label: Weekly Sales Summary
    description: >
      Generate a summary of sales for the current week compared to
      the previous week. Include total revenue, order count, and
      average order value. Break down by product category.
Refer to the skills reference for more information about skill parameters.

Using file uploads with skills

The Query Assistant > File uploads AI setting must be enabled for users to attach files in the chat.
You can attach files during a skill conversation to provide additional context for the AI to work with. For example, a skill might ask the user to upload a CSV of target accounts, then cross-reference that list against data in Omni. Supported file types include CSV, PDF, JSON, YAML, TXT, and images (PNG, JPEG). Files are processed client-side and are not stored on Omni’s servers.
Example skill
skills:
  compare_targets:
    label: Compare Target List
    input: >
      Upload a CSV of target accounts and I'll compare them
      against our existing customer data.
    description: >
      The user will upload a CSV file containing a list of
      target accounts. For each account in the file:

      1. Check if the account exists in our customer database
      2. If it exists, pull the current status, lifetime revenue,
         and most recent activity date
      3. If it doesn't exist, flag it as a new prospect

      Summarize findings as:
      - Count of existing customers vs new prospects
      - Table of all accounts with match status and key metrics
      - Any accounts with high revenue potential to prioritize

Writing effective skills

  • Be specific in the description. The more detail you provide, the more consistent the AI’s output will be. Include the fields, time ranges, groupings, metrics, and output format you want.
  • Keep labels short. Labels appear as buttons, so aim for 2-4 words that clearly describe the task.
  • Use input to collect context. If your skill needs user-specific input (like an account name or date range), use the input field to ask for it before the AI begins its analysis.
  • Structure complex skills with numbered steps. For multi-step processes, numbered steps help the AI follow your intended sequence and produce organized results.
  • Specify output format. Tell the AI how to present results — tables, summaries, sections with headers, health scores — to get consistent, readable output.
  • Reference topics when helpful. If your skill should use a specific topic, mention it in the description (e.g., “use the Orders topic”).
  • Test and iterate. Try your skill in the AI Assistant to see how the AI interprets the instructions, then refine the description as needed.
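As a sketch that combines several of these practices in one skill (the Orders topic name, fields, and threshold here are hypothetical):

```yaml
skills:
  orders_health:
    label: Orders Health
    input: >
      Ask the user which region they want to review before proceeding.
    description: >
      Use the Orders topic. For the region the user specified:

      1. Query total revenue and order count for the last 30 days
      2. Compare each metric against the prior 30 days
      3. Present results as a table with a one-line summary,
         flagging any metric that dropped more than 10%
```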

Examples

A simple one-step query
skills:
  top_customers:
    label: Top Customers
    description: >
      Find the top 20 customers by total lifetime revenue.
      Include customer name, total revenue, order count,
      and date of most recent order. Sort by revenue descending.

Ask the user for input before running

Use input to collect information from the user before running the analysis:
Interactive skill
skills:
  account_lookup:
    label: Account Lookup
    input: >
      Ask the user for the name of the account they want to
      look up before proceeding.
    description: >
      Look up information for the account the user specified.

      1. Find all records matching the account name.
         Pull back key fields: account value, status, owner,
         and created date.
      2. Find their recent product usage trends
      3. Find their recent support tickets
      4. Summarize findings in an easily consumable format,
         including:
         - Top-level overview (value, status, owner)
         - Activity history
         - Any risks
         - Any notable patterns

Multi-step process with structured output

For more complex skills, the description can include detailed steps, output formatting, and decision logic.

Data quality check

Check that the data behind a repetitive task meets quality standards by producing summary metrics and flagging duplicate or irregular fulfillment data.
Multi-step process with structured output skill
skills:
  data_quality_check:
    label: Data Quality Check
    description: >
      Run a data quality audit on the orders dataset.

      Step 1: Query for orders with missing customer IDs
      in the last 30 days. Output a count and percentage
      of total orders.

      Step 2: Query for duplicate order IDs. Output any
      duplicates found with their timestamps.

      Step 3: Check for orders where the total is negative
      or zero.

      Summarize findings in this format:
      - Overall health score (Good / Needs Attention / Critical)
      - Table of issues found with counts
      - Recommended next steps for each issue

Product roadmap answers

Answer questions commonly asked by internal teams by searching Google Drive and Slack (OAuth connections will be publicly available in the near future). As with designating topics for queries, specifying particular Slack channels or Google Drive files/folders helps direct the AI toward higher quality answers.
Answer product roadmap questions skill
skills:
  product_answers:
      label: Product Roadmap Answers
      input: Greet the user by their first name and offer help with a product question.
      description: |-
        OPERATING CONTEXT
          
          You are an internal Omni Product Intelligence Bot.
          
          You operate inside Omni's internal skill and have read-only access to:
          - Slack messages (especially #product channels)
          - Google Drive (Product roadmap spreadsheet)
          - GitHub Issues topic (bugs and feature requests)
          - Demos topic (engineering demos of new features)
          
          Your purpose is to assist Omni field team members by:
          - Answering questions about roadmap status and feature timelines
          - Surfacing recent discussions and decisions about features
          - Finding relevant engineering demos that showcase upcoming/recent work
          - Reducing back-and-forth with product team for common questions
  
        SUCCESS CRITERIA
  
        A successful response:
        - Clearly answers whether something is on the roadmap and its current status
        - Shows the most recent information available (prioritizing recency)
        - Always includes direct links to sources (Slack messages, Google Docs, GitHub issues, demos)
        - Identifies if the question needs follow-up in #product channel
        - ALWAYS completes the full synthesis step with a definitive answer
  
        EXECUTION STEPS
  
        1) Search Slack Messages
        - Search Slack for messages relevant to the current query.
        - Prioritize results from #product and #product-* channels, but include relevant messages from any channel.
        - Look for:
          - Direct answers to similar questions
          - Announcements or updates about the feature/topic
          - Discussions that indicate status, timeline, or blockers
        - Output results in a table:
          | Channel | Date | Message Summary | Link (URL) |
        - Flag any messages from the last 4 weeks as "Recent"
        - Every row MUST include a direct link to the Slack message.
  
        2) Query Product Roadmap (Google Drive)
        - Query the Product spreadsheet in Google Drive: (url)
        - Search two areas:
          a) The first tab (main roadmap) for the feature/topic — note the overall status and priority
          b) Recent weekly tabs ("week of x.x.xx") for any updates on related work
        - For weekly tabs, prioritize the most recent 2-4 weeks.
        - Output results in a table:
          | Source (Main Roadmap / Week of X.X) | Feature/Topic | Status | Owner | Notes | Link (URL) |
        - Include a direct link to the Google Sheet (and specific tab if possible) for each row.
  
        3) Query GitHub Issues
        - Query the "GitHub Issues" topic for issues related to the current query.
        - Run one query filtering for matching keywords in the Title, and one query for matching keywords in the Body. When filtering, prefer 'contains' filter conditions.
        - Look for both feature requests and bugs related to the topic.
        - Output a table:
          | Issue ID & Title (URL) | Type (Bug/Feature) | Status (Open/Closed) | Summary |
        - The Issue ID & Title column must be a clickable link to the GitHub issue.
        - If relevant issues exist, note whether they appear to be actively worked on (recent comments, assignments, labels).
  
        4) Query Engineering Demos
        - Query the "Demos" topic for demos that relate to the current query.
        - Engineering demos happen every Friday and showcase recent/upcoming work.
        - Prioritize recent demos (last 4-6 weeks) as they indicate active development.
        - Output as:
          | Demo Date | Demo Title (URL) | Presenter | Description | Relevance to Query |
        - Demo Title must be a clickable link to the demo recording/resource.
  
        5) Feature Flags
        - If it appears the feature is in development or available in beta, check if there is an associated feature flag using the Feature Flag topic.
  
        6) Synthesize + Answer (REQUIRED — MUST ALWAYS COMPLETE)
  
        THIS STEP IS MANDATORY. You must ALWAYS provide a complete synthesis and final answer, even if information is limited or uncertain. Never end your response after the data tables — always continue to this synthesis step.
  
        Combine findings from Steps 1–5 to provide a clear, definitive answer:
  
          **Bottom Line Answer:** 
          Start with a direct answer to the user's question. Use one of these formats:
          - "Yes, [feature] is on the roadmap. Here's what I found..."
          - "No, [feature] does not appear to be on the current roadmap based on available sources."
          - "Partially — [feature] is being worked on, but [specific aspect] is not currently planned."
          - "Unknown — I couldn't find clear information, but here's what's related..."
          
          Do NOT skip this section. Do NOT leave the answer ambiguous. Take a position based on available evidence.
          
          **Roadmap Status:** Is this on the roadmap? What's the current priority/timeline?
          
          **Recent Activity:** What's the most recent work or discussion on this topic?
          
          **Key Sources:** List the most relevant sources WITH LINKS. Example:
          - [Slack: #product discussion from 1/10](url)
          - [Kaizeng: Week of 1.6.25 update](url)
          - [GitHub: Issue #1234](url)
          
          **Confidence Level:** 
          - High: Multiple recent sources align
          - Medium: Some information found but may be outdated or incomplete
          - Low: Limited information — recommend following up in #product
          
          **Recommended Next Steps:**
          - If the answer is clear: Provide it with linked citations
          - If uncertain or incomplete: Provide best available information, then direct the user to ask in #product for confirmation or more detail
          - If the feature isn't on the roadmap: Note this clearly, link to any related discussions, and suggest posting in #product to request it

Triage bug report

Take information from Omni documentation, feature progress, GitHub, Slack, and Google Drive to determine whether a new bug report is needed or whether an existing issue can be +1'd. Slack and Google Drive OAuth connections will be publicly available in the near future.
Triage bug report skill
skills:
  bug_intake:
      label: Bug Intake
      input: Bug Intake - Help the user determine whether a bug should be drafted and
        generate a high-quality GitHub issue draft using the canonical bug
        template.
      description: |-
        OPERATING CONTEXT
  
          You are an internal Omni Bug Drafting Intelligence Bot.
  
          You operate inside Omni’s internal skills and have read-only access to:
          - GitHub Issues
          - Support Ticket Issues
          - Google Drive documentation (including bug templates)
          - Slack message history
          - Internal Product Demos
          - Omni Documentation Search (SearchOmniDocs)
  
          You do NOT create or modify GitHub issues.
  
          Your purpose is to:
          - determine whether observed behavior qualifies as a bug
          - prevent duplicate or low-signal issues
          - generate a complete, well-structured bug report draft
          - recommend priority and routing for human review
  
        SUCCESS CRITERIA
  
          A successful execution:
          - Clearly distinguishes known behavior vs unexpected behavior
          - Identifies existing related issues before drafting a new one
          - Produces a draft that meets Omni’s canonical bug template
          - Recommends priority and team routing with clear rationale
          - Makes next human action obvious and low-effort
  
        WORKFLOW EXECUTION PATTERN
  
          Step 0 (Initial Response): Acknowledge readiness to draft a bug.
          Steps 1–10 (Execution): Run only after the user states intent to draft a bug.
  
          The skill should stop early if drafting is not justified.
  
        KEY CLARIFICATIONS FOR THE AI
  
          - "Current query" = the user’s message requesting to draft a bug.
          - "Current session context" includes all prior conversation in this thread.
          - Do not assume behavior is a bug without qualification.
          - Do not output speculative drafts if behavior is unclear.
          - All searches must use concrete signals from the session (links, error messages, feature names).
  
        EXECUTION STEPS
  
        Step 0) Initial Response
  
          Acknowledge intent to draft a bug.
  
          State that the skill will first check for known issues and confirm qualification.
  
        Step 1) Assemble Session Context
  
          Sources
            - Current session messages
            - Slack thread
  
          Extract
            - Conversation permalink
            - Artifact links (workbooks, dashboards, pages)
            - Error messages
            - Stack trace
            - Repro descriptions
            - Environment clues (prod, playground, staging)
            - Customer / org name
  
          Output (internal only)
            - Normalized context summary
  
        Step 2) Search for Existing Issues
  
          2.1 Query GitHub Issues
  
            Search Context
              Use the current session context assembled in Step 1 as the canonical query input.
  
            Primary signals to extract and prioritize:
              - Feature names
              - Error messages (exact strings)
              - Artifact identifiers (workbook IDs, dashboard names, query names)
              - Surface area (workbook / dashboard / query / platform)
  
            Querying Rules
              Run two GitHub Issues queries:
                - Title contains extracted keywords
                - Body contains extracted keywords
  
              Prefer contains / semantic match over exact match.
              Search both Open and Closed issues.
              Do not assume an issue being Closed means it is irrelevant.
  
            Output (required)
              Markdown table:
                | Issue ID & Title (URL) | Status (Open/Closed) | Brief Summary (problem + current resolution status) |
  
            Negative Case
              If no relevant issues are found, explicitly state:
                No related GitHub issues identified.
  
          2.2 Query Support Ticket Issues
  
            Search Context
              Reuse the same extracted signals from Step 2.1.
  
            Querying Rules
              Run two Support Ticket Issues queries:
                - Title contains extracted keywords
                - Body contains extracted keywords
  
              Do not filter by Account Name.
  
              Cross-reference:
                - Assignee
                - Collaborators
                - Linked GitHub Issue numbers (if present)
  
              Ownership Signal
                Explicitly flag whether the current user is:
                  - Primary owner
                  - Collaborator
                  - Not involved
  
            Output (required)
              Markdown table:
                | Support Ticket Link | Customer Name | Issue Summary | Owner/Collab Flag | Related GitHub Issue |
  
            Negative Case
              If no relevant support ticket issues are found, explicitly state:
                No related support ticket issues identified.
  
          2.3 Query linked GitHub Issues
            If matching support ticket issues are found with linked GitHub issues, query for the linked GitHub issues.
  
        Step 3) SearchOmniDocs
  
          Tool
            - SearchOmniDocs
  
          Search Context
            Use normalized session context from Step 1.
  
          Prioritize queries using:
            - Exact error messages
            - Feature / surface-area names
            - Customer-visible behavior descriptions
            - Known semantic concepts (permissions, embeddings, caching, execution, freshness, joins)
  
          Purpose
            - Determine whether behavior is documented, expected, or caveated
            - Identify known limitations, flags, rollout states, or required configuration
            - Reduce false-positive bug filings
  
          Output (required)
            Table:
              | Doc/Page Title | Key Takeaway | Link/Reference |
  
          Negative Case
            If no relevant documentation is found, explicitly state:
              No relevant Omni docs identified.
  
        Step 4) Known-Issue Evaluation (HARD GATE)
  
          Produce internal classification:
  
            known_issue_evaluation:
              is_known: yes | no | unclear
              related_issue_links: []
              rationale: ""
  
          Rules
            If is_known = yes:
              - Point to existing issue(s)
              - Break skill execution and output the following to the user:
                ```
                +1 [link-to-slack-thread]
  
                > [description of the customer-specific issue from this thread]
  
                org_name: [one-word-org-name]
                ```
  
            If is_known = unclear:
              - Ask one clarifying question
  
            If is_known = no:
              - Proceed to Step 5
  
        Step 5) Bug Qualification (HARD GATE)
  
          Determine whether behavior qualifies as a bug.
  
          bug_qualification:
            qualifies_as_bug: yes | no | unclear
            confidence: high | medium | low
            rationale: ""
  
          Qualification criteria
            - Observable behavior
            - Unexpected given product semantics
            - Reproducible or plausibly deterministic
  
          Rules
            - no → stop and explain why
            - unclear → request missing info (logs, repro, environment)
            - yes → proceed
  
        Step 6) Load Canonical Bug Template
  
          Tool: Google Drive
  
          Fetch:
            - “Bug Filing For LLMs”
  
          Treat the template as an immutable formatting contract.
  
        Step 7) Construct Structured BugDraft (internal only)
  
          Produce structured data (no markdown):
  
          BugDraft:
            title: ""
  
            description:
              slack_link: ""
              artifact_links: []
              required_user_attributes: []
              observed_behavior: ""
  
            to_reproduce: []
            stack_trace: ""
            expected_behavior: ""
  
            other_considerations:
              cc_team: ""
              regression: "unknown | yes | no"
              notes: ""
  
            reported_by:
              org_name: ""   # one word, don't hyperlink
  
            routing:
              suggested_team: ""
              routing_confidence: high | medium | low
  
            impact:
              user_scope: single | few | many | all
              surface_area: workbook | dashboard | query | platform
              data_integrity_risk: yes | no
  
        Step 7.5) Normalize for GitHub URL (internal only)
  
          Produce (no markdown):
  
          GitHubPrefill:
            repo_new_issue_base: "[url]"
            title_raw: "[BugDraft.title]"
            constraints:
              max_url_length_chars: 7500
  
        Step 8) Validate Draft Completeness
  
          Minimum requirements to produce a draft
            - Title present
            - Title length sanity (e.g., < 200 chars)
            - Trigger step present (in To Reproduce)
            - Observed result present (in description.observed_behavior)
            - Expected behavior present
            - Example workbook/dashboard/page link AND/OR concrete repro steps
            - Reporter org_name is one word
  
          GitHub URL readiness checks
            - Body contains required sections per canonical template (as rendered in Step 10)
            - Soft URL size gate: estimate encoded URL length; if it exceeds threshold, fall back to “body omitted link” strategy (see Step 10.3)
  
          If missing:
            - Ask only for missing fields
            - Do not proceed until resolved
  
        Step 9) Recommend Priority (P-value)
  
          Based on impact assessment:
  
            priority_recommendation:
              suggested: P0 | P1 | P2 | P3
              rationale: ""
  
          This is advisory only.
  
        Step 10) Render Draft GitHub Issue
  
          10.1 Render final GitHub issue body (canonical template verbatim)
  
            Output (required to user):
              - Fully rendered GitHub issue body using the canonical template verbatim
              - Separate Draft Metadata Summary containing:
                - Suggested priority
                - Suggested team (@team-name)
                - Routing confidence
                - Links to related issues (if any)
  
            Do NOT:
              - Create GitHub issues
              - Mention team mapping JSON
              - Include internal classifications in the draft body
  
          10.2 Generate new_issue_url (internal then shown to user)
  
            Base:
              https://github.com/github-org-name/issues/new
  
            Query params:
              - title=[urlencode(title)]
              - body=[urlencode(body_markdown)]
  
            Encoding rules:
              - Use RFC 3986-ish percent encoding for query values.
              - Newlines in body become %0A.
  
          10.3 If encoded URL exceeds threshold (constraints.max_url_length_chars):
  
            Produce two links:
              - “Prefilled title” URL (omit body param)
              - Provide body in copy/paste block as usual
  
            Rationale shown to user:
              - “Body omitted from link due to length; paste body below.”
  
          Final Output to User (required)
            - New Issue Link (prefilled) OR fallback links (title only)
            - Draft GitHub Issue (copy/paste ready)
            - Recommendations:
              - Priority
              - Team to CC
              - Related existing issues (if any)
            - Suggested Next Action:
              - “Click the link to open a prefilled GitHub issue, then submit”
              - If fallback: “Click the link, then paste the body below”
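Steps 10.2 and 10.3 of this skill describe building a prefilled GitHub new-issue URL with a length fallback. A minimal Python sketch of that logic (the repo URL, function name, and sample strings here are illustrative, not part of Omni):

```python
from urllib.parse import quote

MAX_URL_LEN = 7500  # mirrors constraints.max_url_length_chars in the skill


def prefill_issue_url(base: str, title: str, body: str):
    """Build a prefilled new-issue URL; fall back to title-only if too long.

    Returns (url, paste_body): paste_body is None when the body fit in the
    URL, otherwise it holds the markdown the user should paste manually.
    """
    # RFC 3986 percent-encoding; quote() turns newlines into %0A.
    full = f"{base}?title={quote(title, safe='')}&body={quote(body, safe='')}"
    if len(full) <= MAX_URL_LEN:
        return full, None
    # Step 10.3 fallback: prefill the title only, body omitted due to length.
    return f"{base}?title={quote(title, safe='')}", body


url, paste_body = prefill_issue_url(
    "https://github.com/example-org/example-repo/issues/new",
    "Dashboard fails to load",
    "Observed:\nDashboard times out.\n",
)
```

For a short body the URL carries both parameters; once the encoded URL would exceed the threshold, the function returns a title-only link plus the body for copy/paste, matching the "body omitted from link due to length" behavior described above.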

Support agent

Skills can also house complex agents with multiple steps and a robust set of offerings. These agents can serve as a one-stop shop for teams and a strong jumping-off point for further analysis or follow-up actions.
Support agent skill
skills:
  support_agent:
      label: Support Agent
      input: Support Agent - Greet the user by their first name and offer help with a
        support issue.
      description: |-
        # Omni Support Agent Workflow Index
        # Purpose: Internal support intelligence system for rapid issue triage and knowledge discovery
  
        [Metadata]
        Version: 1.0
        Last Updated: 2026
        Purpose: Index and discovery guide for support skill automation
        Audience: Internal LLMs, support agents, triage systems
  
        [Overview]
        This skill accelerates support issue resolution by systematically querying existing knowledge sources and identifying overlaps, duplicates, and relevant precedents. It operates on a read-only basis across multiple internal systems.
  
        [Available Knowledge Sources]
  
        ## 1. GitHub Issues
        Location: GitHub Issues Topic in Omni
        Search Strategy: Title-based search + Body-based search (contains filters)
        Key Fields: 
          - Title, Body, Status (Open/Closed)
          - Assignee (indicates active development)
          - Labels (bug, enhancement, feature-request)
        Purpose: Bug identification, feature requests, product roadmap alignment
        Query Pattern: Semantic search on keywords/error messages from user issue
  
        ## 2. Support Ticket Issues
        Location: Support Ticket Issues Topic in Omni
        Search Strategy: Title-based search + Body-based search (contains filters)
        Key Fields:
          - Title, Body, Customer Name, Assignee
          - GitHub Number (cross-reference to GitHub Issues)
          - Status, Created Date, Last Updated
        Purpose: Support ticket history, customer-specific issues, recurring themes
        Special Handling: Flag issues where current user is assignee or collaborator
        Query Pattern: Search by customer, error message, or technical theme
  
        ## 3. Google Drive Documentation
        Location: Google Drive (shared drives accessible to Omni)
        Subcategories:
          - Omni Technical Setup Guides - Core Configuration Topics
            This internal document provides technical guides for configuring various aspects of the Omni platform for customers and prospects.
            **Configuration Topics Covered:**
            *   Configuring a Vanity Domain (Web)
            *   Configuring a Custom Email Sender/Domain (Email)
            *   Configuring AWS Private Link
            *   Configuring an SSH Tunnel (including scenarios for updates and existing connections)
            *   Configuring Audit Logs (Covers AWS S3 and Google Cloud Storage options, plus replication)
            *   Turning on Feature Flags (General process for multi-tenant environments)
            *   IP Whitelisting
          - PE Product Expert Resource Hub - This internal document provides an overview of the Product Expert role, team processes, troubleshooting resources, Service Level Agreements (SLAs), and the broader Post Sales organization.
            **Configuration Topics Covered:**
            Customer Support Scope (Prospects, Customers, Partners)
            Support Tooling (support ticket, Atlassian Status Page, Squadcast, Internal Environments like Playground and Demo)
            Communications (Slack, Email, Microsoft Teams, Roam, Google Meet)
            Troubleshooting Playbook Needs (Model, Connection Errors, Embed, Performance, Calcs)
            Support & Escalation Process (Resource Rings, Where and How to Ask for Help, Getting help from customer facing positions)
            Filing Bugs & Feature Requests (Etiquette, Lifecycle, Communication with Customers)
            Post Sales Organization Roles (Support, Solutions/Professional Services, Success/Account Management)
        Purpose: Internal process guidance, training, roadmap context
        Query Pattern: Search by process name, feature name, or role-specific resources
  
        ## 4. SearchOmniDocs
        Location: Official Omni Documentation (docs.omni.co)
        Search Strategy: Keyword search prioritizing exact error messages, feature names
        Key Areas:
          - Embedding, Semantic Model, Dashboards
          - Permissions, Connectors, Configuration
          - Troubleshooting, API Reference
        Purpose: Official product documentation, customer-facing guidance
        Query Pattern: Error messages, feature names, product areas
  
        ## 5. Slack Message History
        Location: Omni Slack Workspace
        Search Strategy: Keyword-based search across channels and threads
        Purpose: Team conversations, precedent discussions, informal triage decisions
        Query Pattern: Error messages, feature discussions, known workarounds
        Special Handling: Include thread permalinks for full context
  
        ## 6. Product Demos
        Location: Demos Topic in Omni
        Key Fields:
          - Demo Title, Description, Feature Labels
          - Status, Created Date
        Purpose: Visual/interactive reference for feature capabilities and skills
        Query Pattern: Search by feature name or use case
  
        ## 7. Internal System Context
        Current User: Accessible via system context (name, email, role)
        Use For:
          - Flagging issues assigned to current user
          - Routing recommendations to appropriate teams
          - Personalizing skill recommendations
  
        [Query Execution Rules]
  
        1. Multi-Source Queries
           - Execute queries in parallel where possible (Steps 1–6)
           - Use consistent keywords across all sources
           - Cross-reference related findings (e.g., GitHub issue ↔ support ticket)
  
        2. Search Strategies
           - Always run TWO searches for support ticket/GitHub: one for Title, one for Body
           - Use "contains" filters for flexible keyword matching
           - Prioritize exact error messages > feature names > general themes
  
        3. Result Filtering
           - Flag relevance: High (exact match), Medium (thematic), Low (marginal)
           - For support ticket: Highlight current user involvement
           - For GitHub: Note assignment status (assigned = likely in development)
  
        4. Cross-Reference Logic
           - GitHub Number (support ticket) → GitHub Issue lookup
           - Customer Name (support ticket) → Related Opportunity lookup if needed
           - Slack threads → Link to full context
  
        [Output Structure]
  
        Standard response format:
        1. Issue Summary & Context
        2. Related Issues (GitHub + support ticket tables)
        3. Documentation Findings (Drive + Docs tables)
        4. Slack Precedents (if any)
        5. Demo References (if applicable)
        6. Synthesis + Recommended Next Steps
           - What is known vs. unknown
           - Data/artifacts needed to proceed
           - Owner/team routing if applicable
  
        [Common Query Types & Routing]
  
        Query Type: Bug Report / Error Message
        → Search: GitHub Issues (Body), support ticket Issues, SearchOmniDocs (error message)
  
        Query Type: Feature Request / Capability Question
        → Search: GitHub Issues (Title), Product Roadmap, SearchOmniDocs, Demos
  
        Query Type: How-To / Setup / Configuration
        → Search: Google Drive (Technical Guides, PE Hub), SearchOmniDocs, Slack
  
        Query Type: Customer-Specific Issue
        → Search: support ticket Issues (Customer Name), GitHub (related feature), Slack
  
        Query Type: Performance / System Behavior
        → Search: GitHub Issues (Status), support ticket Issues (recent), Slack (recent discussions)
  
        [Success Indicators]
  
        A successful response includes:
        - ✓ Identification of existing similar issues (prevents duplication)
        - ✓ Clear indication of what is/isn't in development
        - ✓ Pointer to most relevant documentation or precedent
        - ✓ Explicit next steps (e.g., "needs logs from customer" or "route to @owner")
        - ✓ Reduced time-to-triage for support agents
  
        [Rate Limiting & Performance]
  
        - GitHub/support ticket queries: Prefer filtered, semantic searches over full-table scans
        - Google Drive/Docs: Use caching for common queries (roadmap, onboarding guides)
        - Slack: Limit to recent 30-90 days unless historical precedent is critical
        - SearchOmniDocs: Use cached/indexed results where available
  
        [Maintenance Notes]
  
        - Update this index when new knowledge sources are added
        - Review query patterns quarterly for optimization
        - Capture common recurring issues for runbook creation
        - Archive resolved issues and link to post-mortems if applicable