AI data privacy
Several features in Omni are powered by AI, allowing you to use natural language to ask questions about your data, get help creating queries, build calculations, and more.
Omni uses two LLM providers to power its AI features:
Any data, including metadata, provided to the LLMs used by Omni will not be used for training.
OpenAI
Omni uses OpenAI to categorize user prompts, generate queries, and build workbook calculations.
User prompt categorization
When a user enters a prompt, Omni will first use OpenAI to categorize the request to determine how to respond. If the request is for data summarization, Omni will proceed using AWS Bedrock. Otherwise, Omni will use OpenAI and proceed according to the following sections.
Query generation
When generating a query, the output of the AI's processing is a new Omni query, which is a collection of metadata - including field names, filters, sorts, pivots, topic name, and limit - that is translated into SQL. This new query is then run within the current workbook to provide an answer to the user's prompt.
Because the AI uses an abstract query format and not SQL to create queries, the generated query respects the permissions set in Omni. This means it's not possible to access data outside of the topics, models, and connections a user has been restricted to in Omni.
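To illustrate why this design enforces permissions, here is a minimal sketch (not Omni's implementation; the function and field names are hypothetical): because the model emits an abstract query rather than raw SQL, every referenced field can be validated against the user's topic before any SQL is generated.

```python
# Illustrative sketch only (not Omni's code): the LLM outputs an abstract
# query (field names, filters, limit) instead of SQL, so the application
# can reject out-of-topic fields before any SQL reaches the database.

ALLOWED_TOPIC_FIELDS = {"users.id", "users.signup_date", "users.count"}

def build_sql(abstract_query: dict) -> str:
    requested = set(abstract_query["fields"])
    forbidden = requested - ALLOWED_TOPIC_FIELDS
    if forbidden:
        # Fields outside the topic are rejected outright; no SQL is built.
        raise PermissionError(f"Fields outside topic: {sorted(forbidden)}")
    cols = ", ".join(abstract_query["fields"])
    return f"SELECT {cols} FROM users LIMIT {abstract_query.get('limit', 100)}"

# A query that stays inside the topic translates normally:
print(build_sql({"fields": ["users.signup_date", "users.count"], "limit": 10}))

# A query referencing a restricted field fails before reaching the database:
try:
    build_sql({"fields": ["users.ssn"]})
except PermissionError as e:
    print(e)
```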
Additionally, this approach ensures that no private, relational, or result set data is shared. Only metadata about the current query, user prompt, and selected topic fields is sent to OpenAI, thus maintaining the privacy and security of your data.
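For illustration, the metadata payload sent to OpenAI might look something like the following. The exact format is internal to Omni; the structure and field names here are hypothetical, covering only the categories of metadata described above.

```json
{
  "user_prompt": "How many users signed up last month?",
  "current_query": {
    "fields": ["users.count"],
    "filters": {},
    "sorts": [],
    "limit": 100
  },
  "topic": {
    "name": "users",
    "ai_context": "Signups are counted by users.created_at.",
    "fields": [
      {"name": "users.signup_date", "type": "date", "kind": "dimension"},
      {"name": "users.count", "type": "number", "kind": "count"}
    ]
  }
}
```

Note that no rows from the underlying tables appear anywhere in the payload, only names, types, and descriptions.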
Note: While Omni has not opted to share data with OpenAI, Omni also has not specifically opted into OpenAI's zero data retention policy.
What's shared with OpenAI?
While relational data is never shared, Omni does share certain metadata with OpenAI to generate accurate responses. This metadata includes:
| What's shared? | Description |
|---|---|
| Current query metadata | Information about the currently selected query in the workbook, such as field names, sorts, limits, pivots, and filters |
| User prompt | The natural language question or prompt provided by the user. For example, "How many users signed up last month?" or "Filter by the last two years." |
| Context (`ai_context`) | Free-form text that provides additional context to the model. This is set at the topic level using the `ai_context` parameter. |
| Fields within the selected topic | Metadata about the fields in the currently selected topic, such as field names, labels, descriptions, data types, and whether they're aggregates (`count`, `sum`, `average`) or dimensions. **Note**: Even if a user has access to multiple topics in a workbook, only metadata for the currently selected topic is accessed. |
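As a sketch of how the `ai_context` parameter might be set, consider the following hypothetical topic file. Only the `ai_context` parameter is taken from this document; the surrounding keys and YAML-style syntax are illustrative assumptions, so check Omni's modeling reference for the exact topic file format.

```yaml
# Hypothetical topic file sketch; only ai_context is documented above.
label: Users
ai_context: |
  "Signups" means rows in the users table, bucketed by users.created_at.
  The fiscal year starts in February.
```

Because this text is sent to OpenAI with every prompt against the topic, it should contain business definitions and terminology, never sensitive values.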
AWS Bedrock
Omni uses AWS Bedrock's hosted Claude model for data summarization. This applies to the following:
- Asking the AI in the Query Helper to summarize the results of a query
- The AI summary visualization type
Regional deployment
Models are region-specific to preserve data privacy. For example, if you're based in the EU, the models you use will be deployed in an EU-based region, so your data never leaves that region.
What's shared with Bedrock?
To generate accurate summaries, the following is shared with AWS Bedrock:
- All data previously sent to OpenAI for the current query
- A CSV of the current query's results
All data will remain within AWS.
Feature reference
The following table details each of Omni's AI features and the LLM(s) it uses. Refer to the AI settings reference for information about enabling and disabling these features.
| Name | OpenAI | AWS Bedrock |
|---|---|---|
| Workbook calculations | ✓ | ✗ |
| Workbook query helper | ✓ | ✗ |
| Blank workbook jumpstart | ✓ | ✗ |
| Visualization helper | ✓ | ✗ |
| IDE assistant | ✓ | ✗ |
| Data summarization | ✓\* | ✓ |
| AI summary visualization | ✓\* | ✓ |

\* For categorization only