Security Considerations

Enterprises adopting AI often have concerns about privacy and security. This document explains how we address both.

On one hand, since we use GeneXus Enterprise AI, every request to the LLM is made ensuring that data travels securely and adheres to business-imposed rules. You can learn more about this here.

On the other hand, the concept of a "semantic model" encapsulates essential information from the model's knowledge base (KB), such as table names, attribute names, and the filters that can be applied. WorkWithPlus uses this KB information to refine the prompts sent to GeneXus Enterprise AI and improve response accuracy. This information never includes database records or sensitive data.
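
To make this more concrete, here is a minimal sketch (in Python, with entirely hypothetical names; it is not the actual WorkWithPlus payload) of the kind of semantic-model metadata a prompt could embed. Note that only schema-level information appears, never database rows:

```python
# Illustrative sketch only (not the actual WorkWithPlus payload): hypothetical
# semantic-model metadata that a prompt could embed. Only schema-level
# information (names, screens, filter types) appears; never database rows.
semantic_model_excerpt = {
    "entity": "Appliance",  # hypothetical table name
    "attributes": ["ApplianceId", "ApplianceName", "ApplianceBrand"],
    "filters": {"ApplianceName": "text search", "ApplianceBrand": "dropdown"},
    "screens": ["Appliance list", "Appliance detail"],
}

# The user's query plus this metadata is all the LLM receives.
prompt_context = f"Available screens and filters: {semantic_model_excerpt}"
```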

Example

Let's consider a real example. When a user enters the query "list appliance with the name Computer" into the search bar, two requests are sent to the LLM:

  1. The first determines which screen to go to (a list, a detail, a specific screen, etc.).
  2. The second instantiates the filters (when the target screen is a list).

Let's analyse both requests:

Request 1:

AI-request1

Request 2:

AI-request2

As seen in both requests, information from the KB is sent to the LLM (e.g., available filter types), but no actual database data is transmitted.
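
As an illustration of this two-step flow, the sketch below shows how the requests could be orchestrated. The function and prompt wording are assumptions made for the example; the point is that both prompts are built from the user's query plus KB metadata, never from query results:

```python
from typing import Callable, Optional, Tuple

def handle_search(user_query: str, kb_metadata: dict,
                  llm_complete: Callable[[str], str]) -> Tuple[str, Optional[str]]:
    """Illustrative two-step flow: each prompt carries only the user's text and
    KB metadata (screen names, filter types), never database records."""
    # Request 1: decide which screen to go to (list, detail, a specific screen, etc.).
    screen = llm_complete(
        f"User query: {user_query}\n"
        f"Available screens: {kb_metadata['screens']}\n"
        "Answer with the screen that best matches the query."
    )

    # Request 2: only when the chosen screen is a list, instantiate its filters.
    filters = None
    if "list" in screen.lower():
        filters = llm_complete(
            f"User query: {user_query}\n"
            f"Available filters: {kb_metadata['filters']}\n"
            "Return the filter values to apply."
        )
    return screen, filters
```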

AI Search Examples

The only scenario in which data from the database is sent is when displaying suggestions to the user, as shown here:

AI-example1

This helps users gauge the power of the functionality. However, it can be turned off with a simple property setting:

AI-property

Turning it off provides the same search functionality without displaying these "real" examples.
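
As a rough illustration of the difference, the sketch below assumes a hypothetical show_real_examples flag and a hypothetical fetch_samples helper: when the property is on, a handful of real values may be read to render suggestion text; when it is off, generic placeholders are shown instead, and the search prompts remain metadata-only in both cases:

```python
from typing import Callable, List

def build_suggestions(attribute: str,
                      fetch_samples: Callable[[str], List[str]],
                      show_real_examples: bool) -> List[str]:
    """Illustrative sketch: real values leave the database only to render
    suggestion text for the user (and only while the hypothetical
    show_real_examples flag is on); the search prompts themselves stay
    metadata-only in both cases."""
    if show_real_examples:
        # e.g. fetch_samples("ApplianceName") might return ["Computer", "Printer"]
        return [f'list appliance with the name "{value}"'
                for value in fetch_samples(attribute)]
    # With the property turned off, generic placeholders give the same guidance.
    return [f"list appliance with the name <{attribute}>"]
```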