How does it work?

WorkWithPlus's AI module takes advantage of all the information available in the KB (the semantic model) to give the best possible context to GeneXus Enterprise AI, which forwards it to the desired LLM. WorkWithPlus then processes what the LLM returns in order to answer the user's requests and intentions.


A typical flow is:

  1. Users will specify the expected output or query, for example, "List accounts with a balance of approximately $500,000 USD", "Total amount of ongoing projects", etc.
     
  2. The system will process this request and send it, along with a set of metadata called the "semantic model," to GeneXus Enterprise AI (a new GeneXus product that allows interacting with LLMs (Large Language Models) such as ChatGPT, Gemini, etc.).
     
  3. The LLM's response will be processed by WorkWithPlus, which will take the user to the desired screen or display the appropriate answer (charts, tables, KPIs, etc.).
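The three steps above can be sketched as the following flow. Note that `send_to_enterprise_ai`, the payload layout, and the response fields are hypothetical names invented for illustration, not the actual WorkWithPlus or GeneXus Enterprise AI API:

```python
# Illustrative sketch of the request flow; all names are assumptions.

def send_to_enterprise_ai(payload: dict) -> dict:
    # Stand-in for the real call to GeneXus Enterprise AI;
    # here it returns a canned navigation answer.
    return {"type": "navigation", "screen": "AccountList"}

def handle_user_request(user_query: str, semantic_model: dict) -> dict:
    """Combine the user's query with the KB's semantic model,
    send it to the LLM, and route the answer back to the UI."""
    # Steps 1-2: the user's natural-language request plus the
    # semantic model metadata form the context sent to the LLM.
    payload = {"query": user_query, "semantic_model": semantic_model}
    llm_response = send_to_enterprise_ai(payload)
    # Step 3: interpret the answer - either navigate to a screen
    # or render the result (charts, tables, KPIs, etc.).
    if llm_response["type"] == "navigation":
        return {"action": "open_screen", "target": llm_response["screen"]}
    return {"action": "render", "output": llm_response["answer"]}
```

For example, a query like "List accounts with a balance of approximately $500,000 USD" would, under this sketch, resolve to opening the corresponding list screen.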

WorkWithPlus will automatically generate all of this behaviour.

The semantic model

When we talk about a "semantic model", we're referring to the information from the KB that we already have. This includes:

  • entity names
  • domains
  • attribute names
  • applied filters
  • descriptions of those filters
  • security definitions
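As an illustration, a semantic model covering an `Account` entity might be serialized along these lines. The structure and field names below are invented for the example; the actual metadata format is internal to WorkWithPlus:

```python
# Illustrative semantic model fragment; structure and names are assumptions.
semantic_model = {
    "entities": [
        {
            "name": "Account",
            "attributes": [
                {"name": "AccountName", "domain": "Name"},
                {"name": "AccountBalance", "domain": "Currency"},
            ],
            "filters": [
                {
                    "attribute": "AccountBalance",
                    "description": "Filter accounts by balance range",
                },
            ],
            # Security definitions restrict which roles may query the entity.
            "security": {"roles": ["Manager"]},
        }
    ]
}
```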

WorkWithPlus prompt

WorkWithPlus generates a prompt by combining this semantic model, the user request/intention, and additional prompt tuning to maximize accuracy. This prompt is then sent to GeneXus Enterprise AI, providing the necessary context for the LLM to ensure the response to the end-user is as accurate as possible.
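A minimal sketch of how such a prompt could be assembled from those three ingredients follows. The template wording and the `build_prompt` helper are hypothetical, not WorkWithPlus's actual prompt:

```python
import json

def build_prompt(semantic_model: dict, user_request: str) -> str:
    """Combine the semantic model, the user's request, and extra
    tuning instructions into a single prompt string for the LLM."""
    # Hypothetical tuning text; the real wording is internal to WorkWithPlus.
    tuning = (
        "Answer only using the entities, attributes, and filters "
        "defined in the semantic model."
    )
    return (
        f"Semantic model:\n{json.dumps(semantic_model, indent=2)}\n\n"
        f"Instructions: {tuning}\n\n"
        f"User request: {user_request}"
    )
```

The resulting string is what would be sent to GeneXus Enterprise AI, giving the LLM the KB context it needs to produce an accurate answer.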