Document Search V2


References in Code

SearchInVectorDBV2

Functionality

This module is designed to answer a user query based on documents ingested into the knowledge center. The module first creates a search string from the user question, embeds it, and then performs a semantic search in the VectorDB and/or a full-text search in the PostgreSQL DB, depending on the configured search type. Finally, the module generates an answer to the user input based on the retrieved internal knowledge, either referencing this knowledge with the appropriate documents or stating that no information was found in the internal system.
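A minimal sketch of this flow in TypeScript is shown below. It assumes a retrieve-then-answer pipeline as described above; all function names (createSearchString, embed, vectorSearch, fullTextSearch, generateAnswer) are illustrative placeholders and not the module's actual API.

// Illustrative sketch of the SearchInVectorDBV2 flow; not the module's real implementation.
interface SourceChunk {
  id: string;
  text: string;
}

// Placeholder declarations so the sketch type-checks; in the real module these would be
// wired to the embedding service, the VectorDB, PostgreSQL and the configured language model.
declare function createSearchString(question: string): Promise<string>;
declare function embed(text: string): Promise<number[]>;
declare function vectorSearch(queryVector: number[]): Promise<SourceChunk[]>;
declare function fullTextSearch(query: string): Promise<SourceChunk[]>;
declare function generateAnswer(question: string, sources: SourceChunk[]): Promise<string>;

async function answerFromKnowledgeBase(
  userQuestion: string,
  searchType: "VECTOR" | "COMBINED",
): Promise<string> {
  // 1. Derive a search string from the user question.
  const searchString = await createSearchString(userQuestion);

  // 2. Embed the search string for the semantic search.
  const queryVector = await embed(searchString);

  // 3. Retrieve sources: semantic search in the VectorDB; with COMBINED,
  //    additionally a full-text search in the PostgreSQL DB.
  let sources = await vectorSearch(queryVector);
  if (searchType === "COMBINED") {
    sources = sources.concat(await fullTextSearch(searchString));
  }

  // 4. Answer from the retrieved internal knowledge, or state that nothing was found.
  if (sources.length === 0) {
    return "No information was found in the internal system.";
  }
  return generateAnswer(userQuestion, sources);
}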

Input

A user question related to information within the document database.

Example input:

  • "What is the guideline saying about travels to Europe?"

Output

An answer based on internal knowledge, either referencing the appropriate documents or stating that no information was found in the internal system.
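For illustration, an answer to the example input above could look as follows; the guideline content and source numbers here are hypothetical and only demonstrate the referencing format used by systemPromptSearch:

The travel guideline contains the following rules for trips to Europe:
- Trips must be booked through the internal travel portal.[source1]
- Flights within Europe are booked in economy class.[source2]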

Configuration settings (technical)

 

Example Configuration

{
  "limit": 100,
  "maxTokens": 20000,
  "searchType": "COMBINED",
  "languageModel": "AZURE_GPT_4o_2024_0513",
  "chunkedSources": true,
  "evaluationConfig": {
    "title": "Hallucination-Level",
    "metricConfigs": [
      {
        "name": "hallucination",
        "enabled": false,
        "scoreToEmoji": { "LOW": "🟢", "HIGH": "🔴", "MEDIUM": "🟡" },
        "languageModel": "AZURE_GPT_4_0613"
      }
    ]
  },
  "systemPromptSearch": "You are helping the employees with their questions. You will find below a question, some information sources and the past conversation (they are delimited with XML tags).\n\nAnswer the employee's question using ONLY facts from the sources or past conversation. Information helping the employee's question can also be added.\n\nIf not specified, format the answer using an introduction followed by a list of bullet points. The facts you add should ALWAYS help answering the question.\n\nSTRICTLY reference each fact you use. A fact is preferably referenced by ONLY ONE source e.g [sourceX]. If you use facts from past conversation, use [conversation] as a reference.\n\nHere is an example on how to reference sources (referenced facts must STRICTLY match the source number):\n- Some information retrieved from source N°X.[sourceX]\n- Some information retrieved from source N°Y and some information retrieved from source N°Z.[sourceY][sourceZ]\n- Some information retrieved from past conversation.[conversation]",
  "scopeToChatOnUpload": false,
  "triggerPromptSearch": "new_question:\n```\nUSER_MESSAGE\n```\n\nsources:\n```\nSEARCH_CONTEXT\n```\n\npast_conversation:\n```\n<conversation>HISTORY</conversation>\n```\n\nnew_question:\n```\nUSER_MESSAGE\n```\n\nAnswer concisely in LANGUAGE and ALWAYS reference each of your facts:",
  "systemPromptChatUpload": "You are helping the employees with their questions. You will find below my question and an uploaded document (delimited with XML tags).\n\nYour task is to assist me, an employee, by providing me responses to my question, based on PURELY the information available in the uploded document as your only information source.\nSTRICTLY reference each fact you use. Here is an example on how to reference used facts:\n###\n- Information retrieved from source X.[sourceX]\n- Information retrieved from source Y.[sourceY]\n###\n\nYou are reluctant of making any claims unless they are stated by the uploaded document or past conversation. If there is a situation where you cannot provide an answer based solely on the available sources, please inform me accordingly.\n\nIf the question is talking about 'it', 'this document' or 'the document', the question is refering to all content in the uploaded document.\nIf the question is asking about the content of the document (e.g. 'What is it about?', 'What is the content of this document?'), provide a concise summary of one or two paragraphs.",
  "triggerPromptChatUpload": "question:\n```\nUSER_MESSAGE\n```\n\nuploaded document:\n```\nSEARCH_CONTEXT\n```\n\nquestion:\n```\nUSER_MESSAGE\n```\n\nAnswer in LANGUAGE.\nAnswer using ONLY information from the uploaded document and ALWAYS reference each of your facts:",
  "chunkRelevancySortConfig": {
    "enabled": false,
    "languageModel": "AZURE_GPT_35_TURBO_0613",
    "fallbackLanguageModel": "AZURE_GPT_35_TURBO",
    "relevancyLevelOrder": { "low": 2, "high": 0, "medium": 1 },
    "relevancyLevelsToConsider": [ "high", "medium", "low" ]
  },
  "systemPromptSearchString": "Below is a history of the previous conversation and a question asked by the user (delimitated by XML tags).\n\nFollow these steps:\n\nStep 1: Translate the user question to english.\n\nStep 2: Verify if the new question relates with the previous conversation. If the new question does not relate then say for Step 2 '<not_a_follow_up>', otherwise say '<follow_up>'.\n\nStep 3: Generate a search query in English optimised for a vector database search by combining the english translation with relevant information from the previous conversation. The query must be a sentence, instruction or question and in English.\n\nStep 4: Output ALWAYS a JSON object structured like: {\"translation\": user question translated to english, \"relation\": <not_a_follow_up> or <follow_up>, \"search_query\": updated search query}\n\nExample:\n{\n\"translation\": \"How many live there?\",\n\"relation\": \"<follow_up>\",\n\"search_query\": \"How many Tweeka live in Columbia (South America)?\"\n}",
  "triggerPromptSearchString": "Previous conversation:\n```\nLAST_3_MESSAGES\n```\n\nUser question:\n<new_question>USER_MESSAGE</new_question>\n\nOutput in JSON format:"
}

 

Note that GPT-4o is used above. It might not be available to all tenants and should therefore be replaced with an appropriate model where necessary (also when creating an AI Module Template).

General parameters

Parameter

Description


languageModel: string

Specifies the language model used

Default: AZURE_GPT_35_TURBO_0613

searchType: string

Defines the type of search to be performed: VECTOR (semantic search only) or COMBINED (semantic search combined with a full-text search)

Default: COMBINED

maxTokens: number

Maximum number of tokens used by sources and previous conversation in the LLM call

scopeIds: [string]

Optional scope identifiers to limit the search

scopeToChatOnUpload: boolean

Indicates if the scope should be switched to the current chat upon file upload

Default: false → Scope switching off

chatOnly: boolean

If true, restricts the search to files uploaded to the chat (irrespective of whether a file was actually uploaded)

chunkedSources: boolean

Indicates whether chunks of the same document are passed to the LLM as individual sources (true) or merged into one source (false).
We recommend setting this parameter to true for GPT-4 and false for GPT-3.5.

Default: false

historyIncluded: boolean

If false, the previous chat conversation is included in GPT calls only when the new user input is a follow-up question.

Default: true → History always included

keyWordExtractionTemperature: number

Temperature setting for keyword extraction

Default: 0

evaluationConfig: object

Enables evaluation of the generated assistant response for hallucination detection by defining the evaluationConfig object. Note: this feature requires at least GPT-4 and incurs additional token costs. To activate hallucination detection, configure the object as follows:

"evaluationConfig": {
  "title": "Hallucination-Level",
  "metricConfigs": [
    {
      "name": "hallucination",
      "enabled": true,
      "scoreToEmoji": { "LOW": "🟢", "HIGH": "🔴", "MEDIUM": "🟡" },
      "languageModel": "AZURE_GPT_4_0613"
    }
  ]
}

chunkRelevancySortConfig: object

Enables sorting of the retrieved chunks by their relevance to the user input by defining the chunkRelevancySortConfig object. Note: activating this feature incurs additional token costs. To activate chunk relevancy sorting, configure the object as follows:

"chunkRelevancySortConfig": {
  "enabled": true,
  "relevancyLevelsToConsider": [ "high", "medium", "low" ],
  "languageModel": "AZURE_GPT_35_TURBO_0613",
  "fallbackLanguageModel": "AZURE_GPT_35_TURBO"
}

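As a complement to the full example configuration above, a minimal configuration could set only a few of these parameters and rely on the defaults for the rest, as in the sketch below. Which keys may be omitted, the token limit and the scope identifier are assumptions for illustration only:

{
  "searchType": "VECTOR",
  "maxTokens": 8000,
  "scopeIds": ["<scope-id>"],
  "chatOnly": false
}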
Prompts

Only adjust prompts if you are fully familiar with the code logic. Small changes can break the module or reduce the output quality.

Parameter

Description


systemPromptSearch: string

triggerPromptSearch: string

System and trigger prompt used to generate the answer to the user question from the retrieved sources and past conversation

systemPromptChatUpload: string

triggerPromptChatUpload: string

System and trigger prompt used for chat upload scenarios

systemPromptSearchString: string

triggerPromptSearchString: string

System and trigger prompt used to extract the search string from the user question

Reserved keywords in Trigger Prompts

Reserved keywords act as placeholders. When used in a trigger prompt, they are automatically replaced by the system.

 

Keyword

Purpose / Replaced by


USER_MESSAGE

The input user message

SEARCH_CONTEXT

Search results returned by the knowledge base/uploaded documents

HISTORY

Past messages of this chat

DATE_TIME

Current date and time in ISO format

LANGUAGE

Language of the user message

 

For example, a trigger prompt can look like the triggerPromptSearch value from the example configuration above.
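Expanded (with the \n escape sequences resolved), that trigger prompt reads as follows:

new_question:
```
USER_MESSAGE
```

sources:
```
SEARCH_CONTEXT
```

past_conversation:
```
<conversation>HISTORY</conversation>
```

new_question:
```
USER_MESSAGE
```

Answer concisely in LANGUAGE and ALWAYS reference each of your facts:

At runtime, the system replaces USER_MESSAGE, SEARCH_CONTEXT, HISTORY and LANGUAGE with the actual user question, the retrieved sources, the past chat messages and the language of the user message, as listed in the table above.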

(Tool) Definition

Author

@Sadique Sheik

 
