
An assistant and its modules are created and modified via API calls. This guide walks you through the calls you have to make and the settings and configurations that can be applied.

Prerequisites

  • Token of a user or service user that has an admin role assigned → <yourToken>

  • The base URL of your application → <baseUrl>

To get the token and the URL, see: How to get a Token for our APIs
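
For the examples below, it can be convenient to keep these two values in shell variables so the cURL snippets need less manual editing. This is only a minimal sketch, assuming a bash-like shell; the values shown are illustrative placeholders, not real ones:

# Illustrative placeholder values – replace with your own base URL and token
baseUrl="mycompany.example.com"
yourToken="eyJhbGciOi..."

# All calls in this guide go to the same GraphQL endpoint; a simple query can be
# used to verify that URL and token work before creating anything:
curl --location --globoff "https://gateway.${baseUrl}/chat/graphql" \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer ${yourToken}" \
--data '{"query":"query { __typename }"}'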

Assistant

Create a new assistant

This can be done with this cURL. Just replace the following placeholders:

  • <baseUrl> / <yourToken>

  • <assistantDefinition> → Definition of your assistant with corresponding modules (see example below)

curl --location --globoff 'https://gateway.<baseUrl>/chat/graphql' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <yourToken>' \
--data '{"query":"mutation CreateAssistant($input: AssistantCreateInput!) {\n    createAssistant(input: $input) {\n        id\n        name\n        fallbackModule\n        languageModel\n        chatUpload\n        settings\n        modules{\n            id\n            name\n            configuration\n            weight\n            isExternal\n            }\n    }\n}","variables":<assistantDefinition>}'

Update an assistant

To update some settings of an assistant, use the following cURL. Update these placeholders:

  • <baseUrl> / <yourToken>

  • <assistantId>

  • <updateAssistant> → contains the variables that should be updated (see the example after the cURL below). Variables that are not contained in <updateAssistant> remain unchanged

curl --location --globoff 'https://gateway.<baseUrl>/chat/graphql' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <yourToken>' \
--data '{"query":"mutation UpdateAssistant($updateAssistantId: String!, $input: AssistantUpdateInput!) {\n  updateAssistant(id: $updateAssistantId, input: $input) {\n    name\n    id\n    languageModel\n    settings\n    chatUpload\n    modules {\n      id\n      name\n      configuration\n    }\n  }\n}","variables":{"updateAssistantId":"<assistantId>","input":{<updateAssistant>}}'

Delete an assistant

To delete an assistant, use the following cURL. Update these placeholders:

  • <baseUrl> / <yourToken>

  • <assistantId>

curl --location --globoff 'https://gateway.<baseUrl>/chat/graphql' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <yourToken>' \
--data '{"query":"mutation DeleteAssistant($deleteAssistantId: String!) {\n  deleteAssistant(id: $deleteAssistantId) {\n    name\n  }\n}","variables":{"deleteAssistantId":"<assistantId>"}}'

Assistant variables and settings

The list below contains the parameters that can be set when an assistant is created or updated.

  • name (required): Name of the assistant.

  • fallbackModule (required): Module used as fallback if the module selection could not find a suitable module.

  • languageModel (optional): Language model used for module selection. See the section with available GPT models below. Default: AZURE_GPT_35_TURBO.

  • chatUpload (optional): Enable or disable the upload of documents into the chat. Options: DISABLED (default), ENABLED.

  • settings (optional): Collection of various settings:

      • showPdfHighlighting: Open references in the same chat window on the right side; the referenced section is highlighted in the PDF. Options: false (default), true.

      • modelChoosing: Type of module selection. Options: BY_FUNCTION_CALL, BY_PROMPT (default).

      • isPinned: Flag whether a space is pinned (marked with a star); this is only relevant for spaces. Options: false (default), true.

  • modules (required): List of modules to create. See Module dependent configurations for details about the configuration of a module.

Example input for an Internal Knowledge assistant containing two modules (SearchInVectorDB, Translate).

{
  "input": {
    "name": "Internal Knowledge",
    "fallbackModule": "SearchInVectorDB",
    "languageModel": "AZURE_GPT_35_TURBO_0613",
    "chatUpload": "Disabled",
    "settings": {
        "showPdfHighlighting": true,
        "modelChoosing": "BY_FUNCTION_CALL",
        "isPinned": true        
    },
    "modules": {
      "create": [
        {
            "name": "SearchInVectorDB",
            "configuration": {
            },
            "description": null,
            "isExternal": false,
            "weight": 10000
        },
        {
            "name": "Translate",
            "configuration": {
            },
            "description": null,
            "isExternal": false,
            "weight": 6000
        }
      ]
    }
  }
}
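
To use this definition with the createAssistant call above, the whole object is passed as the value of "variables". A minimal sketch, assuming a bash-like shell and that the definition above is stored in a file called assistant.json (the file name is illustrative):

# assistant.json contains the {"input": {...}} object shown above
curl --location --globoff 'https://gateway.<baseUrl>/chat/graphql' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <yourToken>' \
--data '{"query":"mutation CreateAssistant($input: AssistantCreateInput!) { createAssistant(input: $input) { id name } }","variables":'"$(cat assistant.json)"'}'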

You can find more about assistants here: Assistants

Module

Create a module

This can be done with this cURL. Just replace the following placeholders:

  • <baseUrl> / <yourToken>

  • <assistantId> → ID of the assistant where the module should be created

  • <moduleDefinition> → Definition of the new module (see example below)

curl --location --globoff 'https://gateway.<baseUrl>/chat/graphql' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <yourToken>' \
--data '{"query":"mutation CreateModule($assistantId: String!, $input: ModuleCreateInput!) {\n  createModule(assistantId: $assistantId, input: $input) {\n    id\n    name\n    assistantId\n  }\n}","variables":{"assistantId":"<assistantId>","input":<moduleDefinition>}'

Update a module

This can be done with this cURL. Just replace the following placeholders:

  • <baseUrl> / <yourToken>

  • <moduleId>

  • <updatesModule> → new set of module parameters (see the example after the cURL below)

curl --location --globoff 'https://gateway.<baseUrl>/chat/graphql' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <yourToken>' \
--data '{"query":"mutation UpdateModule($input: ModuleUpdateInput!, $moduleId: String!) {\n  updateModule(input: $input, moduleId: $moduleId) {\n    id\n    name\n    configuration\n    weight\n  }\n}","variables":{"input":{<updatesModule>},"moduleId":"<moduleId>"}}'

Delete a module

This can be done with this cURL. Just replace the following placeholders:

  • <baseUrl> / <yourToken>

  • <moduleId>

curl --location --globoff 'https://gateway.<baseUrl>/chat/graphql' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <yourToken>' \
--data '{"query":"mutation DeleteModule($moduleId: String!) {\n  deleteModule(moduleId: $moduleId)\n}","variables":{"moduleId":"<moduleId>"}}'

Module variables and configuration

The following parameters can be set for a module (an example follows below):

  • configuration → depends on the module. See the section Module dependent configurations for more details

  • isExternal → flag whether the module is external (i.e. developed with the SDK)

  • weight → defines the order of the modules in the module selection prompt

  • toolDefinition → description of the module used for function calling. It follows the same structure as OpenAI's tool definitions (which is why this parameter is called toolDefinition and not moduleDefinition)

{
    "name": "SearchInVectorDB",
    "configuration": {
      <collectionOfParameters>
    },
    "isExternal": false,
    "weight": 10000,
    "toolDefinition": {
        "type": "function",
        "function": {
            "name": "SearchInVectorDB",
            "description": "Search information in the employee knowledge base for a specific question or assignment, e.g. explain, elaborate or describe. If the employee mentions a specific document and you do not know it, ALWAYS use this function. The employee can ask specific formats. Some examples: 'summarise directive 76 in bullet points', 'how to export data to third-parties', 'what are employee benefits'.",
            "parameters": {
                "type": "object",
                "properties": {
                    "instruction": {
                        "type": "string",
                        "description": "The question to search in the knowledge base, e.g. Was gibt es zu essen?"
                    }
                }
            }
        }
    }
}

Module dependent configurations

For each module below, an example configuration is shown, followed by the parameters that can be set and their options.

SearchInVectorDB

Example configuration:

"configuration": {
        "languageModel": "AZURE_GPT_4_TURBO_1106",
        "scopeIds": ["scope_1", "scope_2"],
        "searchType": "COMBINED",
        "chunkedSources": true,
        "scopeToChatOnUpload": true,
        "historyIncluded": false,
        "maxTokens": 7000,
        "ftsSearchLanguage": "english"
}

Parameters:

  • languageModel (string): GPT model to be used. See the section with available GPT models below. Default: AZURE_GPT_35_TURBO (GPT-35-turbo (0301)).

  • scopeIds (object): Scopes that the module can access.

  • searchType (string): RAG approach used to search for chunks. VECTOR: semantic search (similarity of input and chunk embeddings). COMBINED: hybrid search combining vector and full-text search.

  • chunkedSources (boolean): Whether chunks of the same document are appended as individual sources to the GPT content or merged into one source. true: each chunk is a separate source. false: chunks from the same document are merged into one source.

  • scopeToChatUpload (boolean): Scope restriction to documents that are uploaded to the chat. If no documents are uploaded, the scopes in scopeIds are used. true: scope restriction on (if a document is uploaded). false: scope restriction off.

  • historyIncluded (boolean): Controls whether the previous chat conversation is included in GPT calls. true: history is always included. false: history is only included for follow-up questions.

  • maxTokens (integer): Maximum number of tokens used by sources and the previous conversation. The default value depends on the languageModel used: AZURE_GPT_35_TURBO: 3000, AZURE_GPT_35_TURBO_0613: 3000, AZURE_GPT_35_TURBO_16K: 14000, AZURE_GPT_4_0613: 7000, AZURE_GPT_4_32K_0613: 30000, AZURE_GPT_4_VISION_PREVIEW: 7000, AZURE_GPT_4_TURBO_1106: 7000, AZURE_GPT_4_TURBO_2024_0409: 7000.

  • ftsSearchLanguage (string): Primary language used for full-text search. This should match the predominant language of the documents in the knowledge centre. Default: english.

ContextMemorySearch

Example configuration:

"configuration": {
        "languageModel": "AZURE_GPT_4_TURBO_1106",
        "chunkedSources": true
}

Parameters:

  • languageModel (string): GPT model to be used.

  • chunkedSources (boolean): Whether chunks of the same document are appended as individual sources to the GPT content or merged into one source. true: each chunk is a separate source. false: chunks from the same document are merged into one source.

DocumentSummarizer

Example configuration:

"configuration": {
        "languageModel": "AZURE_GPT_4_TURBO_1106"
}

Parameters:

  • languageModel (string): GPT model to be used.

EmailWriter

Example configuration:

"configuration": {
        "languageModel": "AZURE_GPT_4_TURBO_1106"
}

Parameters:

  • languageModel (string): GPT model to be used.

ExternalKnowledge

Example configuration:

"configuration": {
        "languageModel": "AZURE_GPT_4_TURBO_1106",
        "temperature": 0.5,
        "systemPromptExternalKnowledge": "Example system prompt",
        "maxHistoryInteraction": 2
}

Parameters:

  • languageModel (string): GPT model to be used.

  • temperature (number): Temperature passed to ChatGPT. Range: 0-1. Default: 0.5.

  • systemPromptExternalKnowledge (string): System prompt. The default system prompt is (depending on the languageModel): "You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture.\nKnowledge cutoff: 2023-04.\nCurrent date: DAYDATE."

  • maxHistoryInteraction (number): Maximum number of user-assistant interactions taken into account in the history. Default: 2.

InvestmentResearchDocuments

InvestmentResearchTable

LunchSearchV4

Example configuration:

"configuration": {
        "languageModel": "AZURE_GPT_4_TURBO_1106"
}

Parameters:

  • languageModel (string): GPT model to be used.

QueryTable

Example configuration:

"configuration": {
        "languageModel": "AZURE_GPT_4_TURBO_1106",
        "tableConfig": TableConfig[],
        "searchExamples": ChatCompletionRequestMessage[],
        "showTableReference": boolean
}

Parameters:

  • languageModel (string): GPT model to be used.

  • tableConfig (TableConfig[]): tbd

  • searchExamples (ChatCompletionRequestMessage[]): tbd

  • showTableReference (boolean): tbd

TranscriptInteraction

Example configuration:

"configuration": {
        "languageModel": "AZURE_GPT_4_TURBO_1106",
        "scopeId": "scope_1",
        "maxTokens": 7000,
        "templateName": "template.xlsx"
}

Parameters:

  • languageModel (string): GPT model to be used.

  • scopeId (string): Scope that the module can access.

  • maxTokens (integer): Maximum number of tokens used by sources and the previous conversation.

  • templateName (string): Name of the Excel template file that will be filled with the extracted values. It needs to be uploaded to the same scopeId.

Translate

Example configuration:

"configuration": {
        "languageModel": "AZURE_GPT_4_TURBO_1106"
}

Parameters:

  • languageModel (string): GPT model to be used.

WhatsappWriter

Example configuration:

"configuration": {
        "languageModel": "AZURE_GPT_4_TURBO_1106"
}

Parameters:

  • languageModel (string): GPT model to be used.

Available GPT models

The list below contains the available GPT models with the corresponding key that has to be used in the configurations for assistants and modules:

  • GPT-35-turbo (0301) → AZURE_GPT_35_TURBO

  • GPT-35-turbo (0613) → AZURE_GPT_35_TURBO_0613

  • GPT-35-turbo-16K (0613) → AZURE_GPT_35_TURBO_16K

  • GPT-4 (0613) → AZURE_GPT_4_0613

  • GPT-4-32K (0613) → AZURE_GPT_4_32K_0613

  • GPT-4-turbo (0409) → AZURE_GPT_4_TURBO_2024_0409

GPT models available in preview mode (not recommended for productive applications):

  • GPT-4-turbo (1106) → AZURE_GPT_4_TURBO_1106

You can find more about modules here: Modules


 
