Your own ChatGPT assistant

Motivation & Goal

In certain scenarios, you may need your "assistant" to execute a particular task without retrieving any information from your knowledge base. Examples of these scenarios include:

  • Translating text

  • Python code generation

  • Creating Jira tickets

For the assistant to carry out these tasks effectively, it must receive an instruction, commonly referred to as a "system message".

Structure and Logic of an Assistant

An assistant as described above uses a single component: the Chat with GPT "module". Every user input is handled by this single module. The module's capabilities are outlined within the system message, where you specify the tasks for the assistant to execute and the desired format for its responses.

Example of a system message for a Python code generation assistant:

You are an expert in generating Python code. The user input will be an instruction that needs to be translated into code.

Always add an explanation to the code section in your answer.
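For illustration, given a hypothetical user instruction such as "write a function that reverses a string", an assistant configured with this system message might respond with code along these lines (the function name and example are made up for this sketch):

```python
def reverse_string(text: str) -> str:
    """Return the input string reversed.

    Explanation (as the system message requires): Python slicing with
    a step of -1 walks the string from the last character to the
    first, producing the reversed string.
    """
    return text[::-1]

print(reverse_string("assistant"))  # prints "tnatsissa"
```

The explanation in the docstring is exactly the kind of commentary the system message above instructs the assistant to include with every code section.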

Note about chat history:

The Chat with GPT module takes the last two user-assistant interactions into account.
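The effect of this two-interaction window can be sketched in Python. This is a simplified illustration of the behaviour described above, not the platform's actual implementation:

```python
# Simplified sketch: keep only the last N user-assistant interactions
# when building the context for the next LLM call. Illustrative only.
MAX_HISTORY_INTERACTIONS = 2  # default described in this article

def build_context(history, new_user_message):
    """history is a list of (user_message, assistant_reply) pairs."""
    recent = history[-MAX_HISTORY_INTERACTIONS:]  # last 2 pairs only
    messages = []
    for user_msg, assistant_msg in recent:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": assistant_msg})
    messages.append({"role": "user", "content": new_user_message})
    return messages

history = [("q1", "a1"), ("q2", "a2"), ("q3", "a3")]
context = build_context(history, "q4")
# Only q2/a2 and q3/a3 survive; q1/a1 is dropped from the context.
```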

Step-by-step: How do you set up this assistant?

General instructions on how to set up a space →

1. Create new space



2. Name and description

Define the name and description of the assistant (section "About"). In addition, you can display a notice to users mentioning that the answers are generated with AI.


3. Configure AI Assistant

Add the module “Chat with GPT”.

4. Update System Message

Click on the pencil icon to replace the standard system message with the message intended for your specific use case. Here you can also change the GPT model used, the temperature, and the number of chat interactions kept in the history.

  • languageModel: The available GPT models and their names can be found here

  • temperature: This setting controls the level of creativity and randomness of the outputs. It ranges from 0 to 1. At a temperature of 0, replies are highly predictable, bordering on deterministic (a given prompt will typically elicit the same answer). Conversely, a temperature of 1 allows for a broad spectrum of variation in the responses.

  • maxHistoryInteraction: By default, the history for a new LLM call incorporates the last two interactions between the user and the assistant, so the system recalls the most recent pair of questions and answers. Increase this parameter to retain a longer history.
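Put together, the module configuration might look something like the following sketch. The parameter names are the ones listed above; the exact format and the model name shown are assumptions, and the available names depend on your installation:

```json
{
  "languageModel": "gpt-4",
  "temperature": 0,
  "maxHistoryInteraction": 2
}
```

A temperature of 0 is a sensible starting point for task-oriented assistants such as code generation, where predictable answers are usually preferable.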

The tool definition can be left unchanged (it does not need to be blanked out).

5. Suggested Prompts (optional)

Add a few suggested prompts that are displayed to the user when starting a chat.

6. Members

Define which users will have access to this assistant by adding them to the list.

7. Publish Assistant

Publish your assistant by pressing the Publish button. Your assistant will now be visible in the chat interface.

Note:

This module does not look at any documents uploaded into a chat or the knowledge centre, meaning that no retrieval-augmented generation (RAG) is performed. Therefore, do not activate the Upload in Chat toggle.

Required and optional modules

The following modules are required/optional for this assistant:

Required: Chat with GPT

Optional: none


Author

@Fabian Schläpfer

© 2024 Unique AG. All rights reserved.