
All our prompts and functionalities work through Agents. In this document I describe how to think about agents in the context of Unique and introduce a measure to express the agency of a workflow.

Definitions:

  • Tool in OpenAI API: A tool is a higher-level construct that represents an external capability or resource that the model can access or use. In the context of the OpenAI API, tools are collections of defined functions that are made available to the model to assist in responding to queries. Tools can represent anything from fetching data from a system to performing computations or updating UI elements. They act as the interface between the model and external systems. Currently, only functions are supported as a tool. See https://platform.openai.com/docs/api-reference/chat/create#chat-create-tools

  • Function (deprecated)
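To make the tool definition above concrete, here is a minimal sketch of the `tools` parameter shape for the OpenAI Chat Completions API. The function name (`document_search`) and its parameters are our own illustrative placeholders, not a real Unique API.

```python
# Illustrative shape of one entry in the `tools` list passed to the
# OpenAI Chat Completions API. Only "function" is supported as a tool type.
# The function name and parameters below are placeholders for this document.
document_search_tool = {
    "type": "function",
    "function": {
        "name": "document_search",
        "description": "Search the knowledge base for passages relevant to the query.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The search query.",
                },
                "language": {
                    "type": "string",
                    "description": "Language of the underlying documents.",
                },
            },
            "required": ["query"],
        },
    },
}

# This list would be passed as the `tools` argument of a chat completion call.
tools = [document_search_tool]
```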

Agentic: what is it?

In the context of Large Language Models (LLMs), "agentic" refers to iterative workflows where the LLM is prompted multiple times to refine and improve its output incrementally, instead of generating the final output in a single pass. This approach aims to enhance the quality and accuracy of the results by mimicking human-like processes of revision and refinement.
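The iterative draft-and-revise idea can be sketched in a few lines. This is a minimal illustration, not our implementation: `call_llm` and `looks_good` are stubs standing in for real LLM calls.

```python
def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call."""
    return f"draft for: {prompt}"


def looks_good(draft: str) -> bool:
    """Stub acceptance check; a real agent would prompt the LLM to critique."""
    return len(draft) > 0


def agentic_answer(question: str, max_rounds: int = 3) -> str:
    # Generate an initial draft, then iteratively ask the model to revise it,
    # instead of returning the first single-pass output.
    draft = call_llm(question)
    for _ in range(max_rounds):
        if looks_good(draft):
            break
        draft = call_llm(f"Revise this draft to fix errors:\n{draft}")
    return draft
```

The cap on rounds matters: as discussed below, fully autonomous refinement loops can fail to terminate on their own.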

How “agentic” an agent is can be expressed along 4 dimensions, according to Andrew Ng, who explained this nicely in a talk:

https://www.youtube.com/watch?v=sal78ACtGTc

These 4 dimensions are:

  • Reflection:
    The ability to look at the produced result and check whether it is correct given all the available information. Like a human would reread a written text and revise it to make the statements correct.

  • Tool use:
    The ability to choose and use different functionalities that are external to the LLM, for example performing a search, calling an API, or sending an email. Like a human would decide which tool to use, e.g. Outlook to send an email.

  • Planning:
    The ability to create subgoals to reach a given goal. Like a human would make a plan on how to reach a goal. For example, when planning a vacation you have the subgoals of booking a flight, booking the hotel, and so on.

  • Multi-agent collaboration:
    The ability to use multiple LLMs to achieve a certain task. Like humans in a company work together: the CEO, the developers, and the designers all work in tandem to achieve a certain goal and contribute to it.

For our workflows we estimate, on a scale from 1-5, how advanced the Agents are, i.e. how autonomous the agent is in the given dimension.

One Simple but Powerful Example: Our Agentic RAG Flow

When a user uses a space with our Document Search (RAG flow) in it and submits a prompt like “What is our code of conduct about?”, the following happens:

  1. Orchestrator: The Agent decides which of the given tools should be chosen, in this case the Document Search.

  2. Document Search: The Agent asks another agent (Prompt Rephrasor) to rephrase the question to better incorporate the history and to make sure the text is in the language of the underlying documents, to produce better matches.

  3. Document Search: The Agent uses the Document Search alongside the information about the user and the prompt to filter the documents down to the correct ones, and surfaces the information to another agent (Librarian) that assembles the information into an answer with references.

  4. Document Search: The given information is taken by yet another agent (Hallucinator Checker) to check whether the answer is hallucinated, by comparing the information content of the answer against the relevant sources.

  5. As an optional next step, the agent (Answer Revisor) can decide, based on the outcome of the hallucination check, to make another attempt at answering the question. (not yet implemented)
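The steps above can be sketched as a small pipeline. All function names here are hypothetical stand-ins for the agents in the flow, with stubbed behavior; none of them are real Unique APIs.

```python
def prompt_rephraser(question: str, history: list) -> str:
    # Step 2: fold the chat history into the question and match the
    # language of the underlying documents (stubbed).
    return f"{question} (history: {history})"


def document_search(query: str) -> list:
    # Step 3: retrieve and filter documents (stubbed result).
    return ["code_of_conduct.pdf, section 2"]


def librarian(question: str, sources: list) -> str:
    # Step 3: assemble the retrieved information into an answer with references.
    return f"Answer to '{question}' [refs: {', '.join(sources)}]"


def hallucination_check(answer: str, sources: list) -> bool:
    # Step 4: compare the answer against the retrieved sources (stubbed as a
    # simple containment check).
    return all(src in answer for src in sources)


def rag_flow(question: str, history: list):
    query = prompt_rephraser(question, history)
    sources = document_search(query)
    answer = librarian(question, sources)
    grounded = hallucination_check(answer, sources)
    # Step 5 (Answer Revisor) would retry here when grounded is False.
    return answer, grounded
```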

Here is a diagram that illustrates the process:

Here is how we would rate this agent on the multi-dimensional scale defined above:
Reflection: 3/5
Tool Use: 2/5
Planning: 1/5
Multi-Agent: 3/5

Why don’t we give Agents full autonomy yet?

Agents with LLMs are not yet at the level of humans in decision making and need guidance to produce usable results.
For example, if you leave the Agents to fully autonomously plan and reflect on their plan, they often get stuck in a loop that they cannot exit on their own.
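One common guard against this failure mode is a hard iteration cap on the plan/reflect loop. This is a generic sketch, not our implementation; `llm` is any callable that takes a prompt and returns text.

```python
def plan_and_reflect(task: str, llm, max_iterations: int = 5) -> str:
    """Plan/reflect loop with a hard iteration cap, so a fully autonomous
    agent cannot spin forever on its own critique."""
    plan = llm(f"Plan: {task}")
    for _ in range(max_iterations):
        critique = llm(f"Critique this plan: {plan}")
        if "OK" in critique:
            # The critique accepts the plan; stop iterating.
            return plan
        plan = llm(f"Revise the plan given this critique: {critique}")
    # Cap reached: bail out with the current plan instead of looping forever
    # (in practice one would escalate to a human or a fallback here).
    return plan
```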

Another example: if you give the LLM too many tools to choose from, it is not good at making the right choice.

Agentic implementation

This workflow represents a structured and modular system designed to manage user inputs. The system operates in two main phases: module selection and function calling within a module, both aimed at ensuring that the user’s query or task is efficiently addressed using different internal resources and functions.

  1. Module Selection:

    • The Space Agent takes user input and selects one of several available modules. These modules represent different use cases (e.g., Internal Knowledge, Investment Assistant, Questionnaire, etc.).

    • Once a module is selected, the system transitions to the function-calling stage within a module.

  2. Function Calling within a module:

    • Inside a module (e.g., Internal Knowledge), specific functions are triggered based on user input or requirements.

    • Functions can include Knowledge Base Search, Table Search, Plot Generator, or Comparator.

    • The Module Agent determines which functions (if any) to call, based on its system state and the user's needs.

    • If no function is called, the result is streamed out and the process ends.

    • If one or more functions are called, their results (e.g., search results, plots, comparisons) are saved in the state/memory and appended to the history, allowing subsequent iterations to build upon previous outputs.

The module agent ensures that the appropriate functions are called within the loop until the required information is retrieved. However, the module selection phase does not allow looping to avoid unpredictable behavior.
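The two phases can be sketched as follows. This is a simplified illustration under assumed names: `select_module`, `run_module`, and the function-name strings are hypothetical, and `llm` stands in for the Space Agent / Module Agent calls.

```python
def select_module(user_input: str, llm) -> str:
    # Phase 1: single-shot module selection. Deliberately no loop here,
    # to avoid unpredictable behavior.
    return llm(f"Pick a module for: {user_input}")


def run_module(module: str, user_input: str, llm, max_steps: int = 5) -> str:
    # Phase 2: loop of function calls inside the chosen module until the
    # agent decides it has enough information to answer.
    history = []
    for _ in range(max_steps):
        decision = llm(f"Given {history}, which function (or 'answer')? {user_input}")
        if decision == "answer":
            break
        # Stand-in for actually calling the chosen function.
        result = f"result of {decision}"
        # Results are saved and appended to the history so subsequent
        # iterations build on previous outputs.
        history.append((decision, result))
    return llm(f"Answer using {history}: {user_input}")
```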

Important points to consider:

  • The Module Agent’s system message is updated dynamically, depending on previous function calls and results. E.g. the information about referencing style is only appended if sources were found in previous function calls.

  • Modules need to be redesigned such that they include a loop of function calling. Each functionality of a module needs to be abstracted into a function that is usable across different modules.

  • This way, we build up a collection of functions that can be activated/deactivated for new modules.
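The dynamic system-message point above can be illustrated with a small helper. The function name and the referencing-style sentence are illustrative assumptions, not the actual wording we use.

```python
def build_system_message(base: str, function_results: list) -> str:
    # Append referencing-style instructions only when earlier function calls
    # actually surfaced sources; otherwise keep the base message unchanged.
    msg = base
    if any(result.get("sources") for result in function_results):
        msg += "\nCite sources using [n] reference markers."
    return msg
```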
