Architecture Overview
Unique components
Ingestion Service
Responsible for taking in files from various sources and converting them into Markdown format for storage and retrieval.
The ingestion service is responsible for taking in files from various sources, such as web pages, SharePoint, or Atlassian products, and bringing them into the system. It handles different file types, including PDFs, Word documents, Excel files, PowerPoint presentations, text files, CSV files, and Markdown files. The service converts these files into Markdown format, preserving the structure and extracting important information like titles, subtitles, and tables. It then creates chunks out of these documents and saves them into a vector database and a Postgres database. The ingestion service also handles scalability and performance, as it needs to be able to handle large volumes of documents being ingested into the system.
More details: Ingestion
Ingestion Workers
The ingestion workers are responsible for processing different types of files that are ingested into the system. They take in files such as PDFs, Word documents, Excel files, PowerPoint presentations, text files, CSV files, and Markdown files. Each type of file needs to be processed in a different way to extract the necessary information. For example, in the case of PDFs, the ingestion workers need to extract titles, subtitles, tables, and other relevant information. The workers convert the files into Markdown format, which helps preserve the structure of the document and allows the models to better understand and generate results based on the content. The ingestion workers also create chunks out of the documents, which are then saved into a vector database and a Postgres database. This process allows for efficient storage and retrieval of the ingested documents.
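The conversion-and-chunking step described above can be sketched in a few lines. This is an illustrative simplification, assuming a size-based splitter that keeps the nearest heading as context; the names `Chunk` and `chunk_markdown` are hypothetical, not the actual Unique API.

```python
# Hypothetical sketch of the chunking step: split a Markdown document into
# size-bounded chunks while preserving the nearest heading for retrieval context.
from dataclasses import dataclass

@dataclass
class Chunk:
    heading: str   # nearest preceding title/subtitle, kept as context
    text: str

def chunk_markdown(markdown: str, max_chars: int = 500) -> list:
    chunks, heading, buffer = [], "", []
    for line in markdown.splitlines():
        if line.startswith("#"):                         # new section: flush buffer
            if buffer:
                chunks.append(Chunk(heading, "\n".join(buffer)))
                buffer = []
            heading = line.lstrip("# ").strip()
        else:
            buffer.append(line)
            if sum(len(l) for l in buffer) > max_chars:  # size limit reached
                chunks.append(Chunk(heading, "\n".join(buffer)))
                buffer = []
    if buffer:
        chunks.append(Chunk(heading, "\n".join(buffer)))
    return chunks
```

Each resulting chunk would then be embedded and written to the vector database, with its Markdown and metadata stored in Postgres.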
Postgres and VectorDB
In the Unique FinanceGPT platform, Postgres and VectorDB are used to store and retrieve data. Postgres stores the Markdown and metadata of the ingested documents, and it also saves the history of user interactions and chat logs. The metadata stored in Postgres includes information about the scope of access to documents, allowing for access control based on user roles and permissions.
On the other hand, VectorDB is used to store the embeddings of the documents. Embeddings are numerical representations of the documents that capture their semantic meaning. By storing the embeddings in VectorDB, the system can perform efficient vector searches to retrieve relevant documents based on user queries. The vector search is combined with full-text search to provide high-quality search results.
Both Postgres and VectorDB play crucial roles in the architecture of the Unique FinanceGPT platform, enabling secure storage and retrieval of documents and supporting various functionalities such as access control, search, and auditability.
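The combination of vector search and full-text search mentioned above can be illustrated with a toy scorer. The blending scheme (`alpha`), function names, and in-memory document list are assumptions for illustration; a real deployment would use the vector database and Postgres full-text search directly.

```python
# Illustrative hybrid search: blend a cosine-similarity score over embeddings
# with a simple keyword-overlap score over the document text.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query_vec, query_terms, docs, alpha=0.5):
    """docs: list of (doc_id, embedding, text); returns (doc_id, score) ranked."""
    results = []
    for doc_id, emb, text in docs:
        vec_score = cosine(query_vec, emb)
        words = text.lower().split()
        kw_score = sum(t.lower() in words for t in query_terms) / max(len(query_terms), 1)
        results.append((doc_id, alpha * vec_score + (1 - alpha) * kw_score))
    return sorted(results, key=lambda r: r[1], reverse=True)
```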
Org Structure Service
Handles the organizational structure of an enterprise, allowing for the creation of departments, sub-departments, and user groups for access control.
The Org Structure Service in the Unique FinanceGPT platform is responsible for creating and managing the organizational structure within an enterprise. It allows the platform to mimic the departments and access rights structure of the company. This service ensures that only authorized users have access to specific documents based on their roles and permissions within the organization. It helps define the scope of access for each user and enables better quality control and data security within the platform.
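The scope check described above can be sketched as follows. The data model (a flat child-to-parent department mapping, with users inheriting the scopes of their groups' ancestors) is an assumption for illustration, not the actual Unique schema.

```python
# Hypothetical scope-based access check: a user may read a document if its
# scope matches one of their groups or any ancestor department of those groups.
def user_scopes(user_groups, hierarchy):
    """Expand a user's groups to include all parent departments."""
    scopes = set()
    for group in user_groups:
        while group is not None:
            scopes.add(group)
            group = hierarchy.get(group)   # walk up to the parent department
    return scopes

def can_access(document_scope, user_groups, hierarchy):
    return document_scope in user_scopes(user_groups, hierarchy)
```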
Chat Service
Enables users to interact with the system and chat against ingested documents, supporting login, chat history storage, and chat assistants.
The chat service in the Unique FinanceGPT platform allows users to interact with the system by asking questions or making requests. It provides a chat interface where users can input their queries and receive responses from the system. The chat service also handles the retrieval of relevant documents from the knowledge center based on the user's query and presents the information in a conversational format. Additionally, the chat service keeps track of the chat history, including the prompts, responses, and any streamed information, and saves this data for auditing purposes. It also handles the theming of the chat interface to match the branding and colours of the organization using the platform.
Theme Service
Allows users to customize the appearance and branding of the chat interface to align with their organization's identity.
The theme service is responsible for allowing users to customize the appearance of the Unique FinanceGPT platform according to their branding preferences. It enables users to set their own colours, logos, and other visual elements to create a personalized and branded experience within the platform.
More details: Style Unique FinanceGPT to your Corporate Identity
Anonymization Service
Ensures data privacy by anonymizing sensitive information in user prompts and de-anonymizing model responses.
The anonymization service in the Unique FinanceGPT platform is responsible for ensuring that sensitive information, such as customer identification data (CID), is not presented to the models during chat interactions. It works by taking the user's prompt, which may contain CID data or other sensitive tokens, and replacing those tokens with anonymized placeholders. The service then sends the anonymized prompt for processing, ensuring that the models do not have access to the original sensitive information. Once the models generate a response, the anonymization service replaces the anonymized tokens with the original sensitive tokens, allowing the user to receive the response without any impact on the anonymization process. This ensures that the chat can be conducted securely while protecting sensitive data.
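The anonymize/de-anonymize round trip described above can be sketched as follows. Detecting sensitive tokens from a fixed list and the `<PERSON_n>` placeholder format are simplifications for illustration; a production service would use pattern and entity detection.

```python
# Hedged sketch: replace sensitive tokens with placeholders before the prompt
# reaches the model, then restore them in the model's response.
def anonymize(prompt, sensitive_tokens):
    mapping = {}
    for i, token in enumerate(sensitive_tokens):
        placeholder = "<PERSON_%d>" % i
        if token in prompt:
            prompt = prompt.replace(token, placeholder)
            mapping[placeholder] = token   # remembered only on our side
    return prompt, mapping

def deanonymize(response, mapping):
    for placeholder, token in mapping.items():
        response = response.replace(placeholder, token)
    return response
```

The model only ever sees the placeholder, while the user receives the original token in the final response.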
Models
The Unique FinanceGPT platform can connect to and use different models, including GPT-4, GPT-3.5 Turbo, LLaMA, and even open-source or custom models. The connection is made by sending the user's prompt, along with the relevant documents, to the chosen model. The system allows for flexibility in selecting the appropriate model for each assistant or prompt. Multiple instances of the same model can be used, even in different data centers, to increase throughput and handle rate limits. The system ensures a good user experience by automatically retrying with exponential backoffs if there are any issues with the models.
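The retry behaviour described above can be sketched as a small wrapper. The function name and parameters are illustrative; `call_model` stands in for any model endpoint call.

```python
# Illustrative retry wrapper with exponential backoff: wait 1s, 2s, 4s, ...
# between attempts, re-raising only after the final attempt fails.
import time

def with_retries(call_model, max_attempts=5, base_delay=1.0):
    for attempt in range(max_attempts):
        try:
            return call_model()
        except Exception:
            if attempt == max_attempts - 1:
                raise                              # out of attempts
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
```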
Embeddings
Any embedding model can be used; OpenAI's Ada is used by default, but it is not mandatory.
Audit Logs
Maintains comprehensive logs of system interactions, API calls, and user activities for security, accountability, and compliance purposes.
The audit log service is responsible for recording and storing all the relevant information about the interactions and activities that occur within the Unique FinanceGPT platform. It captures and logs various events such as API calls, responses, WebSocket streams, and other relevant information. The audit logs are written into a secure container that can only be accessed by authorized auditors. These logs provide a detailed record of the system's activities, allowing for security monitoring, compliance, and auditing purposes.
Benchmarking (Prompt Testing)
Benchmarking enables the client to test prompts at scale to ensure high quality (accuracy) of the output (answers). Answers are automatically compared to the ground truth and scored using LLMs and vector distance, and hallucinations are detected, so that data and model drift is caught early.
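The vector-distance part of the scoring above can be illustrated with a toy example. The bag-of-words embedding here is a stand-in assumption; real benchmarking uses the platform's embedding model and additionally an LLM-as-judge comparison.

```python
# Simplified sketch: score an answer against the ground truth by cosine
# similarity of toy bag-of-words vectors (1.0 = identical wording, 0.0 = disjoint).
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())

def similarity_score(answer, ground_truth):
    a, b = embed(answer), embed(ground_truth)
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0
```

A score tracked over time per prompt would make drift visible as a downward trend.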
More details: Benchmarking
Analytics
Analytics reports (e.g., user engagement) are available via API or through the Unique UI.
More details: Analytics
Tokenizers
A tokenizer is a crucial component that processes input text to be understood by the model. It segments text into tokens, which can be words, subwords, or characters. Each token is then matched with a unique integer from a pre-established vocabulary on which the model was trained. For words not in the vocabulary, the tokenizer uses special strategies, such as breaking them down into known subwords or using a placeholder for unknown tokens. Additionally, tokenizers may encode extra information like text format and token positions to aid FinanceGPT's comprehension. Once tokenized, the sequence of integers is ready for the model to process. After FinanceGPT generates its output, a reverse process, known as detokenization, is used to convert the token IDs back into readable text.
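The encode/decode round trip described above can be shown with a deliberately tiny toy tokenizer. The word-level vocabulary and `<unk>` placeholder are simplifications; production models use subword (BPE) tokenizers.

```python
# Toy tokenizer: map words to integer ids from a fixed vocabulary, with an
# <unk> placeholder for out-of-vocabulary words, plus the reverse (detokenize).
VOCAB = {"<unk>": 0, "the": 1, "bond": 2, "yield": 3, "rose": 4}
INV = {i: t for t, i in VOCAB.items()}

def encode(text):
    # Unknown words fall back to the <unk> placeholder id.
    return [VOCAB.get(word, VOCAB["<unk>"]) for word in text.lower().split()]

def decode(ids):
    return " ".join(INV[i] for i in ids)
```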
Embedded data pipelines
These are streamlined processes integrated within FinanceGPT's architecture that facilitate the seamless transformation of raw data into actionable insights. These pipelines are carefully designed to preprocess input text, manage data flow through the model's layers, and post-process the output to generate coherent and contextually appropriate responses. The pipelines handle tasks such as tokenization, embedding, and attention mechanism management.
Fine-tuning
This feature is currently only available for the On-Premise Tenant deployment model.
FinanceGPT allows for further training on a specific dataset to adapt its knowledge and improve its performance on tasks relevant to that dataset. By fine-tuning FinanceGPT on a dataset that includes bilingual or multilingual financial texts, the model learns to translate domain-specific vocabulary more accurately. Furthermore, financial language is often nuanced and context-dependent. Fine-tuning helps the model grasp these subtleties in different languages, improving the quality of translation. Lastly, financial terms can have different meanings in different contexts. Fine-tuning on context-rich examples helps the model disambiguate terms more effectively during translation.
Fine-tuning shows significant improvements in RAG by honing the model's ability to fetch and integrate more accurate and contextually relevant data into its responses.
Unique can provide a dedicated API that allows developers from our clients to customize FinanceGPT for their specific tasks or datasets.
GenAI SDK
Unique offers an SDK specifically designed for FinanceGPT, available via a public API.
Please read here more: https://unique-ch.atlassian.net/wiki/x/-wYFGg
Logs
Unique produces these types of logs:
Application logs (no CID data): sent to stdout and can be collected by log scrapers.
Audit logs (include prompts and CID data): sent to an encrypted and secured write-only storage account for compliance and investigation purposes.
DLP logs (contain CID data): available via API for data leakage prevention and analysis.
Kubernetes logs: available for collection via log scrapers.
Monitoring:
Unique provides standard Prometheus metrics per service that can be collected.