Overview of Unique FinanceGPT architecture and basic concepts.
...
...
Architecture Overview
(Drawio diagram: FinanceGPT architecture overview)
...
Unique components
Ingestion Service
...
The ingestion service is responsible for taking in files from various sources, such as web pages, SharePoint, or Atlassian products, and bringing them into the system. It handles different file types, including PDFs, Word documents, Excel files, PowerPoint presentations, text files, CSV files, and Markdown files. The service converts these files into Markdown format, preserving the structure and extracting important information like titles, subtitles, and tables. It then creates chunks out of these documents and saves them into a vector database and a Postgres database. The ingestion service also handles scalability and performance, as it needs to be able to handle large volumes of documents being ingested into the system.
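As a rough illustration of that flow, the sketch below shows how an ingestion service might route an incoming file to a type-specific worker. It is a minimal assumption-based example, not the actual Unique implementation; the `WORKER_QUEUES` names and the `publish` callback are hypothetical.

```python
# Minimal sketch (not the actual Unique implementation) of how an ingestion
# service might dispatch incoming files to type-specific workers.
from pathlib import Path

# Hypothetical mapping from file extension to a worker queue name.
WORKER_QUEUES = {
    ".pdf": "ingest.pdf",
    ".docx": "ingest.word",
    ".xlsx": "ingest.excel",
    ".pptx": "ingest.powerpoint",
    ".txt": "ingest.text",
    ".csv": "ingest.csv",
    ".md": "ingest.markdown",
}

def dispatch(file_path: str, publish) -> None:
    """Route a newly ingested file to the worker responsible for its type.

    `publish(queue, message)` stands in for whatever message broker or job
    queue the real service uses.
    """
    suffix = Path(file_path).suffix.lower()
    queue = WORKER_QUEUES.get(suffix)
    if queue is None:
        raise ValueError(f"Unsupported file type: {suffix}")
    publish(queue, {"path": file_path})
```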
More details: Ingestion
Ingestion Workers
The ingestion workers are responsible for processing different types of files that are ingested into the system. They take in files such as PDFs, Word documents, Excel files, PowerPoint presentations, text files, CSV files, and Markdown files. Each type of file needs to be processed in a different way to extract the necessary information. For example, in the case of PDFs, the ingestion workers need to extract titles, subtitles, tables, and other relevant information. The workers convert the files into Markdown format, which helps preserve the structure of the document and allows the models to better understand and generate results based on the content. The ingestion workers also create chunks out of the documents, which are then saved into a vector database and a Postgres database. This process allows for efficient storage and retrieval of the ingested documents.
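The following is a hedged sketch of that chunk-and-store step, assuming hypothetical `to_markdown`, `embed`, `vector_db`, and `pg` helpers; the real workers use more sophisticated, format-aware chunking.

```python
# Assumed sketch (not the actual worker code) of converting a document to
# Markdown, splitting it into chunks, and storing the chunks in a vector
# database and Postgres.
from dataclasses import dataclass

@dataclass
class Chunk:
    document_id: str
    order: int
    text: str

def chunk_markdown(markdown: str, max_chars: int = 1500) -> list[str]:
    """Split Markdown into chunks, preferring paragraph boundaries."""
    chunks, current = [], ""
    for paragraph in markdown.split("\n\n"):
        if len(current) + len(paragraph) > max_chars and current:
            chunks.append(current.strip())
            current = ""
        current += paragraph + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

def process(document_id: str, raw_file: bytes, to_markdown, embed, vector_db, pg):
    markdown = to_markdown(raw_file)  # type-specific conversion (PDF, Word, ...)
    for i, text in enumerate(chunk_markdown(markdown)):
        chunk = Chunk(document_id, i, text)
        # Store the embedding for semantic retrieval ...
        vector_db.upsert(id=f"{document_id}:{i}", vector=embed(text), metadata=vars(chunk))
        # ... and a relational copy for metadata queries and auditing.
        pg.insert("chunks", vars(chunk))
```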
...
Handles the organizational structure of an enterprise, allowing for the creation of departments, sub-departments, and user groups for access control.
The Org structure service in the Unique FinanceGPT platform is responsible for creating and managing the organizational structure within an enterprise. It allows the platform to mimic the departments and access rights structure of the company. This service ensures that only authorized users have access to specific documents based on their roles and permissions within the organization. It helps define the scope of access for each user and enables better quality control and data security within the platform.
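A minimal sketch of this idea, with illustrative group and scope names only, could look like the following: a user's effective scopes are the union of the scopes granted to their groups, and a document is readable only if its scope is in that set.

```python
# Hedged sketch of group/scope-based document access control.
def user_scopes(user_groups: set[str], group_scopes: dict[str, set[str]]) -> set[str]:
    """Union of all scopes granted by the groups a user belongs to."""
    scopes: set[str] = set()
    for group in user_groups:
        scopes |= group_scopes.get(group, set())
    return scopes

def can_read(document_scope: str, user_groups: set[str], group_scopes: dict[str, set[str]]) -> bool:
    return document_scope in user_scopes(user_groups, group_scopes)

# Example: the "wealth-management" group can read its own scope but not legal's.
group_scopes = {"wealth-management": {"scope:wealth"}, "legal": {"scope:legal"}}
assert can_read("scope:wealth", {"wealth-management"}, group_scopes)
assert not can_read("scope:legal", {"wealth-management"}, group_scopes)
```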
...
The chat service in the Unique FinanceGPT platform allows users to interact with the system by asking questions or making requests. It provides a chat interface where users can input their queries and receive responses from the system. The chat service also handles the retrieval of relevant documents from the knowledge center based on the user's query and presents the information in a conversational format. Additionally, the chat service keeps track of the chat history, including the prompts, responses, and any streamed information, and saves this data for auditing purposes. It also handles the theming of the chat interface to match the branding and colors of the organization using the platform.
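A simplified sketch of one chat turn follows; `retrieve`, `llm`, and `history_store` are placeholders for the retrieval, model, and persistence components, not the real service API.

```python
# Simplified sketch (assumptions only) of one chat turn: retrieve relevant
# chunks, build a prompt, call the model, and persist the history for auditing.
def answer(question: str, retrieve, llm, history_store, chat_id: str) -> str:
    chunks = retrieve(question, top_k=5)          # relevant knowledge-center chunks
    context = "\n\n".join(c["text"] for c in chunks)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    response = llm(prompt)                        # chosen model, possibly streamed
    history_store.append(chat_id, {"prompt": prompt, "response": response})  # audit trail
    return response
```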
...
The theme service is responsible for allowing users to customize the appearance of the Unique FinanceGPT platform according to their branding preferences. It enables users to set their own colors, logos, and other visual elements to create a personalized and branded experience within the platform.
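Conceptually, a theme boils down to a small set of branding values; the keys below are purely illustrative and not the actual configuration schema.

```python
# Illustrative theme settings only; the real configuration keys may differ.
THEME = {
    "primary_color": "#0A2540",
    "accent_color": "#FFB400",
    "logo_url": "https://example.com/assets/logo.svg",
    "font_family": "Inter, sans-serif",
}
```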
More details:
...
Anonymization Service
Ensures data privacy by anonymizing sensitive information in user prompts and de-anonymizing model responses.
The anonymization service in the Unique FinanceGPT platform is responsible for ensuring that sensitive information, such as customer identification data (CID), is not presented to the models during chat interactions. It works by taking the user's prompt, which may contain CID or other sensitive tokens, and replacing those tokens with anonymized placeholders. The service then sends the anonymized prompt for processing, ensuring that the models do not have access to the original sensitive information. Once the models generate a response, the anonymization service replaces the anonymized tokens with the original sensitive tokens, so the user receives a complete answer while the sensitive data never reaches the models. This ensures that the chat can be conducted securely while protecting sensitive data.
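The sketch below only illustrates the placeholder-swap idea under the assumption that the sensitive terms are already known; the real service very likely relies on more robust entity detection.

```python
# Hedged sketch of prompt anonymization / de-anonymization via placeholders.
import re

def anonymize(prompt: str, sensitive_terms: list[str]) -> tuple[str, dict[str, str]]:
    """Replace sensitive tokens with placeholders and remember the mapping."""
    mapping: dict[str, str] = {}
    for i, term in enumerate(sensitive_terms):
        placeholder = f"<PERSON_{i}>"
        mapping[placeholder] = term
        prompt = re.sub(re.escape(term), placeholder, prompt)
    return prompt, mapping

def deanonymize(response: str, mapping: dict[str, str]) -> str:
    """Put the original sensitive tokens back into the model's response."""
    for placeholder, term in mapping.items():
        response = response.replace(placeholder, term)
    return response

safe_prompt, mapping = anonymize("What is the balance of John Doe?", ["John Doe"])
# safe_prompt == "What is the balance of <PERSON_0>?"
# The model only ever sees safe_prompt; its answer is then de-anonymized:
# deanonymize("<PERSON_0> holds CHF 1'000.", mapping) == "John Doe holds CHF 1'000."
```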
Models
Utilizes various models, such as GPT-4, GPT-3.5 Turbo, LLaMA, or other open-source models.
The Unique FinanceGPT platform can connect to and use different models, such as GPT-4, GPT-3.5 Turbo, LLaMA, other open-source models, or even custom models. The connection to a model is made by sending the user's prompt, along with the relevant documents, to the chosen model. The system allows the appropriate model to be selected for each assistant or prompt. Multiple instances of the same model can be used, even in different data centers, to increase throughput and handle rate limits. The system ensures a good user experience by automatically retrying with exponential backoff if there are any issues with the models.
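As a minimal sketch of that retry behavior, the example below wraps a placeholder `call_model` function with exponential backoff and jitter; it is illustrative only, not the platform's actual retry logic.

```python
# Minimal sketch of retrying a model call with exponential backoff, as done
# when a model instance is rate-limited or temporarily failing.
import random
import time

def call_with_backoff(call_model, prompt: str, max_retries: int = 5):
    delay = 1.0
    for attempt in range(max_retries):
        try:
            return call_model(prompt)
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Exponential backoff with jitter before the next attempt.
            time.sleep(delay + random.uniform(0, 0.5))
            delay *= 2
```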
...
Benchmarking enables the client to test prompts at scale to ensure high quality (accuracy) of the output (answers). Answers are automatically compared to the ground truth and scored using LLMs and vector distance, and hallucinations are detected, so that data and model drift are caught early on.
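A hedged sketch of such a scoring step is shown below; `embed` and `llm_judge` stand in for the embedding model and the LLM-based rater, and the actual benchmarking pipeline may combine the signals differently.

```python
# Sketch of scoring a generated answer against its ground truth using
# vector distance plus an LLM judge. `embed` and `llm_judge` are placeholders.
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def score_answer(answer: str, ground_truth: str, embed, llm_judge) -> dict:
    similarity = cosine_similarity(embed(answer), embed(ground_truth))
    judge = llm_judge(  # e.g. returns a 0-1 correctness rating and hallucination flags
        f"Ground truth: {ground_truth}\nAnswer: {answer}\n"
        "Rate factual agreement from 0 to 1 and flag hallucinated claims."
    )
    return {"vector_similarity": similarity, "llm_score": judge}
```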
More details: Benchmarking
Analytics
Analytics reports (e.g., user engagement) are available via API or via the Unique UI.
More details: Analytics
Video Explainer
Resources
Unique FinanceGPT Overview: Figma Link (internal only, not publicly available)
Document Governance
...
Owner
...
Tokenizers
A tokenizer is a crucial component that processes input text to be understood by the model. It segments text into tokens, which can be words, subwords, or characters. Each token is then matched with a unique integer from a pre-established vocabulary on which the model was trained. For words not in the vocabulary, the tokenizer uses special strategies, such as breaking them down into known subwords or using a placeholder for unknown tokens. Additionally, tokenizers may encode extra information like text format and token positions to aid FinanceGPT's comprehension. Once tokenized, the sequence of integers is ready for the model to process. After FinanceGPT generates its output, a reverse process, known as detokenization, is used to convert the token IDs back into readable text.
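For example, tokenization and detokenization can be seen with the open-source tiktoken library; which tokenizer actually applies depends on the underlying model in use.

```python
# Example of tokenization/detokenization with the open-source tiktoken library.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # encoding used by GPT-4 / GPT-3.5 Turbo
token_ids = enc.encode("FinanceGPT tokenizes input text.")
print(token_ids)                             # a list of integer token IDs
print(enc.decode(token_ids))                 # "FinanceGPT tokenizes input text."
```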
Embedded data pipelines
These are streamlined processes integrated within FinanceGPT's architecture that facilitate the seamless transformation of raw data into actionable insights. These pipelines are carefully designed to preprocess input text, manage data flow through the model's layers, and post-process the output to generate coherent and contextually appropriate responses. The pipelines handle tasks such as tokenization, embedding, attention mechanism management, etc.
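Conceptually, such a pipeline is a composition of stages; the sketch below shows only that composition pattern, with the individual stage implementations left as placeholders.

```python
# Conceptual sketch of an embedded data pipeline: each stage is a function,
# applied in order. Stage implementations are placeholders.
from typing import Callable

Pipeline = list[Callable]

def run_pipeline(text: str, stages: Pipeline):
    value = text
    for stage in stages:
        value = stage(value)
    return value

# e.g. run_pipeline(user_input, [preprocess, tokenize, embed, generate, detokenize, postprocess])
```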
Fine-tuning
Note: This feature is currently only available for the On Premise Tenant deployment model.
FinanceGPT allows for further training on a specific dataset to adapt its knowledge and improve its performance on tasks relevant to that dataset. By fine-tuning FinanceGPT on a dataset that includes bilingual or multilingual financial texts, the model learns to translate domain-specific vocabulary more accurately. Furthermore, financial language is often nuanced and context-dependent. Fine-tuning helps the model grasp these subtleties in different languages, improving the quality of translation. Lastly, financial terms can have different meanings in different contexts. Fine-tuning on context-rich examples helps the model disambiguate terms more effectively during translation.
Fine-tuning shows significant improvements in RAG by honing the model's ability to fetch and integrate more accurate and contextually relevant data into its responses.
Unique can provide a dedicated API that allows developers from our clients to customize FinanceGPT for their specific tasks or datasets.
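For the bilingual translation use case above, a fine-tuning example might look like the chat-style JSONL record built below; the exact format depends on the model and fine-tuning API used, so this is an assumption-based illustration only.

```python
# Illustrative fine-tuning training example in chat-style JSONL
# (the exact format depends on the model and fine-tuning API used).
import json

example = {
    "messages": [
        {"role": "system", "content": "You translate financial documents from German to English."},
        {"role": "user", "content": "Der Emittent haftet nicht für Folgeschäden."},
        {"role": "assistant", "content": "The issuer is not liable for consequential damages."},
    ]
}
with open("finetune_train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example, ensure_ascii=False) + "\n")
```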
GenAI SDK
Unique offers an SDK specifically designed for FinanceGPT via a public API.
Please read more here: Software Development Kit (SDK)
Logs
Unique produces these types of logs:
Application logs (no CID data): Sent to stdout and can be collected by log scrapers.
Audit logs (include prompts and CID data): Sent to an encrypted and secured write-only storage account, for compliance and investigation purposes.
DLP logs (contain CID data): Available via API for Data Leakage Prevention purposes and analysis.
Kubernetes logs: Available for collection via log scrapers.
Monitoring:
Unique provides standard Prometheus metrics per service that can be collected.
...
Author