
AI Governance


1. Purpose

Our vision is to be the easiest and most inspiring way for finance teams to effectively collaborate with their clients in the era of GenAI. Our mission is to forge a synergistic alliance with leading financial institutions, creating a specialized FinanceGPT solution that embodies innovation, responsibility, and security at its core. To achieve this, AI Governance is essential: it puts in place the policies and processes that ensure our mission and vision are realised in a responsible and ethical manner.

2. Why AI Governance?

“With the economic promise and opportunity that AI brings, comes great social responsibility. Leaders across countries and sectors must collaborate to ensure it is ethically and responsibly developed, deployed and adopted.” Jan 2024, AI Governance Alliance: Briefing Paper Series, World Economic Forum


Risk management for GenAI is essential for value creation. GenAI risks that organisations considered relevant in a 2024 McKinsey survey included inaccuracy (63%), cybersecurity (51%), compliance (43%), and explainability (40%). Risk management has also become increasingly important for mitigating data privacy risks, protecting intellectual property, and minimising the ethical risks associated with AI.

Source: The state of AI in early 2024: Gen AI adoption spikes and starts to generate value, McKinsey

More information about AI Governance can be found in the blog post here.

2.1 Unique FinanceGPT: Harmonizing Value Creation and Risk Control

  1. Unique FinanceGPT has built-in AI Governance to ensure appropriate risk management.

  2. AI Governance at Unique means our AI principles and their operationalization: the processes, procedures, policies, and regulations that ensure FinanceGPT aligns with our values.

  3. Ultimately, AI Governance principles are fully integrated into all our work, including the FinanceGPT platform.

3. Key Pillars of the Unique AI Governance Framework

We have created our own Unique Responsible AI Principles based on established frameworks such as: BCG's "6+1" Responsible AI Principles; Vischer's Kurzweisung für KI-Einsatz (short directive on the use of AI); Jobin, A., Ienca, M. & Vayena, E., The global landscape of AI ethics guidelines; FINMA's supervisory expectations for AI; and Deloitte's The EU AI Act – A Point of View. Each principle has been thoroughly operationalised and built into the entire product. More information can be found here.

Within Unique, and in close co-development with our clients, each of these pillars has been operationalized so that it is not just a principle on paper but is fully integrated into the entire product. It is important to note that a Shared Responsibility Model applies to all AI Governance principles: the client and Unique share responsibility to varying degrees, depending on the operationalization strategy.

4. Operationalization

4.1 Trust

Trust is essential for upholding the other pillars: it encourages our customers to use the technology and boosts internal confidence in the initiative, which in turn improves its outcomes.

4.2 Safety & Security

Safety & Security ensures that financial services remain within the legal guardrails set by regulators, and it further strengthens trust in Unique FinanceGPT and, specifically, in our AI Governance Framework.

4.2.1 Organisational standards

4.2.2 Individual standards

  • End-User Terms and Conditions (T&Cs)​

  • Legal contracts

4.2.3 Product standards

4.3 Accountability

Accountability is essential for ensuring that each user has defined rights and obligations, and it creates a traceable trail of responsibilities. This strengthens each individual's sense of responsibility for upholding proper AI Governance and reinforces the other principles. It includes, for example, the definition of roles and responsibilities and of access rights, as sketched below.
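
As a purely illustrative sketch of what such a roles-and-access-rights definition can look like in practice, the following Python snippet maps roles to responsibilities and permissions and checks access against them. The role names, permission strings, and helper function are hypothetical and are not taken from the Unique platform.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Role:
    """A role with its responsibilities and the access rights it grants."""
    name: str
    responsibilities: tuple[str, ...]
    permissions: frozenset[str] = field(default_factory=frozenset)

# Hypothetical roles for illustration only.
ROLES = {
    "end_user": Role(
        name="end_user",
        responsibilities=("review AI outputs before sharing them with clients",),
        permissions=frozenset({"chat:use", "knowledge:read"}),
    ),
    "governance_officer": Role(
        name="governance_officer",
        responsibilities=("maintain the AI policy", "review audit logs"),
        permissions=frozenset({"chat:use", "knowledge:read", "audit:read", "policy:write"}),
    ),
}

def can(role_name: str, permission: str) -> bool:
    """Return True if the named role grants the given permission."""
    role = ROLES.get(role_name)
    return role is not None and permission in role.permissions

if __name__ == "__main__":
    print(can("end_user", "audit:read"))            # False
    print(can("governance_officer", "audit:read"))  # True
```

Keeping such definitions explicit and centralised is what makes the trail of responsibilities auditable.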

4.4 Reliability & Robustness

Reliability & Robustness encompasses the ability to review how well models perform and whether they deliver the desired outputs, for example by comparing the same use case across different underlying LLMs and by comparing performance across different use cases. This allows for systematic reviews of these models and subsequent corrective measures.
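
To make this concrete, here is a minimal, hypothetical Python sketch of such a review: it scores the same use cases against different underlying LLMs and returns a mean score per (use case, model) pair. The model names, use-case names, and scoring function are placeholders, not the Unique evaluation pipeline.

```python
from statistics import mean
from typing import Callable

def evaluate(
    use_cases: dict[str, list[tuple[str, str]]],  # use case -> [(prompt, reference answer), ...]
    models: dict[str, Callable[[str], str]],      # model name -> completion function
    score: Callable[[str, str], float],           # (model output, reference) -> score in [0, 1]
) -> dict[tuple[str, str], float]:
    """Return the mean score for each (use case, model) combination."""
    results: dict[tuple[str, str], float] = {}
    for case_name, examples in use_cases.items():
        for model_name, complete in models.items():
            scores = [score(complete(prompt), ref) for prompt, ref in examples]
            results[(case_name, model_name)] = mean(scores) if scores else 0.0
    return results

if __name__ == "__main__":
    # Toy usage with stand-in "models" and an exact-match scorer.
    def exact(out: str, ref: str) -> float:
        return 1.0 if out.strip() == ref.strip() else 0.0

    cases = {"meeting_summary": [("Summarise: we met.", "We met.")]}
    models = {"model_a": lambda prompt: "We met.", "model_b": lambda prompt: "No idea."}
    print(evaluate(cases, models, exact))
    # {('meeting_summary', 'model_a'): 1.0, ('meeting_summary', 'model_b'): 0.0}
```

In practice, the scoring function would be replaced by the evaluation criteria relevant to the use case, and the resulting score matrix is what enables the systematic reviews and corrective measures described above.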

4.5 Explainability & Transparency

Explainability & Transparency means allowing users to understand how the model reaches its outputs and ensuring that the model remains human-centered.

5. Unique's AI Governance Offering​

Unique Solution: The AI Governance framework is evaluated use case by use case and can then be adapted, extended, and enhanced based on your organisation's needs.

Unique Knowledge: With Unique, you get a built-in, tailored solution from experts who are paving the way in AI governance in the financial services industry (FSI).

Unique Community: You gain access to knowledge at the forefront of the topic and become part of a community of like-minded people and organisations, our clientele, whom we aim to bring together to share knowledge.

6. Whitepapers

https://www.unique.ch/en/blog/what-is-ai-governance-about

https://www.unique.ch/en/blog/unique-ai-governance-framework

https://www.unique.ch/en/blog/ai-governance-whitepaper-series-part3

7. AI Governance Roundtable

Unique hosted its first AI Governance Roundtable in September 2024. The roundtable convened some of the largest private banks (EFG, Julius Bär, LGT, Pictet Group and more) and insurers (e.g. AXA, Zürich) in Switzerland, as well as representatives of the State Secretariat for International Finance (SIF) and SIX Group. The discussions were built around the Unique FinanceGPT AI Governance Framework and how industry leaders were handling these pillars in their organizations. A summary of the discussions can be found here.

The aim of such initiatives is to bring the industry together to exchange ideas and best practices around AI. We hope to continue hosting such roundtables regularly.

8. AI Policy

As of November 2024, our AI Policy provides the outline for our internal use of AI. It can be read in full here:

 

This page is still a work in progress and will be gradually adjusted and extended.

 


Author: Sina Wulfmeyer, Larissa Zutter

Date: Jan 20, 2025
