EU AI Act: Unique's view

Purpose

This document describes how Unique deals with the EU AI Act.

The EU AI Act was unanimously adopted by the EU Ministers on 21 May 2024 (official announcement here). The law was published in the Official Journal of the European Union on 13 June 2024 and comes into force 20 days after publication. After coming into force, there is a transition period before it becomes applicable, the length of which depends on the type of application; for general-purpose AI systems, this period is 12 months.

Link to the full PDF document: pdf (europa.eu) (May 14 version) or pdf (europa.eu) (June 13 version).

Besides the EU AI Act, Unique also adheres to the principles of the FINMA Risk Monitor 2023 on the use of AI, namely: 1. Governance and accountability, 2. Robustness and reliability, 3. Transparency and explainability, 4. No unjustified unequal treatment.

Coverage

“The adoption of the AI act is a significant milestone for the European Union. This landmark law, the first of its kind in the world, addresses a global technological challenge that also creates opportunities for our societies and economies. With the AI act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies while at the same time ensuring this fast-changing technology can flourish and boost European innovation.”

Mathieu Michel, Belgian secretary of state for digitisation, administrative simplification, privacy protection, and the building regulation

The EU AI Act applies to all public and private firms that serve the European Union as a market. Unique also serves clients in the EU and therefore adheres to the European regulation. The legislation pursues a “horizontal approach” by creating one technology-focused regulation that covers the many impacts and use cases of AI, rather than creating tailored legislation for specific AI models or economic sectors.

Regulation

The EU AI Act classifies AI systems based on risk levels, imposing minimal transparency requirements on low-risk AI and strict obligations on high-risk AI for EU market access. AI systems posing unacceptable risks, such as cognitive behavioural manipulation and social scoring, are banned. Additionally, the law forbids AI for predictive policing based on profiling and the use of biometric data for categorizing individuals by race, religion, or sexual orientation.

The regulation divides AI systems into four categories:

  1. Unacceptable-risk AI systems

  2. High-risk AI systems

  3. Limited-risk AI systems

  4. Minimal-risk AI systems

 


Unique’s Assessment of the EU AI Act

A. Background

The EU AI Act is crucial for Unique as a SaaS provider in Switzerland because it establishes stringent regulations on AI systems that will impact Unique’s operations and compliance requirements. Like the GDPR, the EU AI Act has an extraterritorial effect, meaning that even non-EU companies, including those in Switzerland, must adhere to its standards when offering services within the EU. Ensuring compliance with the Act’s transparency obligations is essential to maintain market access and avoid potential legal repercussions, despite uncertainties around enforcement in Switzerland.

Since Unique is only involved in minimal- and low-risk use cases (Unique Use Case Factory), we have opted to conduct a "light internal Conformity Assessment" following David Rosenthal's method (Link). Unique’s aim is to demonstrate our commitment to thoroughly analyzing our GenAI platform by investing time and resources.

B. Role of Unique (DEVELOPER)

The EU AI Act states that anyone who develops an AI system is considered a “provider”. However, the definition of "developing" is unclear. For instance, does "development" include a company parameterizing a commercial AI product for its own use (e.g., setting system prompts), fine-tuning the model, or integrating it into another application (e.g., embedding chatbot software into its own website or app)?

If this were the case, then many users (including Unique) would become providers themselves, as the legal definition of provider also covers those who use an AI system for their own purposes, provided that this use takes place in the EU as intended and under their own name. Such a very broad understanding of “developing” would at least consider as “development” those actions that go beyond prompting, parameterisation and the provision of other input. This means that fine-tuning the model of an AI system would be deemed further developing the AI system, whereas the use of Retrieval-Augmented Generation (RAG), which only adds retrieved content to the model’s input, would not.
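To make this distinction concrete, the sketch below contrasts the two techniques. It is a minimal, hypothetical illustration only: the class and function names (SearchIndex, LanguageModel, answer_with_rag) are assumptions made for the example and do not describe Unique’s actual implementation. The point it illustrates is that RAG merely adds retrieved text to the model’s input at inference time, while fine-tuning would modify the model itself.

# Minimal, hypothetical sketch contrasting RAG with fine-tuning.
# All names (Document, SearchIndex, LanguageModel, answer_with_rag) are
# illustrative placeholders, not Unique's actual implementation.

from dataclasses import dataclass

@dataclass
class Document:
    title: str
    text: str

class SearchIndex:
    """Toy retrieval layer: returns documents matching words in the query."""
    def __init__(self, documents: list[Document]):
        self.documents = documents

    def search(self, query: str, top_k: int = 3) -> list[Document]:
        hits = [d for d in self.documents
                if any(w.lower() in d.text.lower() for w in query.split())]
        return hits[:top_k]

class LanguageModel:
    """Stand-in for a hosted foundation model accessed via an API."""
    def generate(self, prompt: str) -> str:
        # In practice this would be an API call; the weights stay on the
        # provider's side and are never modified by the caller.
        return f"[model answer based on a prompt of {len(prompt)} characters]"

def answer_with_rag(question: str, index: SearchIndex, model: LanguageModel) -> str:
    """RAG: retrieved text is only added to the prompt (input) at inference
    time. The underlying model is used as-is; no weights are changed."""
    context = "\n\n".join(d.text for d in index.search(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return model.generate(prompt)

# Fine-tuning, by contrast, would update the model's parameters on new
# training data and produce a new model artifact, which is why it is viewed
# as further developing the AI system.

if __name__ == "__main__":
    index = SearchIndex([Document(
        "Policy note",
        "The EU AI Act was published in the Official Journal of the EU in 2024.")])
    model = LanguageModel()
    print(answer_with_rag("When was the EU AI Act published?", index, model))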

For the moment (as legal uncertainty remains), Unique’s own view is that Unique is NOT developing an AI system and is therefore not a provider but rather a distributor.

Link to Excel:


C. Details on Procedure of Conformity Assessment

Unique performed a conformity assessment based on internal controls. The assessment was conducted by the Chief Information Security Officer (CISO) and the Chief Data Officer (CDO) in June 2024. Unique has measures in place to meet the transparency obligations of the EU AI Act. These include labeling all generated content as "AI-generated content", specifying terms and conditions for end users, and providing dedicated GenAI training with a focus on responsible AI.
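As an illustration of the content-labeling measure, the sketch below shows how a platform response could be tagged before it reaches the end user. It is a hypothetical, minimal example; the function name label_response and the response structure are assumptions, not a description of Unique’s product.

# Minimal, hypothetical sketch of the transparency labeling measure: every
# response produced by the platform is tagged as AI-generated before it is
# shown to the end user. Function and field names are illustrative only.

AI_CONTENT_LABEL = "AI-generated content"

def label_response(model_output: str) -> dict:
    """Wrap a raw model answer with the transparency label referenced in the
    measures above, so the UI can display it next to the answer."""
    return {
        "label": AI_CONTENT_LABEL,
        "content": model_output,
    }

if __name__ == "__main__":
    print(label_response("The summary of your document is ..."))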

Further resources: AI Governance, Certifications Compliance Layer 2.0

D. Summary

Unique has performed a light conformity assessment based on the method by David Rosenthal (Link). According to the results, Unique holds the role of distributor for an AI system.

Unique is not performing any of the activities listed under Article 6 of the EU AI Act and hence does not qualify as a high-risk AI system. This means that no special rules apply to the use cases developed by Unique (including those co-developed with clients).

We explicitly distance ourselves from the following use cases and do not advise clients to implement them:

  • creditworthiness assessment

  • pricing and risk assessment for life and health insurance

E. Summary on a Use Case Level

Unique follows a risk-based approach when evaluating different use cases (see also Use Case Factory (unique.ch)). For each use case, we decide on the risk category and the implementation measures associated with it. This evaluation can also be done in co-development together with our clients.
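To illustrate how such a per-use-case evaluation could be recorded, the sketch below defines a simple assessment record using the four risk tiers of the EU AI Act. The data structure and field names are hypothetical and serve only as an example of a risk-based classification, not as Unique’s actual tooling.

# Minimal, hypothetical sketch of a per-use-case, risk-based evaluation
# record. The risk tiers mirror the four categories of the EU AI Act;
# class and field names are illustrative, not Unique's actual tooling.

from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # Article 6 / Annex III systems
    LIMITED = "limited"             # transparency obligations apply
    MINIMAL = "minimal"             # no additional obligations

@dataclass
class UseCaseAssessment:
    name: str
    risk_tier: RiskTier
    measures: list[str] = field(default_factory=list)  # mitigations / controls

# Example: a document-summarisation use case assessed as limited risk,
# with the transparency labeling described above as one of the measures.
summarisation = UseCaseAssessment(
    name="Document summarisation for relationship managers",
    risk_tier=RiskTier.LIMITED,
    measures=["Label output as 'AI-generated content'", "GenAI user training"],
)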


Authors

@Sina Wulfmeyer
