While AI has become a prevalent part of everyday life, the reasoning behind a model’s decisions often remains opaque. The ability to explain how an ML model reached a given conclusion would encourage more organizations to adopt AI.
This lack of transparency is becoming a greater issue for companies as governments around the world move forward with AI regulations to protect the public from its risks. In April 2021, the European Commission proposed the first EU framework for regulating AI.
The European Parliament’s goal is to ensure the transparency, fairness, explainability, and safety of AI systems used in the EU. Lawmakers are fully committed to regulation that safeguards individuals, society, and the environment from the risks AI poses. Now that lawmakers have reached an agreement, the AI Act will be the world’s first comprehensive set of regulations governing AI.
A Risk-Based Approach
The regulation adopts a risk-based approach, with different rules depending on the assessed level of risk of the AI system. The act establishes a framework that the Parliament broke down into four categories: unacceptable risk, high risk, AI with specific transparency obligations (limited risk), and minimal or no risk.
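The act itself prescribes no technical mechanism, but the four-tier taxonomy maps naturally onto a simple data structure. Below is a minimal Python sketch of that taxonomy; the enum names, the example use-case mapping, and the obligation summaries are our own hypothetical labels drawn from this article, not terms defined by the regulation.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """Hypothetical labels for the AI Act's four risk categories."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # assessed before and throughout deployment
    LIMITED = "limited"            # specific transparency obligations
    MINIMAL = "minimal"            # no additional legal obligations

def obligations(tier: AIActRiskTier) -> str:
    """Summarize each tier's obligations as described in this article."""
    return {
        AIActRiskTier.UNACCEPTABLE: "Prohibited in the EU.",
        AIActRiskTier.HIGH: "Assessment before market entry, plus lifecycle evaluation.",
        AIActRiskTier.LIMITED: "Inform users that they are interacting with AI.",
        AIActRiskTier.MINIMAL: "No additional obligations; disclosure is encouraged.",
    }[tier]

# Illustrative mapping of example use cases to tiers (our own reading,
# not an official classification):
EXAMPLES = {
    "social_scoring": AIActRiskTier.UNACCEPTABLE,
    "medical_device_triage": AIActRiskTier.HIGH,
    "customer_service_chatbot": AIActRiskTier.LIMITED,
    "email_spam_filter": AIActRiskTier.MINIMAL,
}

print(obligations(EXAMPLES["email_spam_filter"]))
```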
Unacceptable Risk
The act states that AI systems classified under “Unacceptable Risk” are considered a threat to people and will be banned. Such systems include:
Cognitive behavioral manipulation of people or vulnerable groups, e.g., voice-activated toys that encourage dangerous behavior
Social scoring: classifying people based on socio-economic status, personal characteristics, or behavior
Biometric identification and categorization of people
Real-time remote biometric identification systems, such as facial recognition
Some exceptions may be allowed, such as “post” remote biometric identification systems, where identification occurs after a delay; if approved by a court, these can be used to prosecute serious crimes.
High Risk
Any AI system that negatively affects safety or fundamental rights will be categorized as “High Risk.” The act separates these systems into two categories:
AI systems used in products that fall under the EU’s product safety legislation such as toys, aviation, cars, and medical devices.
AI systems that fall into seven specific areas and must be registered in an EU database:
Management and operation of critical infrastructure
Education and vocational training
Employment, worker management, and access to self-employment
Access to essential private services, public services, and benefits
Law enforcement
Migration, asylum, and border control management
Assistance in legal interpretation and application of the law
All high-risk AI systems will undergo an assessment before they are put on the market and will continue to be evaluated throughout their lifecycle. Parliament expanded the high-risk category to include areas that could affect a person’s health, safety, fundamental rights, or the environment. It also added systems intended to influence voters in political campaigns, as well as recommender systems employed by social media platforms.
Generative AI
Generative AI systems, such as ChatGPT, will need to comply with transparency requirements (a minimal disclosure sketch follows the list). These include:
Stating the content was generated by AI
Designing the model to prevent the generation of illegal content
Publishing summaries of copyrighted data used for training to ensure transparency regarding the sources and types of data used
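In practice, the first of these obligations is often met by attaching both a machine-readable flag and a human-readable notice to every generated output. The snippet below is a minimal sketch of that idea, assuming a simple text pipeline; the function name, dataclass, and notice wording are hypothetical, since the act requires disclosure but does not prescribe an exact format.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class GeneratedContent:
    """Hypothetical record pairing output text with its AI disclosure."""
    text: str
    model_name: str
    created_at: str
    ai_generated: bool = True  # machine-readable disclosure flag

def label_output(raw_text: str, model_name: str) -> GeneratedContent:
    # The wording of the notice is our own; the act does not prescribe phrasing.
    notice = f"[This content was generated by AI ({model_name}).]"
    return GeneratedContent(
        text=f"{notice}\n{raw_text}",
        model_name=model_name,
        created_at=datetime.now(timezone.utc).isoformat(),
    )

labeled = label_output("Here is a draft of your email...", "example-llm-1")
print(labeled.text)
```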
High-impact General-Purpose AI models that may pose systemic risk must undergo thorough evaluations, and any serious incidents must be reported to the European Commission.
Limited Risk
AI systems that qualify as limited risk need to comply with minimal transparency requirements that allow users to make informed decisions. These systems have to inform users that they are interacting with AI, and users can then decide whether they want to continue using the application after their first interaction with it.
Limited-risk AI systems include those that generate or manipulate image, audio, or video content; deepfakes, for example, fall into this category.
Minimal Risk
All other AI systems, such as spam filters, fall into the minimal risk category. These systems can be developed in the EU without adhering to additional legal obligations. However, the AI Act encourages providers of minimal-risk systems to give users a disclaimer that they are engaging with AI.
Rules for General-Purpose AI Models
After several debates on the regulation of “foundation models,” or general-purpose AI (GPAI), the Parliament and the Council reached a compromise with an amended, tiered approach.
The first tier requires all GPAI providers to comply with transparency requirements by sharing technical documentation. They must also follow EU copyright law and provide detailed summaries of how the model was trained. GPAI models that pose minimal risk are exempt from these transparency requirements while still in the research and development phase or if they are open-source.
The second tier covers GPAI models that pose systemic risk. Providers of these models must conduct model evaluations to assess and mitigate systemic risks, perform adversarial testing, and report any serious incidents to the Commission.
Pillars of Trustworthy AI
The EU AI Act also outlines the seven pillars of trustworthy AI based on fundamental rights and ethical principles. The list includes:
Human Agency and Oversight: AI systems should help people make choices and decisions, respecting their freedom. They should empower individuals, uphold their rights, and let humans oversee their actions.
Technical Robustness and Safety: AI systems must be developed to minimize risks and perform as intended to avoid unintentional and unexpected harm to individuals, society, and the environment.
Privacy and Data Governance: AI systems must respect privacy rights and comply with appropriate data governance measures.
Transparency: Providers of AI systems must make certain information available to users, including what data and algorithms were used to develop the model, how the model reached a decision, and a disclaimer that the user is interacting with AI (see the sketch after this list).
Diversity, Non-discrimination, and Fairness: AI systems must enable inclusion and diversity throughout the entire AI lifecycle, promoting fairness and avoiding discrimination.
Societal and Environmental Well-being: AI systems must contribute to the well-being of individuals and the environment for present and future generations.
Accountability: Providers of AI systems must have the ability to explain a system’s decisions and be accountable for its outcomes to ensure responsibility throughout the AI lifecycle.
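The transparency and accountability pillars, in particular, translate naturally into structured model documentation. Below is a minimal sketch of such a record, loosely inspired by the “model card” pattern; the field names and the example values are our own assumptions about what a provider might disclose, not a schema defined by the act.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDisclosure:
    """Hypothetical documentation record covering the transparency pillar."""
    model_name: str
    intended_purpose: str
    training_data_summary: str      # sources and types of data used
    decision_logic_summary: str     # how the model reaches a decision
    human_oversight_measures: list[str] = field(default_factory=list)
    ai_interaction_notice: str = "You are interacting with an AI system."

# Example values are invented for illustration only.
card = ModelDisclosure(
    model_name="credit-scoring-v2",
    intended_purpose="Estimate loan default risk for retail applicants.",
    training_data_summary="Anonymized loan outcomes, 2015-2022, EU only.",
    decision_logic_summary="Gradient-boosted trees; top features reported per decision.",
    human_oversight_measures=["Analyst review of all automated declines"],
)
print(card.ai_interaction_notice)
```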
What Is the Future of AI in the EU?
While the legislation presents a challenging balancing act between innovation and protecting society, members of the European Parliament have introduced exemptions to the regulations for research activities and for AI components distributed under open-source licenses, with the aim of fostering AI innovation.
The legislation also encourages the establishment of “regulatory sandboxes”: controlled environments, facilitated by public authorities, where AI systems can be tested before deployment.
On December 9, 2023, the Parliament reached a provisional agreement with the Council on the AI Act, a significant piece of legislation that fits into the ongoing development of an AI regulatory framework to ensure AI is used ethically, safely, and beneficially.
On March 13, 2024, the Parliament officially voted to pass the AI Act. Most of its provisions will apply only after a two-year grace period for compliance; however, the regulation’s prohibitions will take effect after six months, and obligations for General-Purpose AI systems after 12 months.
During the grace period, the Member States and the Union will work to publish guidance on implementing the AI Act and to establish effective oversight structures.
Prepare for the AI Act with Citrusˣ
The AI Act will shape the future of AI not only in the EU but globally. With the help of Citrusˣ’s compliance and explainability tools, keeping your model within the parameters set by the legislation is easy and streamlined.
With Citrusˣ, you can be prepared for the AI Act. We provide conclusive, verifiable reports that comply with specific regulations. AI regulations and guidelines require the ability to demonstrate and explain decisions, to verify that there are no biases or vulnerabilities in a model, and to ensure its stability. Citrusˣ enables a thorough risk assessment and pinpoints problematic issues so you can evaluate the true status of your model in each of these areas.