Article by Malak Trabelsi Loeb
The European Parliament (EP) has taken a momentous step by approving its negotiating mandate, known as the “EP’s Position,” for the European Union’s Proposal for a Regulation on Artificial Intelligence (AI Act) on June 14, 2023. This endorsement signifies the beginning of negotiations among European Union (EU) institutions and marks a significant milestone in shaping the future of Artificial Intelligence (AI) regulation. The final version of the AI Act is expected to be released by the end of 2023, solidifying the EU’s comprehensive framework for governing artificial intelligence.
The EP’s Position introduces several notable amendments to the original Commission text, providing valuable insights into the changes put forth by the EP. Let us delve into some of the core amendments put forward by the EP (the overview below is not exhaustive).
1. Key Definition Amendments
Under the EP’s Position, the definitions within Article 3 of the AI Act have undergone substantial revision. These revisions encompass the following key points:
- The definition of an “AI system” now aligns with the OECD’s definition: a machine-based system designed to operate with varying levels of autonomy that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.
- The term “users” has been replaced with “deployers” to refer to individuals or entities utilizing AI systems.
- The term “operator” encompasses various roles involved in the AI system, including the provider, deployer, authorized representative, importer, and distributor.
- The EP’s Position takes a broader approach to defining a “serious incident.” It now includes incidents or malfunctions of an AI system that directly or indirectly lead, might have led, or might lead to a range of outcomes. These outcomes include the death of a person or serious harm to a person’s health, severe disruption to the management and operation of critical infrastructure, breaches of fundamental rights protected under Union law, or significant damage to property or the environment.
By revising and expanding these definitions, the EP’s Position aims to provide a more comprehensive framework for regulating AI systems within the AI Act.
2. Newly Introduced Key Definitions
Within the EP’s Position, several supplementary definitions are presented, expanding the terminology used in the AI Act. These include the following:
- The term “affected persons” pertains to individuals or groups who are subjected to or affected by an AI system, recognizing their involvement and impact within the regulatory framework.
- The term “general purpose AI system” denotes an AI system capable of versatile applications beyond its originally intended design, showcasing its ability to be utilized across diverse scenarios.
- The concept of a “foundation model” is introduced, referring to an AI model trained on extensive and diverse data sets, enabling it to generate adaptable and versatile outputs across a range of tasks.
By incorporating these additional definitions, the EP’s Position aims to enhance clarity and precision in understanding the different facets of AI systems covered by the AI Act, ensuring a comprehensive approach to their regulation and implementation.
3. Amendments to the Categorization of AI Systems
The EP has proposed considerable modifications to the catalogue of prohibited AI systems delineated in the AI Act, including noteworthy additions aimed at promoting ethical AI applications and responsible use. These additions encompass the following:
- Biometric Categorization Systems: The EP’s Position prohibits systems that categorize individuals according to sensitive or protected attributes or characteristics on the basis of their biometric or biometrics-based data, or data that can be inferred from such information. This stance underscores the importance of preserving privacy and forestalling discrimination premised on personal attributes.
- Facial Recognition Databases: The EP’s Position responds to apprehensions about facial recognition technology by barring AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from online sources or CCTV footage. This step is designed to protect privacy and curb the unauthorized utilization of personal data.
4. General Principles Governing AI Systems
The EP’s Position lays down six fundamental principles, under Article 4, that apply to all AI systems governed by the AI Act. These principles encompass a broad range of aspects to ensure responsible and ethical AI implementation. The core principles introduced by the EP are as follows:
- Human Agency and Oversight: Emphasizing the importance of human control and decision-making in AI systems, ensuring that humans retain ultimate authority and responsibility.
- Technical Robustness and Safety: Promoting the development and deployment of AI systems that are secure, reliable, and resilient against errors, biases, and risks.
- Privacy and Data Governance: Safeguarding individuals’ privacy rights and establishing robust data governance measures to ensure the lawful and ethical use of data in AI systems.
- Transparency: Encouraging transparency in AI systems by promoting clear explanations of AI-generated outcomes, fostering trust, and enabling accountability.
- Diversity, Non-discrimination, and Fairness: Promoting AI systems that are fair, unbiased, and inclusive, and ensuring they do not discriminate based on characteristics such as race, gender, or disability.
- Social and Environmental Well-being: Encouraging the development and deployment of AI systems that contribute to societal well-being, sustainability, and environmental preservation.
These principles serve as a comprehensive framework to guide the regulation and implementation of AI systems, addressing key considerations to foster trust, accountability, and the responsible use of AI technology in various contexts.
5. Amendments to the Obligations Associated with High-Risk AI Systems
The EP broadens the scope of AI systems and applications classified as high-risk, which now encompasses certain AI systems employed by major social media platforms to provide content recommendations to users.
Per the EP, providers of specific AI systems have the opportunity to challenge the presumption of their system being classified as high-risk. To initiate this process, they must submit a notification to a supervisory authority or the AI Office (in the case of systems intended for use in multiple Member States). The designated authority will review the notification and respond within three months, determining whether the AI system should indeed be considered high risk.
Significant adjustments are introduced by the EP regarding the obligations imposed on providers of high-risk AI systems. These adjustments include:
- Ensuring Awareness of Risks: It is mandated that individuals responsible for overseeing high-risk AI systems have a clear understanding of the risks associated with automation or confirmation bias. This requirement emphasizes the importance of human oversight in mitigating potential risks and biases.
- Specifications for Input Data: Providers are now obligated to supply specifications for input data and relevant information about the datasets used. This includes considering the AI system’s limitations, assumptions, and potential misuse, ensuring transparency and accountability in its operations.
- Compliance with Accessibility Requirements: High-risk AI systems must comply with accessibility requirements to ensure equal access and usability for individuals with disabilities or specific needs.
Additionally, the EP expands the obligations of deployers who utilize high-risk AI systems. These expanded obligations encompass the following:
- Informing Individuals and Right to Explanation: Deployers are required to inform individuals subjected to high-risk AI systems about their exposure to such systems. Individuals also have the right to seek explanations regarding the outputs generated by the system, promoting transparency and accountability.
- Consulting Workers’ Representatives: Deployers must consult workers’ representatives and inform employees about implementing high-risk AI systems in the workplace before their use. This aims to foster dialogue and address any potential concerns or impacts on employees.
- Fundamental Rights Impact Assessment (FRIA): The EP mandates the conduct of a Fundamental Rights Impact Assessment (FRIA) for high-risk AI systems. This assessment involves evaluating the potential impact on fundamental rights, identifying risks to marginalized individuals or vulnerable groups, and developing plans to mitigate any identified harms. It also entails establishing a governance system incorporating human oversight, complaint handling, and redress mechanisms.
These adjustments introduced by the EP strengthen the obligations imposed on providers and deployers of high-risk AI systems, promoting responsible and ethical practices while safeguarding fundamental rights and ensuring transparency throughout the AI deployment process.
Certain deployers must conduct a Fundamental Rights Impact Assessment before putting a high-risk AI system into use. Under the obligations introduced by the EP in Article 29a, this assessment must cover at least the following elements (a simple checklist sketch follows the list):
- A clear outline of the intended purpose, geographic scope, and timeframe of system use.
- Identification of categories of individuals or groups likely to be affected by the system.
- Verification of compliance with relevant Union and national laws on fundamental rights.
- Evaluation of the foreseeable impact on fundamental rights resulting from the high-risk AI system.
- Identification of specific risks to marginalized individuals or vulnerable groups.
- Assessment of the potential adverse impact on the environment.
- A detailed plan to mitigate identified harms and negative impacts on fundamental rights.
- Establishment of a governance system incorporating human oversight, complaint handling, and redress mechanisms.
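To make these minimum elements more concrete, the sketch below models them as a simple internal checklist in Python. This is purely illustrative: the class, the field names, and the `missing_elements` helper are hypothetical conveniences of ours, not anything defined or required by the AI Act.

```python
from dataclasses import dataclass, fields

# Illustrative only: an internal checklist mirroring the eight minimum
# FRIA elements in the EP's proposed Article 29a. All field names are
# hypothetical and are not taken from the legal text.
@dataclass
class FriaChecklist:
    intended_purpose_scope_timeframe: str = ""  # purpose, geography, duration of use
    affected_categories: str = ""               # individuals/groups likely affected
    legal_compliance_check: str = ""            # Union and national fundamental-rights law
    foreseeable_rights_impact: str = ""         # foreseeable impact on fundamental rights
    marginalized_group_risks: str = ""          # specific risks to vulnerable groups
    environmental_impact: str = ""              # potential adverse environmental effects
    mitigation_plan: str = ""                   # plan to mitigate identified harms
    governance_system: str = ""                 # human oversight, complaints, redress

    def missing_elements(self) -> list[str]:
        """Return the names of elements not yet documented."""
        return [f.name for f in fields(self) if not getattr(self, f.name).strip()]

fria = FriaChecklist(intended_purpose_scope_timeframe="Resume screening, EU-wide, 2024-2026")
print(fria.missing_elements())  # seven elements still to be documented
```

A deployer could of course track these elements in a document or register instead; the point is only that the EP specifies a fixed minimum set of items, which lends itself to a checklist-style completeness check.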
During the Fundamental Rights Impact Assessment preparation, deployers may need to engage with supervisory authorities, consumer protection agencies, and data protection agencies, among other external stakeholders.
Furthermore, the EP has introduced Article 28b, which establishes obligations for providers of foundation models before they place those models on the market or put them into service.
- First, providers must ensure that these models meet a set of requirements. These requirements apply no matter how the model is delivered, be it standalone, embedded in another AI system or product, offered under open-source licenses, or as a service.
- Providers must demonstrate their efforts toward identifying, minimizing, and mitigating any potential risks to health, safety, fundamental rights, the environment, democracy, and the rule of law. They should do this through careful design, testing, and analysis, and may involve independent experts. Any risks that cannot be mitigated must be documented.
- Providers should only use datasets that have undergone proper data governance measures to scrutinize any potential biases and assess the appropriateness of data sources.
- The AI model must be designed and developed to meet acceptable levels of performance, predictability, interpretability, corrigibility, safety, and cybersecurity throughout its lifecycle.
- The AI model must adhere to applicable standards to minimize energy and resource usage and waste while enhancing energy efficiency.
- Detailed technical documentation and clear instructions must be provided by the provider to enable downstream providers to comply with their obligations.
- A quality management system must be established by the provider to ensure and document compliance with this article.
- The AI model should be registered in the EU database according to specific instructions.
- Providers must keep the technical documentation for 10 years after the model has been launched.
- Providers of generative AI systems (those used to autonomously generate complex content) must additionally comply with transparency requirements, provide adequate safeguards against the generation of unlawful content, and make publicly available a detailed summary of the use of copyrighted training data.
The EP’s Position sets a comprehensive framework for AI providers, establishing standards for the responsible, transparent, and safe use of foundation models. These obligations act as significant checks and balances in the AI landscape, providing a clear pathway for the responsible evolution of AI technologies.
6. Obligations Associated with Generative AI
The EP’s Position imposes specific obligations on generative AI systems to ensure responsible and lawful use. These obligations encompass the following (the disclosure obligation is illustrated in a short sketch after the list):
- Disclosure of AI-Generated Content: Generative AI systems are required to disclose when content is generated by AI, enabling transparency and informing users about the origin of the information they encounter.
- Prevention of Illegal Content: Generative AI systems must be designed and programmed to prevent generating illegal content. This helps maintain legality and ethical standards in AI-generated outputs.
- Publication of Summaries of Copyrighted Data: Providers of generative AI systems must publish summaries of copyrighted data used in the training process. This promotes transparency and encourages accountability in the use of copyrighted materials.
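As a rough illustration of the disclosure obligation, a provider might attach both a machine-readable flag and a human-readable notice to every generated output. The sketch below is a hypothetical pattern of our own devising, not a mechanism prescribed by the AI Act.

```python
from dataclasses import dataclass

# Hypothetical pattern for the disclosure obligation: every generated
# output carries an explicit AI-generated marker. Nothing here is
# mandated in this form by the AI Act; it only illustrates the idea.
@dataclass
class GeneratedContent:
    text: str
    ai_generated: bool = True  # machine-readable disclosure flag
    notice: str = "This content was generated by an AI system."

def render(content: GeneratedContent) -> str:
    """Attach the human-readable disclosure to the output text."""
    return f"{content.text}\n\n[{content.notice}]"

print(render(GeneratedContent(text="Draft summary of the meeting...")))
```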
By incorporating these provisions, the EP’s Position addresses crucial aspects related to prohibited AI practices and the responsible use of generative AI systems, ensuring compliance with legal, ethical, and privacy considerations.
7. Exclusion of Unfair Contractual Terms in AI Contracts with SMEs or Startups
The EP’s Position includes a provision designed to protect SMEs and startups from unfair contractual terms unilaterally imposed by contracting parties providing tools, services, components, or processes that are integrated into high-risk AI systems. Prohibited provisions include the following:
- Exemption or Limitation of Liability: Clauses that exclude or limit the liability of the party that imposed the term for gross negligence or intentional acts are prohibited. This ensures that parties cannot evade responsibility for intentional wrongdoing or gross negligence.
- Exclusion of Remedies: Provisions excluding the remedies available in the case of non-performance or breach of contractual obligations are not permitted. This guarantees that parties have access to appropriate remedies when contractual obligations are not fulfilled or are breached.
- Granting Exclusive Authority: Clauses granting exclusive authority to determine conformity with the contract or interpret its terms to the party imposing the term are also prohibited. This prevents a single party from having unilateral control over the interpretation and application of the contract, ensuring fairness and balanced decision-making.
These measures are implemented to promote fair and equitable contractual relationships between SMEs or startups and contracting parties involved in high-risk AI systems. By prohibiting these unfair provisions, the EP’s Position aims to safeguard the interests and rights of smaller entities, fostering a more balanced and transparent business environment.
8. Measures to Support Innovation
The AI Act emphasizes fostering innovation and includes provisions for AI regulatory sandboxes, which are expanded and clarified under the EP’s Position. A noteworthy element is the requirement for EU Member States to actively promote research and development of AI solutions that generate positive social and environmental impacts. This entails initiatives such as improving accessibility for individuals with disabilities, tackling socio-economic disparities, and advancing sustainability and environmental objectives. The EP’s Position reinforces the commitment to harness the potential of AI in a responsible and beneficial manner, aligning technological progress with societal well-being and sustainable development goals.
9. Fines
The EP’s Position introduces significant amendments to the fines applicable under the AI Act. In each tier, the applicable cap is the higher of a fixed amount and a percentage of the offender’s worldwide annual turnover; a simple illustration of this arithmetic follows the list. As per the EP’s proposals:
- Violations of the prohibited AI practices referred to in Article 5 would result in administrative fines of up to €40,000,000 or, if the offender is a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever amount is higher.
- Breaches of the rules outlined in Article 10 (data and data governance) and Article 13 (transparency and provision of information to users) would lead to administrative fines of up to €20,000,000 or, if the offender is a company, up to 4% of its total worldwide annual turnover for the preceding financial year, whichever amount is higher.
- Non-compliance of an AI system or foundation model with obligations or requirements under the AI Act other than those laid down in Articles 5, 10, and 13 would be subject to administrative fines of up to €10,000,000 or, if the offender is a company, up to 2% of its total worldwide annual turnover for the preceding financial year, whichever amount is higher.
- Additionally, providing incorrect, incomplete, or misleading information to notified bodies and competent national authorities in response to a request may result in administrative fines of up to €5,000,000 or, if the offender is a company, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever amount is higher.
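Because each tier applies “whichever amount is higher,” the cap for a company works out to max(fixed ceiling, percentage × worldwide annual turnover). The sketch below illustrates this arithmetic with a hypothetical turnover figure; it is a simplification for illustration, not legal advice.

```python
# Illustrative arithmetic for the EP's proposed fine caps: each tier is
# "whichever amount is higher" between a fixed ceiling and a share of
# worldwide annual turnover (for companies). Turnover figure is hypothetical.

TIERS = {
    "Article 5 (prohibited practices)": (40_000_000, 0.07),
    "Articles 10 and 13": (20_000_000, 0.04),
    "Other obligations": (10_000_000, 0.02),
    "Incorrect/misleading information": (5_000_000, 0.01),
}

def max_fine(tier: str, worldwide_turnover: float) -> float:
    """Return the applicable cap: the higher of the fixed amount
    and the turnover-based amount."""
    fixed, pct = TIERS[tier]
    return max(fixed, pct * worldwide_turnover)

# A company with a hypothetical €2 billion worldwide annual turnover:
print(max_fine("Article 5 (prohibited practices)", 2_000_000_000))  # 140000000.0
```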
It is worth noting that the EP’s Position emphasizes that penalties under the AI Act, including fines, and the associated litigation costs and indemnification claims cannot be made subject to contractual clauses or other forms of burden-sharing agreements between providers, distributors, importers, deployers, or any other third parties.
10. Reinforced Remedies
The EP’s Position introduces a new chapter focused on providing remedies to individuals affected by potential breaches. A notable provision grants these individuals the right to lodge complaints with supervisory authorities, drawing parallels to the GDPR framework.
11. Roles and Responsibilities of the AI Office
Under Article 56b, the EP introduces a comprehensive and multi-faceted role for the AI Office, assigning it critical responsibilities that broadly fall under advisory, monitoring, enforcement, capacity-building, and public-awareness tasks.
- Advisory Role: As an advisor, the AI Office aids various stakeholders, including Member States and national supervisory authorities, in understanding and implementing the regulation. This indicates the importance of a centralized knowledge hub that can guide different entities and ensure consistent interpretation of the regulation.
- Monitoring and Enforcement: The regulation stresses the need for consistent application of AI regulations across the EU, minimizing the risk of divergent interpretation and implementation. Joint investigations signal the seriousness of enforcement and the cooperative approach taken by the office.
- Mediation: The AI Office acts as a conflict resolution entity. Disagreements about applying the regulation could potentially disrupt the consistent application of the AI Act; hence a mediator’s role becomes crucial.
- International Collaboration: This reflects the global nature of AI technologies and the need to harmonize standards and practices with international entities.
- Capacity Building: This reflects the recognition that the regulation’s implementation is not just about rules but also about building the expertise and knowledge base necessary for competent enforcement and innovation-friendly regulation.
- Guidance and Recommendations: This task acknowledges the fast-evolving nature of AI technology and the need for the regulation to adapt and provide guidance in line with technological advancements and emerging trends.
- Annual Reporting: This provides a mechanism for regular evaluation and public transparency, ensuring accountability in the system.
- Regulatory Sandbox Assistance: This points towards fostering innovation while ensuring compliance and providing a safe space for experimental learning.
- Public Awareness: Promoting AI literacy and awareness is critical in ensuring public trust in AI systems and their governance and in enabling the exercise of individual rights.
- Oversight of Foundation Models: This demonstrates awareness of the potential challenges posed by complex AI models, which require specific oversight and guidance.
The AI Office’s roles and responsibilities are designed to strike a balance between promoting AI innovations and safeguarding the public interest. The tasks suggest an adaptable, collaborative, transparent, and comprehensive approach to AI governance.
To conclude, the European Parliament’s Position on the EU AI Act introduces key revisions, including expanded definitions, prohibitions on biometric categorization systems and facial recognition databases, general principles for AI systems, enhanced obligations for high-risk AI systems, protection against unfair contractual terms, support for innovation, and fines for non-compliance. These amendments aim to establish a comprehensive framework for responsible and ethical AI governance in the European Union.

In an upcoming exploration, we will immerse ourselves in the EU AI Act, focusing specifically on its scope of application. We aim to untangle the intricacies of where and how this groundbreaking legislation applies, an area ripe for intense debate and spirited discussion among stakeholders. Stay tuned for this deep-dive analysis in our forthcoming issue, as we elucidate the boundaries of this pivotal regulation in the ever-evolving landscape of Artificial Intelligence.