The EU AI Act (the “AI Act”) is the world’s first comprehensive AI law. The Act lays down a harmonised legal framework for the development, supply, and use of AI products and services in the EU.  

To whom does the AI Act apply? 

The legal framework will apply to all AI systems that affect people in the EU, regardless of where those systems are developed or deployed. 

When will the AI Act take effect? 

The AI Act is currently expected to enter into force in Q2-Q3 2024, with its obligations then taking effect in stages: prohibitions on unacceptable-risk systems after six months, rules for general-purpose AI after 12 months, and most remaining obligations after 24 to 36 months. 

Understanding the AI Act’s Objectives 

The AI Act seeks to achieve four specific objectives: 

  • Ensuring that AI systems placed on the EU market are safe and respect existing EU law; 
  • Ensuring legal certainty to facilitate investment and innovation in AI; 
  • Enhancing governance and effective enforcement of EU law on fundamental rights and safety requirements applicable to AI systems; and  
  • Facilitating the development of a single market for lawful, safe, and trustworthy AI applications and preventing market fragmentation.  

AI Act: different rules for different risk levels 

The new rules establish obligations for providers and users depending on the level of risk posed by the AI system. All AI systems must be assessed to determine their risk level, even though many will pose only minimal risk. 

 
1. Unacceptable risk 

Unacceptable-risk AI systems are those considered a threat to people; they will be banned. 

They include: 

  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behaviour in children. 
  • Social scoring: classifying people based on behaviour, socio-economic status, or personal characteristics. 
  • Biometric categorisation of people based on sensitive characteristics, such as political or religious beliefs, sexual orientation, or race. 
  • Real-time and remote biometric identification systems, such as facial recognition. 

Some exceptions may be allowed for law enforcement purposes. “Real-time” remote biometric identification systems will be allowed in a limited number of serious cases, while “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed only for the prosecution of serious crimes and only after court approval. 

2. High risk 

AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories: 

1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts. 

2) AI systems falling into specific areas that will have to be registered in an EU database: 

  • Management and operation of critical infrastructure 
  • Education and vocational training 
  • Employment, worker management and access to self-employment 
  • Access to and enjoyment of essential private services and public services and benefits 
  • Law enforcement 
  • Migration, asylum and border control management 
  • Assistance in legal interpretation and application of the law. 

 
All high-risk AI systems will be assessed before being placed on the market and throughout their lifecycle. 

3. General purpose and generative AI 

Generative AI, like ChatGPT, would have to comply with transparency requirements: 

  • Disclosing that the content was generated by AI. 
  • Designing the model to prevent it from generating illegal content. 
  • Publishing summaries of copyrighted data used for training. 

High-impact general-purpose AI models that might pose systemic risk, such as the more advanced GPT-4, would have to undergo thorough evaluations, and any serious incidents would have to be reported to the European Commission. 

4. Limited risk 

Limited-risk AI systems must comply with minimal transparency requirements that allow users to make informed decisions: users should be made aware when they are interacting with AI, and can then decide whether they want to continue using the application. This includes AI systems that generate or manipulate image, audio, or video content, for example deepfakes. 

Opportunities 

Ethical Leadership: Organisations that prioritise ethical AI practices and demonstrate a commitment to responsible innovation can enhance their reputation and build trust with consumers, employees, and regulators. By aligning with the principles of the AI Act, organisations can position themselves as leaders in ethical AI deployment. 

Innovation and Differentiation: The AI Act promotes regulatory sandboxes and real-world testing, providing opportunities for organisations to innovate and develop AI solutions in a controlled environment. Companies that invest in compliance and develop AI systems that meet the AI Act’s standards can differentiate themselves in the market and gain a competitive edge. 

Market Expansion: Compliance with the AI Act allows organisations to access the European market with confidence, as they demonstrate adherence to regulatory requirements and respect for fundamental human rights. This opens opportunities for expansion and growth in a region that values ethical AI practices. 

Talent Acquisition: Companies that invest in talent acquisition and training to support compliance with the AI Act can attract top-tier professionals with expertise in AI governance, ethics, and regulatory compliance. Building a skilled workforce capable of navigating the complexities of AI regulation is essential for long-term success. 

The AI Act represents a real opportunity for organisations that are looking to leverage the power of AI. However, there are also some threats that business leaders need to consider. 

Threats 

Compliance Costs: The AI Act imposes significant compliance costs on organisations, including overhead expenses related to risk assessments, governance frameworks, and regulatory reporting. Companies that fail to allocate sufficient resources to compliance may face financial strain and operational challenges. 

Fines and Penalties: Non-compliance with the AI Act can result in substantial fines of up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations, with lower tiers of up to €15 million or 3% of turnover, and up to €7.5 million or 1% of turnover, for lesser breaches. Organisations that neglect the AI Act’s requirements or underestimate the severity of regulatory violations risk facing severe financial penalties that could impact their bottom line and reputation. 

Operational Disruption: Implementing robust governance and oversight measures to ensure compliance with the AI Act may require operational adjustments and process changes. Organisations that fail to adapt their operations to meet the AI Act’s standards may experience disruption and inefficiencies that hinder productivity and competitiveness. 

Reputational Damage: Violations of the AI Act’s ethical standards or failures to comply with regulatory requirements can lead to reputational damage and loss of consumer trust. Organisations that are perceived as prioritising profit over ethics or disregarding fundamental human rights may face backlash from stakeholders and damage to their brand reputation. 

Conclusion  

While the AI Act presents opportunities for organisations to demonstrate ethical leadership, drive innovation, and access new markets, it also poses significant threats in terms of compliance costs, fines, operational disruption, and reputational damage. By proactively addressing these challenges and investing in compliance with the AI Act, organisations can navigate the regulatory landscape successfully and leverage AI technologies responsibly for long-term growth and sustainability. 

For more information on Calligo’s Data Ethics and Governance and AI solutions, visit https://www.calligo.io