The EU AI Act Is About to Hit the Books: Compliance Steps You Need to Know

The EU AI Act has ushered in a new era for AI governance. After three years of deliberations on how to regulate AI to safeguard citizens, businesses, and government agencies from potential risks, the Act is about to officially become law – setting a new standard for AI policy globally.

IBM welcomed the Act and its risk-based approach to regulating AI. That approach aligns with our work on AI ethics, which shows that openness, transparency, and accountability are the hallmarks of best-practice AI deployment.

While the Act will soon be published in the Official Journal of the European Union and become law 20 days later, it will take up to three years for all aspects of the legislation to come into full effect. During this time, policymakers and businesses have a collective responsibility to make the implementation of the Act a success. That starts with ensuring compliance, encouraging AI adoption, and ultimately spurring innovation across Europe.

Getting regulation-ready

The main goal of the Act is to make AI development and use safer and more transparent. By providing guidelines and guardrails for AI developers and deployers, the Act intends to bring more trust and certainty to the use of AI technologies in Europe. This clarity will facilitate compliance and help organizations make more informed decisions about their AI investments and strategies. While the Act includes a phased transition and implementation period, IBM advises all clients to take AI governance seriously and prepare for compliance today.

Understanding the Act’s risk-based approach is key. The Act categorizes AI systems into four tiers based on the level of risk their use poses: “unacceptable,” “high,” “limited,” and “minimal” risk applications. AI practices that pose an unacceptable risk to society – such as using deceptive or manipulative techniques and social scoring – are banned outright. High-risk use cases face stricter obligations to mitigate risks such as security vulnerabilities and bias, across all sectors of the economy from critical infrastructure management to employment. Notably, generative AI is not classified as high risk, although certain usage requirements must be met.
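
To make the tiered structure concrete, here is a minimal Python sketch of the four risk tiers. The tier names come from the Act itself; the example use cases and the classify lookup are illustrative assumptions, not an official or legally meaningful mapping.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned outright (e.g. social scoring)
    HIGH = "high"                   # allowed, but with strict obligations
    LIMITED = "limited"             # lighter transparency obligations
    MINIMAL = "minimal"             # largely unregulated

# Hypothetical mapping of example use cases to tiers; for illustration
# only, since real classification depends on the Act's annexes and legal review.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up the tier for a known example use case."""
    return EXAMPLE_TIERS[use_case]

print(classify("cv_screening_for_hiring"))  # RiskTier.HIGH
```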

Essential steps for compliance

To achieve compliance, organizations must undertake three critical steps:

  • First, organizations need to conduct a comprehensive inventory of their AI applications. This ensures a clear understanding of existing AI usage across the organization.
  • Second, businesses should perform a risk assessment to determine their obligation levels and ensure compliance with essential requirements such as human oversight, privacy, and accountability (see the sketch after this list).
  • Third, organizations will need to adhere to the Act’s technical standards to demonstrate compliance. European standardization organizations are currently developing these standards, with more details expected in the coming months.
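
As a rough illustration of the first two steps, the sketch below models one inventory entry with a per-system risk assessment and flags basic compliance gaps. All field names, tier labels, and checks are hypothetical simplifications; a real assessment requires legal analysis of the Act’s obligations.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    owner: str                      # accountable team or person
    purpose: str                    # what the system is used for
    risk_tier: str                  # e.g. "high", "limited", "minimal"
    human_oversight: bool           # is a human in the loop?
    processes_personal_data: bool   # triggers privacy obligations
    open_issues: list[str] = field(default_factory=list)

def flag_gaps(record: AISystemRecord) -> list[str]:
    """Flag gaps against requirements the Act names (human oversight,
    privacy, accountability). Illustrative checks only."""
    gaps = []
    if record.risk_tier == "high" and not record.human_oversight:
        gaps.append("high-risk system lacks human oversight")
    if record.processes_personal_data and not record.owner:
        gaps.append("personal data processed without an accountable owner")
    return gaps

# Step 1: build the inventory (one hypothetical entry). Step 2: assess it.
inventory = [
    AISystemRecord(
        name="resume-screener",
        owner="HR Platform Team",
        purpose="rank incoming job applications",
        risk_tier="high",
        human_oversight=False,
        processes_personal_data=True,
    ),
]
for record in inventory:
    print(record.name, "->", flag_gaps(record) or "no gaps found")
```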

Reaping the rewards of responsible AI

Compliance will undoubtedly require an initial increase in investment. However, companies that put in the effort to govern their AI solutions responsibly will be able to adapt quickly to evolving regulations while building more trustworthy AI and gaining a competitive edge.

In parallel with compliance efforts, organizations should focus on strengthening their AI governance strategies. This involves adopting cross-company workflow management tools and building automated governance workflows to ensure alignment and transparency across departments. That need is precisely why IBM released watsonx.governance, a one-stop-shop platform that combines trustworthy, pre-trained AI models with sophisticated governance controls to help companies innovate with confidence in their compliance.
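
The pattern behind such automated governance workflows can be sketched generically: a deployment request passes through a chain of departmental checks, each result is logged for transparency, and approval requires every check to pass. The sketch below is an illustrative pattern only, not the watsonx.governance API; the check names and metadata fields are assumptions.

```python
from typing import Callable

# A check inspects a deployment request's metadata and returns (passed, message).
Check = Callable[[dict], tuple[bool, str]]

def legal_check(meta: dict) -> tuple[bool, str]:
    ok = meta.get("risk_assessment_done", False)
    return ok, "risk assessment on file" if ok else "risk assessment missing"

def documentation_check(meta: dict) -> tuple[bool, str]:
    ok = bool(meta.get("model_card"))
    return ok, "model card present" if ok else "model card missing"

def run_workflow(meta: dict, checks: list[Check]) -> bool:
    """Run every check, log each result for transparency,
    and approve only if all checks pass."""
    approved = True
    for check in checks:
        passed, message = check(meta)
        print(f"[{check.__name__}] {'PASS' if passed else 'FAIL'}: {message}")
        approved = approved and passed
    return approved

# Example deployment request with hypothetical metadata fields.
request = {
    "model": "support-chatbot-v2",
    "risk_assessment_done": True,
    "model_card": "https://example.com/model-card",  # placeholder URL
}
print("approved:", run_workflow(request, [legal_check, documentation_check]))
```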

Finally, establishing an AI ethics board and defining ethical guidelines for AI usage are essential steps. Doing so ensures that ethical considerations are integrated into AI development and deployment processes, fostering trust and mitigating reputational risk.

Monitoring the evolving AI policy landscape

This is not the end of the road for the EU AI Act. Companies, governments, and other organizations whose activities fall within the scope of Europe’s AI rulebook will need to pay close attention to developments in the months ahead. For instance, the EU is expected to publish codes of conduct on transparency obligations and on general-purpose AI models, provide templates for fundamental rights risk assessments, publish information on training data for foundation models, offer more guidance on the definition of high-risk AI, and establish governance bodies. Organizations that keep pace with the Act as it evolves will be well positioned to ensure compliance and to future-proof themselves for further innovation and regulation.

We’ve known for years that AI will touch all aspects of our lives. The EU AI Act is a significant step toward balancing those impacts with responsible AI governance. By prioritizing compliance and corporate accountability, organizations can capitalize on regulatory clarity, build trust and confidence in AI systems, and foster a culture of open, responsible innovation.

-Christina Montgomery, Chief Privacy and Trust Officer, IBM

-Jean-Marc Leclerc, Director of EU Affairs, IBM Government and Regulatory Affairs