
In a gruelling negotiation that stretched for hours in Brussels, European Union (EU) representatives faced the Herculean task of hammering out a comprehensive agreement on the much-needed regulation of artificial intelligence (AI). The resulting deal, encapsulated in the newly agreed AI Act, marks a defining moment in the global trajectory of responsible and ethical AI governance.

The AI Act is slated to be a legislative milestone, meticulously crafted to provide a strong regulatory framework for AI technologies and marking a departure from the laissez-faire approach that has so far characterized the AI landscape. The significance of the initiative cannot be overstated, especially given the warnings from influential figures like Elon Musk and Sam Altman about the existential threats posed by unregulated AI.

“In 38 hours of negotiations over three days we were able to prevent massive overregulation of AI innovation and safeguard rule of law principles in the use of AI in law enforcement,” said Svenja Hahn, a German MEP and shadow rapporteur for the European AI Act. “We succeeded in preventing biometric mass surveillance. Despite an uphill battle over several days of negotiations, it was not possible to achieve a complete ban on real-time biometric identification against the massive headwind from the member states.”

The AI Act also sets guardrails for generative AI. Developers of general-purpose AI systems, including powerful models like OpenAI’s GPT-4, will be required to meet basic transparency requirements: maintaining an acceptable-use policy, keeping information on model training up to date, and providing detailed summaries of the data used for training. Models deemed to pose a “systemic risk” would face additional, more stringent obligations. Policymakers have also proposed harsh penalties for companies found violating the regulations, with fines reaching up to €35 million or 7% of global turnover. The AI Act proposals will be voted on sometime next year, and the legislation itself is likely to take effect by 2025.

The AI Act’s purview also extends beyond generative AI models. Rather than focusing solely on systems like OpenAI’s ChatGPT and Google’s Bard, the legislation reaches into critical domains such as law enforcement, surveillance, and infrastructure, for example, governments’ use of AI-based tools and services for biometric surveillance. Live facial scanning will be allowed only subject to safeguards and exemptions, while biometric scanning that categorizes people by sensitive characteristics like political or religious beliefs will be prohibited. Formal approval of the Act is still pending from both the European Parliament and the EU’s 27 member states, however, and the legislative journey stands at a critical juncture: decisions in the upcoming stages are likely to shape the trajectory of AI governance both in the EU and in the global arena.

At the nucleus of the debate was the intricate challenge of harmonizing technological advancement with ethical considerations. European negotiators, spearheaded by Thierry Breton, the EU’s internal market chief, engaged in exhaustive discussions. Beyond its immediate implications for the EU, the AI Act sets a benchmark for responsible governance on a global scale. As nations grapple with the challenge of regulating rapidly evolving AI technologies, the EU’s proactive stance positions it as a trailblazer, influencing the international discourse on ethical AI development.

“We spent a lot of time on finding the right balance,” Breton said in a statement, while Jean-Noel Barrot, digital minister of France, noted that the French government will review the compromise in the coming weeks to ensure it “preserves Europe’s capacity to develop its own artificial intelligence technologies.”