The Defence Case for AI Regulation

Article by Sandip Patel KC

The European Parliament’s adoption of its position on the EU AI Act, built on a foundation of shared values and consensus on the core elements of responsible AI, marks the beginning of what will be a welcome and seismic shift in the global AI policy landscape. Carried by a decisive majority, the vote sets up the so-called trilogue negotiations, the final phase of the EU’s legislative process, and paves the way for the likely adoption of Europe’s — and the world’s — first comprehensive AI regulatory framework.

The EU AI Act is expected to become law in early 2024 (a hard deadline created by the European Parliament elections in early June 2024). The Commission proposed a 24-month period for subsequent compliance; the Council wants to extend this to 36 months; but the new sense of urgency around regulation may mean it is shortened.

If the past is prologue, there may be a sense of déjà vu when it comes to Europe playing a leading role in the development of AI policy. In 2016, Europe adopted privacy legislation – the General Data Protection Regulation, better known simply as the GDPR – which has gone on to shape privacy regulation worldwide, its influence rippling outwards in the so-called “Brussels effect”. In 2019 Commission President von der Leyen said: “With the General Data Protection Regulation we set the pattern for the world. We have to do the same with artificial intelligence.”

Do we need AI regulation?

AI is embedded in our lives, and it is here to stay; a recent study by PricewaterhouseCoopers (PwC) found that AI could contribute US$15.7 trillion to the global economy by 2030, making it the most significant commercial opportunity in today’s economy. AI is all-pervasive. Many organisations are now using it (or trying to). Increasingly advanced and meaningful decisions are being delegated to AI systems, some of which are subject to regulation but many of which are not.

There are various codes of AI ethics but few binding laws as yet. The application of certain laws currently in force to AI remains untested and therefore unclear.

AI systems give rise to novel issues because current legal and moral systems are premised on human decision-making. These new problems include questions such as:

• Who should be responsible if AI causes harm?

• Who should own the output if AI creates something valuable that might otherwise be protected by intellectual property laws or by provisions on freedom of speech?

• What parameters should AI consider when taking decisions that involve a trade-off between competing values?

• Are there any areas from which AI should be banned, or in which human intervention should be made mandatory?

The question of whether a regulation is the appropriate instrument for governing AI in Europe (or indeed elsewhere) is hotly debated. To offer a typical lawyer’s view, “it depends.” On the one hand, as with the GDPR, by opting for a regulation that is directly applicable in all EU Member States, the Commission seeks to prevent the fragmentation of applicable legal frameworks across Europe into varying national legislative requirements. From a business perspective, there is a balance to be struck between speed, innovation, and human-centric AI. There is broad consensus that the alternative of letting market forces freely determine how AI is used, without any rules, would pose an unacceptable and dangerous risk.

Therefore, the real question is not whether we need regulation, but whether the right approach is being followed. In the meantime, organisations, including governments, that seek to use AI operate under a degree of uncertainty as to how the technology should be managed. Such uncertainty is bad for business: firms may hold off on investing until the regulatory picture is clearer. Sam Altman, the trailblazer behind ChatGPT and Worldcoin, appeared to fire a warning shot at Brussels, suggesting his company could pull its services from the EU if regulation proved too tough. “We will try to comply, but if we can’t comply, we will cease operating,” said Altman, before later rowing back on his comments.

It is sometimes thought that regulation and innovation are opposed to each other. This is not correct. Instead, when regulation is designed well it can create a stable framework for innovation, promoting societal trust in new technologies and encouraging entrepreneurs to build their companies in a jurisdiction. The current situation is also damaging for the wider population, which may lose out on the advantages of AI, or alternatively may suffer harm through the unethical use of AI but lack any legal recourse or protection. It is therefore important for regional, national, and supranational governments to play a coordinating role in establishing clear and effective AI regulatory policies.

As regards AI-specific regulation, the EU’s AI Act is arguably the most detailed, developed, and wide-ranging proposal. The EU AI Act takes a tiered, risk-based approach. AI applications that pose an unacceptable risk of harm to human safety or to fundamental rights will be banned. High-risk (but not prohibited) AI will be subject to detailed regulation, registration, certification, and a formal enforcement regime. Lower-risk AI will mainly be subject to a transparency requirement. Other AI applications will be unregulated.

The UK government’s approach, as set out in its March 2023 white paper, is markedly different from the EU’s: it aims to consolidate the UK’s role as the leading European AI hub through a US-aligned, regulation-light environment intended to boost productivity and attract innovative businesses, leaving existing regulators to oversee AI using their existing jurisdiction and powers.

But while there is global consensus on the need for AI regulation, there is no agreed pathway. In fact, there is obvious tension between the approach of the EU and those of the UK, US, and others.

This ambiguity consigns organisations to a Catch-22 dilemma. A business deciding how to approach AI compliance, and whether to begin aligning with emerging regulation, may therefore base its decision on its target markets for future expansion. The problem is particularly acute for UK businesses that want access to neighbouring EU markets: that access will depend on compliance with the EU AI Act, and on bearing the corresponding compliance costs.

Separately from mandatory compliance, many businesses are nonetheless choosing to undertake an “algorithmic impact assessment” to understand the potentially wide-ranging risks posed by AI and the scope for mitigating them.

Various assessment frameworks are available. The UK’s Centre for Data Ethics and Innovation has recently published case studies of real-world AI assurance techniques across various sectors to build skills and understanding in this area (including an EU AI Act readiness assessment tool from the British Standards Institution). The Commission has set up the European Centre for Algorithmic Transparency, while the European Law Institute has published model rules for algorithmic assessment in the public sector. The Netherlands already requires public authorities to audit algorithms’ impact on human rights. AI audits will become increasingly important as organisations seek to understand and manage AI risk and impending compliance obligations.

The one certainty is that the AI road ahead will be bumpy and exhilarating!