5 minute read

April saw publication of the European Commission’s proposals for an Artificial Intelligence Act to deliver a framework of legislation suitable for governing AI in any area of business or life.    A mammoth endeavour, and a mammoth that needs poise and agility.    The Commission looks to AI as a key to future economic growth which must not be stifled; but product quality standards, public trust, and investor confidence need consistent regulation across the Union.  

The Commission’s proposal is that AI legislation should cover software using machine learning or logic- or statistics-based approaches to deliver content, control, or decisions. Of course, the dangers the legislation tries to guard against – poor product performance, unfair algorithms, over-reliance on poorly understood systems, using data to intrude into private lives – were problems long before AI emerged. But legislation that addressed these risks wherever they arise, in a technology-neutral way, would have been even more complicated and burdensome. So practicability means the Commission has to settle for legislation that blocks bad or stupid systems that use AI, but not systems that cause the same problems using other techniques.

The legislation – just like GDPR – is expected to have teeth. The most serious transgressions can be punished with fines of up to 30 million Euros, or 6% of global turnover. There are also measures to ensure that offshore AI systems can’t be used to get around the legislation – any organisation operating in the EU, or producing outputs used in the EU, will be covered, regardless of where it is based.

The most common AI software and AI-enabled products would, according to the Commission, be classed as minimal risk, with no legal requirements, but perhaps voluntary codes of conduct covering aspects such as environmental impacts.   At the other end of the spectrum, there are a few possible uses of AI – such as in social credit systems, or subliminal control – that are prohibited outright.    

High-risk AI

The main focus of the law is on “high-risk AI” – where the AI is either (a) a product, or a safety component of a product, that is already regulated by law (such as medical devices, civil aviation, or lifts) and so already needs independent checks on conformity with regulations before it can be placed on the market, or (b) a system that can affect individual rights (in public services, law enforcement, employment, biometric identification) or control critical infrastructure.

For all high-risk products and services, there are legal obligations on the organisation that takes responsibility for bringing it into use in the EU (the “provider”), and also on the end users. Providers will have to ensure a rigorous conformity assessment that addresses: life-cycle risk management; data use and selection for training and operation; accuracy, robustness and cybersecurity; transparency and information for users; technical documentation and record keeping; and human oversight. None of these expectations should be a surprise for people working in AI for healthcare. Other guidance for health AI – for example from the FDA in the USA – already mentions most of these.

The creation of legal obligations on users of AI in the EU is more novel, but logical.  They will have to take responsibility in areas such as human oversight, monitoring and reporting serious incidents, and use of data.

Transparency obligation

Outside of high-risk AI, the Commission also proposes that a few low-risk systems should have a new “transparency obligation”. When we encounter synthetic or manipulated images or audio content, we should be told that it is artificially generated; and when we encounter systems that use emotion recognition or biometric identification, we should also be notified and have a chance to opt out.

Keeping the regulation supportive of AI

The Commission is being careful to avoid creating too much new regulatory work, or creating bottlenecks or ambiguities in the path to new AI products. AI-enabled products are to be handled within existing regulatory systems (for medical devices, transport, etc.) where possible. Outside of these areas, for new stand-alone systems in areas like education and employment, the providers will be expected to assess for themselves whether products conform to regulations – initially at least. And where AI systems continue to learn in use, as long as the algorithm and performance develop only within a predetermined and documented framework, this will not be seen as a substantial modification needing pre-launch assessments.

Alongside these, the Commission also envisages national bodies setting up “regulatory sandboxes”, where AI innovations can be trialled in a controlled environment to build legal and technical confidence before the product is confirmed as compliant and launched on the open market.

The hard work starts here

So far so good.  But this framework is just the start, and the hard work for Europe starts now:

  • The quality criteria – such as transparency, freedom from bias, resilience, accuracy, and human oversight – are all important and well-known concepts. Now, sector by sector, new standards will have to be written with expert input, to define how good is good enough, which measurement methodologies are trusted, and how evaluation and performance must be documented. AI developers will need regular updates on what the future regulations will be.
  • Definitions in the proposal are outlines only, and the explosive diversity of AI will make it hard to set enforceable definitions and keep them up to date.   When do (unregulated) personalised nudges become (prohibited) subliminal manipulation?  In medical software we already distinguish between systems that influence decisions a little, those that take decisions or influence them a lot, and those that simply organise information.  Similar distinctions will be needed across many other public services, with in/out dividing lines to define what is “high risk”. 
  • New expertise and new teams will be needed to apply the new standards. Well-established national regulators covering medicine, vehicles and so on will cover AI-enabled products within their jurisdiction, but in areas like education and employment each member state will need to work out which body will lead in their country, for each area. While many AI providers will welcome the expectation that they undertake their own assessments for CE marking before products are launched, some will need stronger regulatory affairs teams. And the independent notified bodies for conformity assessments (such as the TÜVs in Germany) will also need to upskill.
  • Business and public service developers will need a steady flow of information about the future standards, how to meet them efficiently, and how to use the European standards to demonstrate the quality and trustworthiness of their products internationally.  The Commission, quite rightly, anticipates that AI legislation and standards will need frequent updates to respond to technological advances.  

Europe has several years to work through all of this. The legislation is high-profile and complicated and could progress slowly through the system, and with a transition period after it comes into force, it could be 2025 before the new rules fully apply.

But when you consider the amount of work needed, four years seems barely enough, and I predict that what we see in 2025 will be far from finished. It will be interesting to see whether any countries outside the EU see the need for a similar piece of legislation – the need to act in unison makes it a sensible approach for the EU. But elsewhere, sector-by-sector regulation may be a better way forward.

Declan Mulkeen