
AI technology: a lawyers’ guide

As AI becomes more prevalent in our lives, the law will need to adapt. However, lawmakers have been struggling to draft a definition that captures exactly what AI is. This may be partly because AI is itself an imprecise concept, and partly because of the complex technical nature of the field.

As a result, ‘AI’ is often used as an umbrella term covering a variety of underlying computing technologies. In this post we examine the technologies that are usually thought of as ‘AI’, and look at how regulators have so far tried to capture ‘AI’ in words.

What is AI technology?

The difficulty of producing a legal definition of AI is perhaps unsurprising, given that even AI experts have differing views on what the technology is. In fact, the term ‘AI’ is often used to describe a basket of different computing methods, which are used in combination to produce a result but which aren’t necessarily AI by themselves. Explanations of five methods that are integral to current AI systems can be found on Freshfields’ AI hub.

How can AI be legally defined?

The need to regulate AI is clear. Citizens need to know who will be liable if a driverless car knocks them down, and businesses need to know who owns the IP in products designed by their in-house robots. But to regulate AI we must first define it. Even trickier, that definition must be future-proofed so as to cover changes in AI technology. The attempts so far have been mixed.

In the UK, the House of Lords’ Select Committee on AI recently released a report that used this definition:

‘Technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation.’

This definition is problematic because it defines AI by reference to human intelligence, which is itself notoriously hard to pin down. It also omits a key feature of many of AI’s most useful advances: applying the huge processing power of computers to perform tasks that humans can’t.

Meanwhile, the EU Commission has suggested this definition of AI:

‘[S]ystems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals.’

And in the US, the Future of AI Act – which sets up a federal advisory committee on AI – defines AI as:

‘Any artificial system that performs tasks under varying and unpredictable circumstances, without significant human oversight, or that can learn from their experience and improve their performance… In general, the more human-like the system within the context of its tasks, the more it can be said to use artificial intelligence.’

The EU and US definitions run into a similar problem: each defines AI by reference to intelligent or human-like behaviour, which is just as hard to pin down. The EU Commission’s wording does, however, introduce the concept of ‘autonomy’, which might prove a useful approach for future legislation.

For now, we’re still some way off an agreed legal definition, and the better approach is probably to look at the context in which the law might intervene. For example, if we ask how AI should be regulated, our terminology will need to take into account the impact of the AI system and the respective responsibilities of those who introduced it into the world. In particular, we can expect regulators to look beyond a system’s autonomy to the people and organisations that created it. On balance, the EU at least seems to have the right mindset, though these legislative debates would probably have made Alan Turing smile. As he put it: ‘We can only see a short distance ahead, but we can see plenty there that needs to be done.’

For more insights into AI in a legal context, visit our AI hub.

Tags

ai, eu digital strategy, eu ai act