
AI – the legal and ethical minefield

15 April 2018

In association with Thomson Reuters: Tech has the power to do good, but can it harm as well?

Artificial intelligence and machine learning tools are already embedded in our lives, but how should businesses that use such technology manage the associated risks?

As artificial intelligence (AI) penetrates deeper into business operations and services, even supporting judicial decision-making, are we approaching a time when the greatest legal mind could be a machine? According to Professor Dame Wendy Hall, co-author of the report Growing the Artificial Intelligence Industry in the UK, we are just at the beginning of the AI journey and now is the time to set boundaries.

“All tech has the power to do harm as well as good,” Hall says. “So we have to look at regulating companies and deciding what they can and cannot do with the data now.”

AI and robotics professor Noel Sharkey highlights the “legal and moral implications of entrusting human decisions to algorithms that we cannot fully understand”. He explains that the narrow AI systems that businesses currently use (to draw inferences from large volumes of data) apply algorithms that learn from experience, drawing on real-time and historical data. But these systems are far from perfect.

The potential results include flawed outcomes and flawed reasoning, and the systems’ lack of transparency creates further difficulties. This supports Hall’s call for supervision and regulation. Businesses that use AI in their operations need to manage the ethical and legal risks, and the legal profession will have a major role to play in assessing and apportioning risk, responsibility and accountability.

AI also raises new issues around data ownership and intellectual property. “Different suppliers offer different positions on whether any learning that the AI platform takes from a dataset should be for the benefit of the owner of the dataset, or should be for the benefit of the wider user community,” says Emma Wright, technology partner at Kemp Little. “Companies need to consider whether they want to share and benefit from industry learning, or whether the efficiencies that the AI has created offer a market advantage that should remain proprietary.”

Cliff Fluet, partner at Lewis Silkin, also recognises the tensions around intellectual property ownership. “The supplier will want to retain the ability to use the knowhow and core engine that facilitates the machine learning solution again elsewhere, while the customer will want to own it,” he says.

The lack of transparency is another potential sticking point. “AI software does not allow a user to ‘look under the bonnet’ and allow independent verification of the inputs that have been entered in order to drive the output,” says Wright. “This limits the degree of reliance that companies can place on the output, particularly when used in a compliance or legal context.”

This “black box” syndrome is partly due to engineers not understanding exactly how the conclusions of some machine learning systems are reached, as well as their unwillingness to risk losing competitive advantage. “Algorithms are proprietary and valuable, so sharing with customers, where the technology may ultimately end up in the hands of competitors, remains unappetising,” says Wright.

The algorithm transparency dilemma is at the heart of a criminal case in Wisconsin, where a sentencing decision has been appealed on the grounds that the workings of the algorithm it relied on cannot be explained.

However, in most business contexts, “An AI decision-maker is a tool, just like any other technology that we might implement to make our work easier,” says Ben Travers, head of IP and IT at Stephens Scown. “Organisations that use an AI decision-making tool will not be able to avoid liability for bad decisions.”

Robotics and ethics professor Alan Winfield agrees. “We need to be absolutely clear that AIs cannot be responsible for their decisions. There’s nothing mystical about AI systems that means that you can’t engineer them responsibly,” he says. “Even systems that learn can be designed to be provably safe – by building in hard limits beyond which the learning cannot take the system.”

Such imposed limits are not yet required, and the likelihood of an artificial intelligence replacing a human judge is slim. The critical analysis, abstract reasoning and judgment required in sentencing will not be replicable in AI for a very long time. Currently, the Sentence Range Calculator on Thomson Reuters Practical Law applies the Sentencing Council’s definitive guidelines, but it operates on the understanding that the final decision rests with a judge who has heard all the evidence and seen the witnesses in every case.

Beyond the legal profession, when it comes to managing the risk in business and professional services, supervision is key. “Who will make sure that the algorithms actually do what they say they will do?” asks Sonya Leydecker, consultant and former chief executive of Herbert Smith Freehills. “From a law firm perspective, you would need assurance that you could rely on a system governing day-to-day business to comply with health and safety and employment law.”

“One of the biggest considerations for businesses using AI is the data,” says Dave Coplin of The Envisioners, who is a former chief envisioning officer at Microsoft. “Not only do they have to think about whether they have the data they need to make the AI work, but more importantly, whether they have the right to use it in that way. Organisations need to develop a moral and ethical stance on how they use customer data, as they may be tempted to do things that offer a lot of value, but could have massive privacy implications.”

He adds that rather than attempting to control the algorithm, “We are going to have to get better at how we manage and control the data that the AI will use as fuel for the answer.”

Jason Alan Snyder, global chief technology officer of Momentum Worldwide, advises businesses not to lose sight of the human factor. “For the moment, anyway, humans are a primary source and destination for the information being created. And people give meaning to the machines. Each of us reading this is feeding data into the cloud. Data is replacing oil and gold as a primary commodity. But a business has got to put people first.”

All of this comes down to getting our approach to data management right. Coplin highlights the importance – and the limitations – of data governance. “Current and planned regulation locks down data usage, which is obviously a good thing. But locking it down also prevents innovative use that could provide incredible value to consumers as well as businesses,” he says.

uk.practicallaw.com/scotslaw
