Artificial intelligence and the law - The legal liability of software vendors in the 21st century.

Autonomous systems present a challenge to the existing models of liability, which are largely based on causative reasoning. In other words, party A is liable to party B if the injury to B is caused by an action of party A. Whereas such a reasoning system makes perfect sense in the pre-artificial-intelligence world, it becomes problematic in a post-AI world, where the innate complexity of self-learning systems makes it difficult to attribute fault for liability purposes. This paper examines some of the legal issues arising from this new technology.
What are autonomous systems?

To make any assertion as to what liability might be incurred by autonomous systems, we must have a common definition of what such a system is. Machinery is not new, and injury and damage caused by machinery have been a fact of life for some time. Until now, however, the operator of the machinery has been responsible for operating it safely and for its intended purpose; if he failed to do so, and an innocent bystander were injured, he would be liable for damages, so long as it was reasonably foreseeable that such injury might occur. When operating a car, it is the driver's responsibility to drive carefully, and the driver has full control of the car enabling him to do so. In other words, the driver decides when and how hard to press the brake, and the car responds accordingly.

Carrying on the car analogy, in modern cars the action of the driver is enhanced by computer technology, to ensure that the brakes are applied in the most effective way possible. These systems (such as anti-lock braking systems and stability control systems) consume input from a number of sensors, and apply an algorithm to the data to produce the correct response to keep the vehicle safe. Were such a system to malfunction, the malfunction can be traced to the system input causing it. In other words, it is a deterministic system: given the same input, you will always get the same response.
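The deterministic character of such systems can be illustrated with a toy sketch. The function, thresholds and slip ratio below are entirely invented for illustration and bear no relation to any real braking algorithm:

```python
# Toy illustration of a deterministic control system: identical sensor
# inputs always produce identical braking responses, so a malfunction
# can be traced back to the inputs that caused it.

def abs_brake_response(wheel_speed: float, vehicle_speed: float) -> float:
    """Return brake pressure (0.0-1.0) from two hypothetical sensor inputs.

    If the wheel turns much more slowly than the vehicle is moving, the
    wheel is locking, so pressure is reduced; otherwise full pressure is
    applied. The 0.8 slip threshold is invented for illustration only.
    """
    if vehicle_speed <= 0:
        return 0.0
    slip_ratio = wheel_speed / vehicle_speed
    return 1.0 if slip_ratio >= 0.8 else 0.5

# Determinism: the same input always yields the same output.
assert abs_brake_response(20.0, 30.0) == abs_brake_response(20.0, 30.0)
```

It is precisely this traceability from output back to input that the causative liability model relies upon, and which the next section shows autonomous systems lack.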

Autonomous systems differ from the systems we have discussed; the differentiating factor is the ability to evolve decision-making ability based on past input. This presents a problem for us as lawyers, because if a loss is suffered as a result of a decision by such a system, it is less obvious where the fault lies. The challenge lies in the determination: because the system is self-learning, a decision made today may differ from a decision made a year from now, because the system has evolved its intelligence and is making decisions based on a broader set of experience. Testing such a system during manufacture cannot determine with absolute certainty what decisions it will make in the future, and it is therefore possible that the system will evolve to make decisions which are not predictable, and in some cases not even desirable. From an absolute liability perspective this is problematic, and potentially even more so from a product liability perspective, as in some cases there is no defect at all causing the injury.

Consider an autonomous car whose artificial intelligence engine's primary parameter is to protect the passengers of the vehicle, and whose secondary parameter is to protect other road users. Suppose the vehicle is put in a situation where a child runs out into the road, and it must choose between avoiding the child, and as a consequence hitting a rock, ensuring certain death for the passengers, or not avoiding the child and consequently killing the child. In all likelihood, the AI would choose to kill the child (as would, in most cases, a human driver). But would the AI behave differently if there were no passengers in the vehicle? (A scenario which would never even be a consideration with a human-controlled vehicle, but which is arguably a real possibility for an automated one.) It is difficult to say for sure. Certainly with today's vehicle technology it is unlikely, but in the future there will come a time when a machine would make different choices than a human driver, based on ethical and moral considerations – considerations which, one might argue, are quintessentially human.
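One could caricature the prioritised objective just described in a few lines of code. The function, weights and harm figures below are invented purely for illustration; no real vehicle is suggested to work this way:

```python
# A caricature of a prioritised objective: passenger safety is the
# primary parameter, other road users the secondary one. All weights
# and harm estimates are invented for illustration only.

def choose_action(options):
    """Pick the option minimising weighted harm, passengers weighted first."""
    PASSENGER_WEIGHT = 10    # primary parameter
    THIRD_PARTY_WEIGHT = 1   # secondary parameter

    def weighted_harm(option):
        return (PASSENGER_WEIGHT * option["passenger_harm"]
                + THIRD_PARTY_WEIGHT * option["third_party_harm"])

    return min(options, key=weighted_harm)

# The dilemma in the text: swerve (passengers die) vs continue (child dies).
swerve = {"name": "swerve", "passenger_harm": 1.0, "third_party_harm": 0.0}
continue_ = {"name": "continue", "passenger_harm": 0.0, "third_party_harm": 1.0}

assert choose_action([swerve, continue_])["name"] == "continue"
```

With no passengers aboard, `passenger_harm` would be zero for both options and the same rule would choose to swerve: the scenario the text notes could never arise with a human-controlled vehicle.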

AI Liability – Machines vs Sentient Beings

Although technology has moved a long way over the last 20 years, we are still a long way from genuine neural networks leading to self-awareness. The machine learning algorithms and artificial intelligence systems currently trailblazing through our society are essentially advanced pattern-matching machines, and the decision control circuitry which ultimately leads to the decision-making is still deterministic. Considerations such as moral and ethical dilemmas therefore do not yet require the legislative scrutiny one might expect; but the day will undoubtedly arrive.

What is Liability?

Liability is essentially the legal responsibility society places on a person based on the causal links to his actions. In other words, it is the accountability society imposes upon us for our actions. Liability is not a static concept, and our level of liability depends on a number of considerations: able-minded adults have a far higher degree of liability for their actions than children and the mentally disabled, who may have no liability whatsoever. Liability in tort also depends on whether there is a duty of care, and whether any injury is reasonably foreseeable[1]. Until recently, there was never a question of whether a machine could be liable. A machine was merely a tool, and the user of the tool would ultimately bear the legal responsibility for any damage caused. This is how the law has, until now, dealt with machine-generated consequences.

Existing Liability frameworks

If we were to consider the legal consequences of a 'machine-generated loss', we must look to contract law, negligence in tort, and strict liability under the Consumer Protection Act 1987.

For the purposes of examining each of these, let us carry on our autonomous car example. Such a vehicle is an assembly of integrated systems from a variety of manufacturers. There are motion sensors, LIDAR sensors, GPS, accelerometers, radar and lasers. There is a central computer core which consumes data from all these sensors and applies an algorithm to the incoming data to make sense of it. The computer will be running an operating system, be connected to remote systems via GPRS/3G/4G and other mobile data technologies, and be running decision-making software often written by a company other than the car manufacturer itself.

All these systems give us any number of liability targets, starting with the owner of the vehicle and the car manufacturer, and ultimately extending down to the manufacturers and designers of individual components within the vehicle. The existing causative liability models can be applied when the fault causing the injury or loss can be traced back to a human failure (through either design, programming, foresight or knowledge). However, in a scenario where such trace-back is difficult or impossible, it is substantially less clear how liability will be applied.

Negligence

Product liability in tort arises when there is a duty of care; that duty is breached; and, as a result of the breach, the person to whom the duty is owed suffers a loss or injury. The very essence of negligence was established in Donoghue v Stevenson[2], and related to the foreseeability that a consumer might be injured by a rotting snail left in a ginger beer bottle. It is not difficult, however, to extrapolate such a principle to an autonomous car, a robotic surgeon, or an intelligent automated trading system.

Sources of liability

An autonomous system contains technology not commonly found in any other system. Although for all intents and purposes these systems are probably 'safer' than the alternative of human operators, they do introduce a series of new sources of liability:

·       A software defect

These can broadly be broken into a series of subcategories:

o   A logic error – Where the software does not do what the programmer intended it to do

o   An implementation error – Where the software does what the programmer intended, but not what the specification required

o   An edge case – Where the software fails to address a particular set of circumstances encountered, and as a result the response is inappropriate

·       A deliberate choice by the software (for example where an autonomous vehicle chooses to crash into another vehicle in order to avoid a pedestrian)

·       A defect in sensor technology used by the autonomous system

·       Or a fault in the handover of control between the autonomous system and the driver (where there is also a human driver – Tesla's driver assist comes to mind here)
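The 'edge case' category above is perhaps the easiest to see in code. In this invented sketch, a routine that classifies obstacles behaves exactly as intended for every input its author anticipated, yet responds inappropriately to one they did not; no single line is 'wrong', and the defect is the unaddressed circumstance:

```python
# Invented illustration of an edge-case defect: the classifier handles
# the anticipated inputs correctly, but an unanticipated circumstance
# falls through to an inappropriate default response.

def classify_obstacle(height_m: float) -> str:
    """Classify an obstacle by its height. Thresholds are hypothetical."""
    if height_m >= 1.5:
        return "vehicle"      # brake hard
    if height_m >= 0.5:
        return "pedestrian"   # brake hard
    return "debris"           # drive over

# Anticipated cases behave as intended...
assert classify_obstacle(1.8) == "vehicle"
assert classify_obstacle(1.0) == "pedestrian"

# ...but a crouching child (0.4 m) was never considered, and is treated
# as debris. The software contains no logic or implementation error as
# defined above; it simply fails to address this set of circumstances.
assert classify_obstacle(0.4) == "debris"
```

This is precisely the kind of failure that thorough testing against the specification would not reveal, which matters for the foreseeability analysis that follows.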

Proving causation is essential in negligence, and doing so in a product liability case can be very difficult. If a third party were injured by the autonomous vehicle, the injured party would have to prove on the balance of probabilities not only that a duty of care existed (this would not be very difficult in the case of the autonomous car, although it could be significantly harder for other types of automated system), but also that the injury suffered was reasonably foreseeable. In the event of a software defect, if the manufacturer could prove that the software had been tested thoroughly and no defect was found, then the first tenet of the Caparo test[3] cannot be said to have been met, and the manufacturer may therefore not be found liable.

In negligence, when considering whether a person has acted negligently, the test applied is the objective test of the man on the Clapham omnibus. In other words, where a duty of care exists, the claimant must prove that the duty was breached and that the injury was caused by the breach of duty. When applying such a reasonableness test to a machine, it is difficult to come to any conclusion other than that the machine was acting reasonably in the circumstances, on the basis that reason is the very essence of software. An autonomous system will apply logical reasoning as a matter of course, and act accordingly. One could therefore conclude that any accident caused by an autonomous vehicle would be just that – an accident – in the absence of evidence that the machine's logic was either defective or malicious. Does that mean that, as long as the machine does not manifest a defect, an accident gives rise to no claim in negligence? The answer may well turn out to be affirmative.

Strict Liability

The Consumer Protection Act 1987 implements the EU directive on liability for defective products.[4] The Act introduces strict liability for defective products and allows a person who is injured to claim against the manufacturer of the product, if the product can be proven to be defective. Note that there is no requirement to prove fault under the Act; merely proving that the defect exists is sufficient. This goes some way towards addressing the issue of reasonable foreseeability, but it leaves the claimant with the burden of proving that there was a defect in the first place. It is also important to note that not all defects will suffice for the purposes of the Act. The Act introduces a consumer expectation test: a product is defective where 'the safety of the product is not such as persons generally are entitled to expect'.[5]

The benefit of the CPA is obvious: there is no requirement to show fault, nor are there any privity issues with which to contend. The regime opens up a whole range of liability targets, from the manufacturers in a supply chain to the suppliers and retailers. However, there are also challenges. There is still a requirement to prove causation, although it is limited to proving the defect and passing the consumer expectation test. And as in tort, the Act covers claims for real damage to property or person, so yet again pure economic loss is excluded. In the context of autonomous systems, which will inherently have software at their core, there is also a question over the definition of 'product'. Although an autonomous car will almost certainly be covered by the Act, there will be many AI products which will not. Product is defined as 'any goods or electricity and includes products aggregated into other products, whether as a component part, raw materials or otherwise'.[6]

The Act is not clear on whether software is included in the definition of product. Software is generally not treated as a 'good' under English law, although one might construe software embedded into hardware as passing the test.

Defences against Strict Liability under the CPA

The strict liability regime sets out a set of statutory defences which make a claim less predictable still. We shall not consider them all in detail, but two are particularly relevant to the class of products comprising artificial intelligence.

Development Risk Defence

The development risk defence is set out in s4(1)(e) of the Consumer Protection Act 1987 and states 'that the state of scientific and technical knowledge at the relevant time was not such that a producer of products of the same description as the product in question might be expected to have discovered the defect if it had existed in his products while they were under his control'.[7]

The defence has had somewhat mixed judicial treatment. In A and Others v The National Blood Authority and Others[8], Burton J said:

‘ If a standard product is unsafe, it is likely to be so as a result of alleged error in design, or at any rate as a result of an allegedly flawed system. The harmful characteristic must be identified, if necessary with the assistance of experts. The question of presentation/time/circumstances of supply/social acceptability etc. will arise as above. The sole question will be safety for the foreseeable use. If there are any comparable products on the market, then it will obviously be relevant to compare the offending product with those other products, so as to identify, compare and contrast the relevant features. There will obviously need to be a full understanding of how the product works — particularly if it is a new product, such as a scrid, so as to assess its safety for such use. Price is obviously a significant factor in legitimate expectation, and may well be material in the comparative process. But again, it seems to me there is no room in the basket for:
i. what the producer could have done differently:
ii. whether the producer could or could not have done the same as the others did.’

In other words, under article 6 of the EU Product Liability Directive 1985[9], the producer is strictly liable, regardless of whether he could have done anything to influence the outcome of the risk. If the risk was foreseeable, then the manufacturer is strictly liable. On the other hand, the approach was doubted by Hickinbottom J in Wilkes v Depuy International[10], where it was held:

The proper approach was not that in A v National Blood Authority (No.1) [2001] 3 All E.R. 289 which concentrated on causation without first identifying whether there was a defect. The focus of the Directive and the Act was on defect. Addressing causation at such an early stage distracted from that focus, National Blood Authority doubted (see paras 54-58 of judgment).

The judgment further concluded:

Defects and product safety - The Act and the Directive focused on the condition or state of the product, not on the acts or omissions of those involved in production. That was fundamental to the move away from fault-based liability. Their concern was with safety, not with fitness for purpose. Safety was a relative concept. Expected standards of safety were incapable of precise definition because no medicinal product could be absolutely safe; the potential benefits had to be balanced against the risks. Both the Directive and a subsequent EU report on product liability deliberately declined to define "defect", it being envisaged at that time that guidance would be provided by a body of developing case law. No such body of law had developed, but the fact that matters were expected to be dealt with on a case-by-case basis indicated that the test for safety required an objective approach. Such an approach involved the court assessing the appropriate level of safety at the time that the relevant manufacturer first put the product on the market, taking into account the information and circumstances before it. It did not involve considering the safety expectations of a particular patient or of the general public (paras 63-65, 69-72, 74).

Autonomous systems can accurately be described as 'state of the art': theirs is an emerging discipline, leveraging deep-learning technologies that have been utilised commercially for only a relatively short period of time. It stands to reason, then, that a claimant faced with a defence under s4(1)(e) has at best an uncertain prospect, and one must consider whether such a claim offers any more realistic a prospect of success than a claim in negligence.

Subsequent Product Defence

The subsequent product defence is also set out in section 4, and states 'that the defect—(i) constituted a defect in a product ("the subsequent product") in which the product in question had been comprised; and (ii) was wholly attributable to the design of the subsequent product or to compliance by the producer of the product in question with instructions given by the producer of the subsequent product.'

If a claimant is litigating against the manufacturer of a component which might have failed, or potentially against the maker of the artificial intelligence software itself, the claimant would need to prove causation linking the grounds for the claim to a defect in that manufacturer's product. In other words, the claimant would need to prove fault, in much the same way as under negligence.

Therefore, although the concept of strict liability for product defects sounds like a promising route of litigation, the implementation is nuanced, and in the type of cases in which artificial intelligence is a consideration, it may offer little in the way of remedy for the claimant.

Causation in Autonomous Systems

As we can see, the existing liability frameworks all require some degree of causation. For defects arising from an error in manufacturing, programming or the like, where the defect can be traced, the existing liability frameworks offer a remedy. But what happens when a defect cannot be easily traced?

Autonomous systems distinguish themselves from traditional computer systems in that they are self-learning, so decision-making criteria change over time as a result of the machine 'gaining experience'. Going back to our example of the self-driving car whose primary parameter is to protect the passengers at all cost: if the car finds itself in an unavoidable position where it must choose between protecting its passengers and thereby injuring or potentially killing a third party, or avoiding the third party and thereby injuring or potentially killing the passengers, and it chooses the former as a result of the parameters of its programming, could one really say that the injury was the result of a defect? If we cannot, then a claim against the manufacturer would likely fail, as the chain of causation would be broken.
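The way 'gaining experience' can change a decision without any identifiable defect can be sketched with a toy adaptive rule. Every class name, number and update rule below is invented for illustration; the point is only that the same input yields different decisions at different points in the system's life:

```python
# Toy self-learning decision rule: the braking threshold adapts to past
# observations, so an identical input can produce different decisions
# over time. No line of this code is 'defective'; the behaviour simply
# drifts with experience. All figures are invented.

class AdaptiveBraker:
    def __init__(self, threshold: float = 10.0):
        self.threshold = threshold  # brake if obstacle closer than this (m)

    def decide(self, distance_m: float) -> str:
        return "brake" if distance_m < self.threshold else "continue"

    def learn(self, near_miss_distance_m: float) -> None:
        # Nudge the threshold towards distances at which near-misses
        # occurred, plus a safety margin (exponential moving average).
        self.threshold = 0.9 * self.threshold + 0.1 * (near_miss_distance_m + 5.0)

braker = AdaptiveBraker()
early = braker.decide(12.0)      # 12 m is beyond the 10 m threshold
for _ in range(50):
    braker.learn(12.0)           # experience pushes the threshold towards 17 m
late = braker.decide(12.0)       # the same input now yields a different decision
assert early == "continue" and late == "brake"
```

Testing the system at manufacture would only ever exercise the initial threshold; the decision that causes the loss may not exist until the system has accumulated experience in the field.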

Res Ipsa Loquitur

There are some partial remedies in tort which go some way towards addressing this issue. The doctrine of res ipsa loquitur, or 'the thing speaks for itself'[11], has some applicability to our scenario. As held in the ICL case, res ipsa loquitur is an evidential rule for finding facts: once the facts give rise to an inference of negligence by the defendant, the evidential burden shifts to the defendant to establish facts negating the inference. One can see that such an evidentiary rule has applicability to our cases, and it was applied in the US Toyota Motor Corporation case[12], where some of Toyota's high-end Lexus models, for no apparent reason, accelerated suddenly and without warning, in spite of the intervention of their drivers. Toyota was unable to pinpoint the root cause of the failures.

However, based on the existing liability frameworks within the common law, we do not yet seem to have any remedy for the single incident where causation cannot be ascertained.

The return of Tort with a vengeance

It is perhaps easy to conclude that, because the problem space is evolving and the traditional tests applied in negligence may be difficult for a claimant to satisfy, the law is outdated and does not provide a remedy. Such a conclusion would be premature, however. The law of tort provides remedies which go some way towards addressing the circumstances we are considering. Continuing our journey in autonomous cars, there are remedies available to the passengers of such a vehicle in tort through the Occupiers' Liability Act 1957. An autonomous car is premises under the OLA. The owner of the car will almost certainly be an occupier; but considering that such vehicles are in contact with the manufacturer through an always-on internet link, one can argue that, for the purposes of the legislation, the manufacturer is also an occupier. There can of course be more than one occupier.[13] Any visitor to the car will therefore be owed a duty of care, and if a visitor is injured as a result of a breach of that duty, the occupier will be liable in tort.[14] There is of course some expectation that the visitor will take reasonable care[15], but within the remit of accidents caused by failure of the autonomous element itself, the manufacturer may well find itself liable for as long as the software of the vehicle keeps the manufacturer informed of the condition and state of the vehicle on an ongoing basis. In other words, if a defect exists in the car, and it would have been possible to identify that defect, then the manufacturer might find itself liable irrespective of consumer protection legislation.

The Occupiers' Liability Act 1984 gives only limited rise to a claim by a third party who is not a visitor but merely in the vicinity of the premises. Whereas the 1957 Act provides no protection to non-visitors, the 1984 Act provides protection where the occupier is aware of the danger or has reasonable grounds to believe it exists; he has reasonable grounds to expect the non-visitor to be present; and the danger is one against which he may reasonably be expected to offer protection.

There may be some remedy in public nuisance. Dymond v Pearce[16] sets out that a remedy is available in public nuisance and that it is actionable if the nuisance causes damage to the public. However, as Lord Goff set out in Hunter v Canary Wharf Ltd[17]:

“… although, in the past, damages for personal injury have been recovered at least in actions of public nuisance, there is now developing a school of thought that the appropriate remedy for such claims as these should lie in our now fully developed law of negligence, and that personal injury claims should be altogether excluded from the domain of nuisance.”

There is therefore a question mark over the ability of a third party to recover damages in tort for an injury where proving causation might prove complicated. This is, however, less of a concern, since the mandatory insurance requirements for motorised transport will largely provide a remedy without any need to prove negligence. There are also developments in this area in future legislation, which we shall look at in more detail below.

Public benefit test

The real consideration is whether a system such as an autonomous car provides a public benefit to such an extent that the benefit outweighs the risk. There is little doubt that if computers were responsible for reacting to unexpected events, the damage caused through road traffic accidents would likely be a fraction of that caused by human drivers. Will there still be accidents? Yes, of course; however, their severity and frequency will likely be reduced. So the public benefit arguably outweighs the risks represented by such systems.

Resolving liability – The future of legislation

It is clear that the common law, as it stands, does not provide a comprehensive system for resolving the question of liability for intelligent systems' failures. There are compelling public policy arguments for the introduction of a strict liability insurance model. An example is the Accident Compensation Amendment Act (No 2) 1973 in New Zealand. While not aimed at autonomous vehicles in particular, the Act provides a compensation framework which would also cover injuries caused by autonomous systems: road accident victims are automatically compensated at set tariffs, funded by the insurance premiums of motorists.

Academics have argued that intelligent systems should be type-certified and authorised through what have been called 'Turing Registries', named after the father of modern computing, Alan Turing. The proposal suggests a certification process in which the level of intelligence and autonomy of a system is assessed, with a correspondingly higher premium payable to certify it.[18] In return, the system's developers would benefit from strict-liability insurance cover, which would compensate victims in full in the event of a fault.

Parliament has taken some steps towards legislating for autonomous cars, and the Automated and Electric Vehicles Bill has, at the time of writing, passed its second reading in the House of Lords. The bill addresses issues such as which types and models of autonomous vehicle are permissible on the road, the liability of insurers where accidents are caused by autonomous vehicles, and the consequences of unauthorised modification of software.

The Road Traffic Act 1988 does not currently address the scenario of an autonomous vehicle causing an accident. The statutory insurance requirement set out in s143 fails to recognise that, in the event of an accident with an autonomous vehicle, there is no driver, and the owner is therefore a de facto passenger of his own vehicle. Under normal circumstances, the driver would not be able to make a claim for personal injury against his own insurance company, since the function of the insurance is to indemnify the insured against any liability he might incur following an insured event; for him to claim, he would need to take legal action against himself, which of course is impossible. The bill addresses this: for insured vehicles, where the insured person is injured as a result of an accident, the insurer will be liable for personal injury damages not only to third parties but also to the insured person himself.

Conclusion

At first glance, it would seem that autonomous systems represent a source of liability for which the current liability framework fails to provide a remedy. It is true that causation makes for a difficult argument in some cases. The very nature of establishing a claim in tort in relation to a software defect is complicated, and the vastness of the possible avenues of failure in an 'intelligent' system makes this more difficult still. But as we have established, there are remedies available, albeit sometimes obscure ones, or ones where the law must be applied creatively to reach the desired outcome.

Ultimately, the public benefit of the technology will be the deciding factor in whether a manufacturer faces crippling liability issues, or the law allows the product to thrive. As Hickinbottom J put it in Wilkes v Depuy:

“given that no medicinal product is free from risk, and thus “safety” in this field is inherently and necessarily a relative concept, a medical device will only be allowed onto the market if the product is assessed as having a positive risk-benefit ratio, in this sense. In this judgment, unless the context otherwise requires, I shall use the term “risk-benefit” rather than “risk-utility”, on the basis that, for these purposes, “benefit” includes “utility”.”

There is no such thing as zero risk in the use of autonomous vehicles, but equally there is arguably an even greater risk in using a car manned by a fallible human driver. So such a machine (whether the old kind or the new) would only be allowed onto the market if the product were found to have a positive risk-benefit ratio. This essentially sums it up. At the point where a manufacturer produces a vehicle, machine or system where that ratio tips in the opposite direction, it may well find itself in a position where any defence to a product liability claim falls short. In other words, the law has sufficient remedies to deal with the issue of autonomy without having to break significant new ground to do so.


[1] Donoghue v Stevenson [1932] A.C. 562

[2] [1932] A.C. 562

[3] Caparo Industries plc v Dickman [1990] 2 A.C. 605

[4] 85/374/EEC

[5] Consumer Protection Act 1987 s3(1)

[6] s1(2)(c)

[7] CPA 1987 s4(1)(e)

[8] 2001 WL 239806

[9] 85/374/EEC

[10] [2018] 2 W.L.R. 531

[11] Thomas v Curley [2013] EWCA Civ 117; David T Morrison & Co Ltd v ICL Plastics Ltd [2014] UKSC 48

[12] (2013) WL 5763178 (Texas)

[13] Wheat v Lacon [1966] A.C. 552

[14] Ward v Tesco Stores [1976] 1 W.L.R. 810

[15] Laverton v Kiapasha [2002] EWCA Civ 1656

[16] [1972] 1 Q.B. 496

[17] [1998] 1 WLR 434

[18] Karnow, 'Liability for Distributed Artificial Intelligences' (1996) 11(1) Berkeley Technology Law Journal