Self-driving cars: the new trolley problem

Content Warning: this article discusses motor vehicle fatality

Written by Jessie Wen

The rise of autonomous vehicles has made waves in the automobile industry. Yet the growing tendency for Artificial Intelligence (AI) systems to replace human actors has brought a new set of legal and ethical challenges to the table.

Whilst the list is long, this article will focus on some of the more fundamental ethical dilemmas that arise with self-driving cars and AI automation. The traditional trolley problem, a thought experiment that had come to seem increasingly divorced from the challenges of everyday life, can now be reimagined in the context of autonomous vehicles. It presents a variety of issues for both programmers and consumers to consider, and at its core is a question of moral choice and ethical judgement about who to save. The answer is not clear cut, and is often complicated by the nuances of cultural thought and opinion. Alongside these issues, policymakers have been considering the legal implications of this advancing technology and have taken steps to assign liability to manufacturers, indicating a preference to hold them accountable above the everyday consumer.

What counts as a self-driving car?

The Society of Automotive Engineers (SAE), a global standards development body in mobility engineering, sets out six levels of driving automation, from Level 0 (no automation) to Level 5, with human control relinquished a little more at each level. Level 5, Full Automation, involves ‘full time performance by an automated driving system of all aspects... of the driving task under all... conditions that can be managed by a human driver’. Through in-built Intelligent Transport Systems (ITS), the vehicle is able to communicate with other vehicles and infrastructure (such as traffic signals) to assess road conditions and make decisions in order to avoid collisions and other accidents.
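As a loose illustration, the taxonomy can be expressed as a simple data structure. This is a minimal Python sketch only; the level names paraphrase SAE J3016 and are not an official API:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels, paraphrased (0 = no automation)."""
    NO_AUTOMATION = 0           # human performs the entire driving task
    DRIVER_ASSISTANCE = 1       # one assist feature, e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2      # combined steering and speed control; driver supervises
    CONDITIONAL_AUTOMATION = 3  # system drives, but a human must take over on request
    HIGH_AUTOMATION = 4         # no human fallback needed within a limited domain
    FULL_AUTOMATION = 5         # system manages all conditions a human driver could

def requires_human_fallback(level: SAELevel) -> bool:
    """Below Level 4, a human must remain ready to take over control."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

print(requires_human_fallback(SAELevel.PARTIAL_AUTOMATION))  # True
print(requires_human_fallback(SAELevel.FULL_AUTOMATION))     # False
```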

Currently, the vast majority of commercial vehicles already use driver assistance technology, such as blind spot detection and forward collision warnings. Combining human drivers with AI assistance has proven beneficial, with reports indicating a decrease in fatal crashes of approximately 30%.[1] A multitude of risks arises when getting from Point A to B; assistance systems cut down on routine decision making and free up the individual behind the wheel to intervene and make the crucial choices. The technology has also been said to promote human rights and inclusion by increasing mobility for people with a disability, such as those with vision impairments.[1]

However, less has been said about the more fatal decisions self-driving cars will inevitably make, those involving life and death. This raises the question: where can and should we draw the line on the decisions algorithms make for us?

The moral dilemma: who gets to be saved?

In 2018, 49-year-old Elaine Herzberg was struck and killed by a self-driving Uber in Arizona after crossing the street away from a designated pedestrian crossing. Local police reported the collision as ‘entirely avoidable’ had the backup driver behind the wheel been alert and intervened before impact.

However, jaywalking is not an uncommon phenomenon. If a similar situation were to arise, and the car could only either hit the pedestrian or swerve into oncoming traffic and harm the driver, what choice should the autonomous car make?
The age-old trolley problem, first posed by philosopher Philippa Foot in 1967, has prompted vast discussion ever since.

The paradigm involves a number of scenarios taking place on a set of train tracks: a ‘trolley’ (tram) is running along the tracks, set to collide with and kill five people. However, with the pull of a lever, the trolley can be diverted onto another track, where only one person lies waiting. Each ethical scenario differs in its details, including the makeup of the people lying on the tracks.

Gauging responses to the trolley problem becomes more complex when applied across different cultural demographics and countries. An experiment coined ‘the Moral Machine’, whose results were published in the journal Nature, examined 40 million decisions made by people across 233 countries and territories when faced with a number of autonomous driving scenarios. Where death was inevitable, participants were quizzed on their choice to save or sacrifice pedestrians. The survey showed interesting results: those from historically Christian cultural backgrounds showed a stronger preference for saving younger lives over older ones than those from traditionally Islamic or Confucian backgrounds. Clearly, the ‘social consensus’ is not universal. With driverless vehicles being implemented globally, programmers will need to examine the nuances of cultural thought applied to moral ethics, and the different preferences of those groups when faced with the same dilemma.
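To see why this matters to programmers, consider a deliberately simplified sketch of a hard-coded ‘casualty cost’ rule. Everything here is hypothetical: no manufacturer has published such a policy, and the group labels and weights are invented purely to show how the same scenario can resolve differently under different culturally derived preferences:

```python
# Hypothetical illustration only: the weights and group labels are invented
# to show how one scenario can resolve differently under different
# culturally derived preference weightings.

def choose_outcome(options, weights):
    """Pick the option whose casualties carry the lowest total moral cost.

    options: list of (action, casualties) pairs, where casualties is a list
             of group labels such as "younger" or "older".
    weights: dict mapping a group label to the cost of that casualty.
    """
    return min(options, key=lambda opt: sum(weights[g] for g in opt[1]))

# One younger pedestrian on the current path; swerving instead hits two older ones.
scenario = [("swerve", ["older", "older"]), ("continue", ["younger"])]

strong_youth_preference = {"younger": 1.0, "older": 0.4}  # two older cost 0.8 < 1.0
weak_youth_preference   = {"younger": 1.0, "older": 0.9}  # two older cost 1.8 > 1.0

print(choose_outcome(scenario, strong_youth_preference))  # ('swerve', ['older', 'older'])
print(choose_outcome(scenario, weak_youth_preference))    # ('continue', ['younger'])
```

The point of the sketch is not the numbers but the structure: any deterministic policy must commit to some weighting, and the Moral Machine results suggest no single weighting would match every population's intuitions.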

Another concerning issue involves algorithmic bias in the programming and development of autonomous vehicles. In 2019, researchers from the Georgia Institute of Technology found a performance discrepancy in detecting pedestrians of different skin colours: the object detection models studied were more likely to detect, and therefore avoid collisions with, people with lighter skin tones than those with darker skin tones. The issue arose within the data set used to train the models; with more examples of light-skinned individuals, the models had greater difficulty recognising dark-skinned individuals. Whilst algorithmic discrimination is a universal challenge, a lack of due diligence in mitigating bias in the context of autonomous vehicles can have potentially fatal consequences.
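A minimal sketch of the kind of audit that could surface such a gap is below. The numbers are invented and this is not the study's code or data; it simply computes per-group detection recall over hypothetical detector outputs:

```python
from collections import defaultdict

def recall_by_group(results):
    """Compute per-group detection recall.

    results: one (group, detected) pair per ground-truth pedestrian, where
    group is a skin-tone label and detected is whether the model found them.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for group, detected in results:
        totals[group] += 1
        hits[group] += int(detected)
    return {g: hits[g] / totals[g] for g in totals}

# Invented outcomes for illustration: 100 pedestrians per group.
results = ([("lighter", True)] * 92 + [("lighter", False)] * 8
           + [("darker", True)] * 87 + [("darker", False)] * 13)

print(recall_by_group(results))
# {'lighter': 0.92, 'darker': 0.87} -- a recall gap of the kind the study
# attributed in part to under-representation in the training data.
```

In practice such an audit would run over a labelled benchmark, and rebalancing or reweighting the training data is a standard first mitigation.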

Current Law

Currently, there are no laws in Australia that specifically govern the commercial use of automated vehicles on our roads. Existing legislation that could be amended to cover them includes the Motor Vehicle Standards Act 1989 (Cth), which provides a legal benchmark for the physical structure and manufacture of vehicles based on international safety standards. The regulation of road safety and culpable driving offences is delegated to the states and territories; in Victoria, this is governed by statutes including the Road Safety Act 1986 (Vic) and the Crimes Act 1958 (Vic).

However, reform has begun in the Victorian Parliament following the introduction of the Road Safety Amendment (Automated Vehicles) Bill 2017. The Bill proposes a VicRoads permit scheme for the safe trial and monitoring of automated vehicles on roads. To obtain a permit, manufacturers and other legal entities in possession of automated driving technology must demonstrate that relevant ‘safety management mechanisms’ exist within the vehicles they operate.[2]

Relevantly, the scheme differentiates between the permit holder and the ‘vehicle supervisor’ (the driver), and holds the permit holder accountable whilst the vehicle is in automated mode.[3] This clarifies legal liability and insurance issues where road accidents occur. It may also narrow the ‘development risk’ defence currently available under Australian consumer law, under which manufacturers are not liable for safety defects if they can show that the state of knowledge at the time of manufacture did not allow the defect to be discovered.

Moving forward

In 2021, the Australian Human Rights Commission published its report analysing the impacts of new technologies on human rights. Importantly, one of its key recommendations is the establishment of an AI Safety Commissioner able to audit the development and increasing use of AI technology in Australia. It remains to be seen whether the recommendation will be implemented by the Federal Government.

It's clear that the future of driverless cars in Australia will remain uncertain for some time. Whilst the technological capabilities are largely in place, the regulatory and legislative frameworks require further work. With manufacturers and consumers alike waiting on policymakers to resolve the plethora of open questions, it seems getting from Point A to B isn’t so easy after all.



[1] Australian Government, ‘Vehicle safety’, National Road Safety Strategy fact sheet, https://www.roadsafety.gov.au/nrss/fact-sheets/vehicle-safety; WhichCar, ‘Australian study highlights benefit of modern safety tech in cars’, https://www.whichcar.com.au/news/australian-study-highlights-benefit-of-modern-safety-tech-in-cars

[2] L. Donellan, ‘Statement of compatibility: Road Safety Amendment (Automated Vehicles) Bill 2017’, Debates, Victoria, Legislative Assembly, 15 November 2017, p 3823.

[3] Explanatory Memorandum, Road Safety Amendment (Automated Vehicles) Bill 2017 (Vic), http://classic.austlii.edu.au/au/legis/vic/bill_em/rsavb2017381/rsavb2017381.html
