A solid, collective ethical framework developed by government, citizens and other stakeholders is needed to steer the introduction of new road rules for connected and automated vehicles (CAVs), international experts say.
They warn that strictly forbidding CAVs from ever breaking existing traffic rules may, contrary to what most people might assume, actually hamper road safety. Any such flexibility, however, requires close scrutiny if these high-tech vehicles are to fulfil their potential to reduce road casualties.
“While they promise to minimise road safety risk, CAVs like hybrid AI systems can still create collision risk due to technological and human-system interaction issues, the complexity of traffic, interaction with other road users and vulnerable road users,” says UK transport consultant Professor Nick Reed in a new paper published in Ethics and Information Technology.
“Ethical goal functions for CAVs would enable developers to optimise driving behaviours for safety under conditions of uncertainty while allowing for differentiation of products according to brand values.”
Importantly, the framework does not require all vehicle brands to drive in exactly the same manner, which still allows for brand differentiation, the researchers say.
Around the world, transport operators are already putting CAVs, including driverless cars, on the road to deliver new passenger and freight services, aiming to improve road safety, alleviate congestion, and increase driving comfort and transport system productivity.
A recent European Commission report recommended that CAVs be allowed to break strict traffic rules where doing so minimises road safety risk, and that they operate with appropriate transparency.
Professor Reed, Flinders University Dean of Law Professor Tania Leiman, and other European experts in the field of autonomous vehicle safety say the key recommendation from the EC report (Bonnefon et al, 2020) highlights the need to legislate for CAVs in various traffic and environmental conditions to exercise the equivalent of human discretionary behaviours.
This is complicated even further where there are variations in laws across different countries and regions, and cars are designed or built in one country but used in another, says Flinders Professor Leiman, from the College of Business, Government and Law.
“An automated system that has ‘deduced’ driving behaviour from training examples cannot ‘explain’ or ‘justify’ its decisions or actions in a dangerous encounter,” she says.
“This may be a problem if a manufacturer is required to explain specific behaviour in case of an incident or where civil or criminal liability is disputed.”
Speeding and mounting the kerb to avoid a collision are also evaluated as case studies in the research paper, with the authors arguing that ethical goals should be established through extensive public consultation and deliberation so they are publicly acceptable and understood.
The researchers say a standardised framework will be critical, enabling vehicles travelling from one jurisdiction into the next to update their road rules so that driving standards remain safe, predictable, reasonable, uniform, comfortable and explainable for drivers, manufacturers and all other road users.
“We suggest responsibility for creating the framework of CAV ethical goal functions should sit with an appropriate international body, for example, the Global Forum for Road Traffic Safety of the UNECE, and relevant individual country agencies such as the Department of Transport,” says co-author on the paper Dr Leon Kester, senior research scientist at TNO, The Netherlands.
“Once an ethical goal function has been agreed and enacted by legislators, CAV systems could be designed in such a way that they optimise with the highest utility for road users within predefined boundaries without having a predefined set of infinite scenarios and precise definitions on what to do,” Dr Kester says.
“Also, we have to organise a socio-technological feedback loop where things can be evaluated and changed if we feel it is no longer according to our societal goals.”
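To make the idea in Dr Kester's description more concrete, the minimal sketch below shows how an "ethical goal function" might score candidate manoeuvres against weighted societal objectives and pick the highest-utility option within a hard safety boundary. This is purely an illustrative assumption, not code or terminology from the published paper; every name, weight and threshold here is hypothetical.

```python
# Hypothetical illustration only: a toy "ethical goal function" that scores
# candidate driving actions against weighted societal objectives, subject to
# a hard safety boundary. All names and numbers are invented for this sketch.
from dataclasses import dataclass


@dataclass
class CandidateAction:
    description: str        # e.g. "brake", "mount the kerb"
    collision_risk: float   # estimated probability of collision (0..1)
    rule_compliance: float  # 1.0 = fully within existing traffic rules
    traffic_flow: float     # contribution to smooth traffic (0..1)
    comfort: float          # occupant comfort (0..1)


# In the authors' proposal, weights and boundaries would be agreed through
# public deliberation and legislation, not set by individual manufacturers.
WEIGHTS = {"safety": 0.6, "rules": 0.2, "flow": 0.1, "comfort": 0.1}
MAX_ACCEPTABLE_RISK = 0.05  # hard boundary: never exceed this collision risk


def ethical_goal_function(action: CandidateAction) -> float:
    """Return the societal utility of an action (higher is better)."""
    return (WEIGHTS["safety"] * (1.0 - action.collision_risk)
            + WEIGHTS["rules"] * action.rule_compliance
            + WEIGHTS["flow"] * action.traffic_flow
            + WEIGHTS["comfort"] * action.comfort)


def choose_action(candidates: list) -> CandidateAction:
    """Pick the highest-utility action within the predefined risk boundary."""
    feasible = [a for a in candidates if a.collision_risk <= MAX_ACCEPTABLE_RISK]
    return max(feasible or candidates, key=ethical_goal_function)


if __name__ == "__main__":
    options = [
        CandidateAction("stay in lane and brake hard", 0.04, 1.0, 0.5, 0.6),
        CandidateAction("mount the kerb to avoid collision", 0.02, 0.3, 0.4, 0.3),
    ]
    print(choose_action(options).description)
```

In this toy setup the vehicle may select a rule-breaking manoeuvre (mounting the kerb) when it scores better on the agreed objectives and stays within the risk boundary, while individual brands could still tune behaviour within those legislated limits.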
The article (2021), by Nick Reed, Tania Leiman, Paula Palade, Marieke Martens and Leon Kester, has been published in Ethics and Information Technology (DOI: 10.1007/s10676-021-09614-x).