RESEARCH ARTICLE
The Trolley Problem Version of Autonomous Vehicles
Yair Wiseman*, Ilan Grinberg
Article Information
Identifiers and Pagination:
Year: 2018
Volume: 12
First Page: 105
Last Page: 113
Publisher ID: TOTJ-12-105
DOI: 10.2174/18744478018120100105
Article History:
Received Date: 20/12/2017
Revision Received Date: 05/02/2018
Acceptance Date: 25/02/2018
Electronic publication date: 13/03/2018
Collection year: 2018
open-access license: This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International Public License (CC-BY 4.0), a copy of which is available at: https://creativecommons.org/licenses/by/4.0/legalcode. This license permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Abstract
Introduction:
The Trolley Problem is a well-known ethical dilemma about actively killing one person, or sometimes even more, in order to save a larger number of people. The problem can occur in autonomous vehicles: when the vehicle realizes that there is no way to prevent a collision, the computer of the vehicle should analyze which collision would be the least harmful.
Method and Result:
In this paper, we suggest a method to evaluate the likely harmfulness of each possible collision using Spatial Data Structures and Bounding Volumes, and accordingly to decide which course of action would be the least harmful and therefore should be chosen by the autonomous vehicle.
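The idea of comparing collision outcomes via bounding volumes could be sketched as follows. This is a minimal illustration, not the paper's implementation: the class names, the axis-aligned bounding boxes, and the harm weights are all illustrative assumptions.

```python
# Minimal sketch (illustrative only): choose the least harmful maneuver by
# intersecting the vehicle's projected swept volume with axis-aligned
# bounding boxes (AABBs) of detected obstacles. Harm weights are assumed
# severity scores (e.g. pedestrian >> static barrier).
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box in road coordinates (meters)."""
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def intersects(self, other: "AABB") -> bool:
        # Two AABBs overlap iff they overlap on both axes.
        return (self.xmin <= other.xmax and self.xmax >= other.xmin and
                self.ymin <= other.ymax and self.ymax >= other.ymin)

@dataclass
class Obstacle:
    box: AABB
    harm_weight: float  # assumed severity score for striking this obstacle

def maneuver_harm(swept_box: AABB, obstacles: list[Obstacle]) -> float:
    """Sum the harm weights of every obstacle the maneuver would strike."""
    return sum(o.harm_weight for o in obstacles if swept_box.intersects(o.box))

def least_harmful(maneuvers: dict[str, AABB], obstacles: list[Obstacle]) -> str:
    """Pick the maneuver whose swept volume incurs the lowest total harm."""
    return min(maneuvers, key=lambda m: maneuver_harm(maneuvers[m], obstacles))

# Toy scene: a pedestrian ahead in the lane, a barrier to the right.
pedestrian = Obstacle(AABB(8, -1, 9, 1), harm_weight=10.0)
barrier = Obstacle(AABB(8, 3, 12, 4), harm_weight=2.0)
maneuvers = {
    "straight": AABB(0, -1, 15, 1),      # stays in lane, hits the pedestrian
    "swerve_right": AABB(0, 2, 15, 4),   # leaves lane, hits only the barrier
}
print(least_harmful(maneuvers, [pedestrian, barrier]))  # → swerve_right
```

In a real system the swept volume would be computed per candidate trajectory and the obstacles would come from a spatial index so that only nearby bounding volumes are tested; the harm weights are exactly the kind of moral parameter the paper argues should be fixed by an authoritative board rather than by each manufacturer.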
Conclusion:
The aim of this paper is to emphasize that the “Trolley Problem” arises when the human driver is replaced by a robot, and that if a moral answer is given by an authoritative and legitimate board of experts, it can be coded into autonomous vehicle software.