Posted by pgk pgk on 06/02/2022 01:18:40:
Posted by Colin Whittaker on 06/02/2022 00:52:25:
Somebody commented on the liability of autonomous vehicles which reminded me of something that has puzzled me for a while.
Does a driverless car have to be 100% safe (whatever that means)? Or should we be happy if it is just 10% (or better) safer than the average road user?
Is there any chance of us being rational about this?
A prerequisite has to be that it's better than a human driver, but the trolley problem means there is no 100%.
Trolley problem
Ethical dilemmas abound: does the car crash into a bus queue, or hit a stone wall and kill its own fewer occupants – even if one is a baby?
Asimov's three laws of robotics avoided such dilemmas
pgk
Do Asimov's laws avoid the dilemma? They are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I think not. Only 'A robot may not injure a human being or, through inaction, allow a human being to come to harm.' applies to the Trolley Problem. But the robot injures humans whether it acts or not. I'd program it to evaluate using the principle of least harm, and a body count of 5 versus 1 means the robot will throw the switch. And if not enough information is available, I would program it to choose at random.
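Purely to illustrate, a least-harm rule of that sort could be sketched in a few lines of Python. Everything in it is my own invention, nothing from any real vehicle: score each available action by its estimated casualty count, pick the smallest, and fall back to a random choice when no estimates are available.

import random

def choose_action(actions):
    """Pick the action with the lowest estimated casualty count.

    `actions` maps an action name to an estimated number of people harmed,
    or to None when no estimate is available. Entirely hypothetical - a toy
    illustration of 'least harm, otherwise choose at random'.
    """
    # Keep only the actions we can actually score.
    scored = {name: harm for name, harm in actions.items() if harm is not None}

    if not scored:
        # Not enough information: fall back to a random choice.
        return random.choice(list(actions))

    # Otherwise take the action with the smallest estimated body count.
    return min(scored, key=scored.get)

# The classic 5-versus-1 case: the robot throws the switch.
print(choose_action({"do nothing": 5, "throw the switch": 1}))

# No usable estimates: the choice is random.
print(choose_action({"swerve left": None, "swerve right": None}))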
Interestingly, a robot might have time to identify and score the value of individuals, perhaps sacrificing 5 ancient Model Engineers known to despise the Nanny State in favour of one young mother-to-be who happens to be the Prime Minister's preferred advisor.
The Principle of Least Harm could be used to justify removing human drivers from the roads. If it were proved that robot drivers caused less harm, it's logical to protect people by taking incompetent human drivers off the roads. Logically it doesn't matter how good individuals are, or believe themselves to be, or how much pleasure they get from driving. They're an avoidable risk. I doubt any government would take that line, but I think it likely Insurance Companies will price human drivers off the road by demanding sky-high premiums. Don't panic, it's not going to happen quickly. Although AI has made huge strides in my lifetime, there's still a lot to do.
One thing that makes autonomous cars difficult to program is coping with manual vehicles moving haphazardly on the same road. If all vehicles were autonomous it wouldn't be difficult for them to communicate. Whereas humans are limited to what they can see, an autonomous car could be tracking the position, speed and direction of every other vehicle within a significant radius. Today's technology is quite capable of understanding the movements of the nearest few hundred cars, most of which a human driver wouldn't even know existed.

If necessary the robot could tell other nearby cars to do an emergency stop, or divert their route away from trouble. Or two autonomous cars approaching a cross-roads at speed could negotiate which will cross first, altering their relative speeds as necessary to avoid a collision. Plenty of other examples. This kind of cooperation is impossible when human drivers are in the mix because wetware can't process this type of control information fast enough.
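As a toy example of that cross-roads negotiation (again Python, and the whole protocol is made up; real vehicle-to-vehicle schemes are far more involved): each car works out its arrival time at the junction, the later one eases off just enough to arrive a couple of seconds behind, and neither needs line of sight to the other.

from dataclasses import dataclass

@dataclass
class Car:
    name: str
    distance_m: float   # distance to the junction in metres
    speed_ms: float     # current speed in metres per second

    def eta(self):
        """Seconds until this car reaches the junction at its current speed."""
        return self.distance_m / self.speed_ms

def negotiate_crossing(a, b, margin_s=2.0):
    """Decide which car crosses first and slow the other so their arrival
    times differ by at least `margin_s` seconds. A grossly simplified,
    hypothetical protocol, purely to show two robots agreeing a crossing
    order instead of relying on what a driver can see.
    """
    first, second = (a, b) if a.eta() <= b.eta() else (b, a)
    required_eta = first.eta() + margin_s
    if second.eta() < required_eta:
        # Ease off just enough to arrive `margin_s` seconds behind.
        second.speed_ms = second.distance_m / required_eta
    return first, second

first, second = negotiate_crossing(
    Car("eastbound", distance_m=80, speed_ms=20),
    Car("northbound", distance_m=85, speed_ms=20),
)
print(f"{first.name} crosses first; {second.name} slows to {second.speed_ms:.1f} m/s")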
Dave