Self-driving vehicles are all the talk! It would be nice to just be a passenger and let the car make all the driving decisions while you unwind, catch up on work, or chat away on your cell phone. We are told that fewer accidents will occur because our car will be talking to the vehicles around it to keep us all safe. That makes us feel better. Still…fewer doesn’t mean there won’t be any.
Humans often find themselves in tricky situations behind the wheel. We have only a few seconds to decide which action will prevent an accident, or, if that isn’t possible, which action will cause the least damage…to us and our passengers…our vehicle…and those sharing the road with us. There could be an animal in the road, a pedestrian, or a child on a bicycle to consider as well.
This is a very real issue and one that programmers of autonomous vehicles have to keep in mind. How can they program ethics into artificial intelligence? I was just reading an interesting article on this very subject and thought I would share some points that were made.
First of all, programmers would have to set “crash-optimization algorithms” based on human ethical intuitions. Of course, different people have different base ethics, so most likely this will be decided by committee…and who will be on that committee? That remains to be seen. It could be the automakers themselves, or the government.
Once these algorithms are programmed, the buyer of the vehicle needs to know what to expect. Has the buyer bought a car that is concerned only with the owner’s safety and best interests? Is it concerned only with keeping the vehicle itself from harm? Have any “moral” or “legal” considerations been programmed in?
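To give a rough idea of what a “crash-optimization algorithm” might look like, here is a purely illustrative sketch in Python. Every name and number in it…the outcomes, the harm estimates, the weights…is hypothetical and made up for this example; it is not any automaker’s actual code. The point is simply that someone has to choose the weights, and those weights are where the ethics live.

# Purely illustrative sketch of a "crash-optimization" decision.
# All outcomes, harm estimates, and weights below are hypothetical.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    occupant_harm: float    # estimated harm to the car's occupants (0 to 1)
    pedestrian_harm: float  # estimated harm to people outside the car (0 to 1)
    vehicle_damage: float   # estimated damage to the vehicle itself (0 to 1)

# These weights are the "ethics" knob: who or what matters most in a forced choice?
WEIGHTS = {"occupant_harm": 1.0, "pedestrian_harm": 1.0, "vehicle_damage": 0.1}

def crash_cost(o: Outcome) -> float:
    """Lower cost means the maneuver is preferred under these weights."""
    return (WEIGHTS["occupant_harm"] * o.occupant_harm
            + WEIGHTS["pedestrian_harm"] * o.pedestrian_harm
            + WEIGHTS["vehicle_damage"] * o.vehicle_damage)

options = [
    Outcome("brake hard and stay in lane", 0.3, 0.6, 0.4),
    Outcome("swerve toward the shoulder", 0.5, 0.1, 0.7),
]

best = min(options, key=crash_cost)
print(f"Chosen maneuver: {best.description}")

Notice that raising or lowering a single weight changes which maneuver the car picks…which is exactly why the question of who sets those values matters so much.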
How will the self-driving car “learn” human ethics?
We will keep you posted as we learn more!
Please call us with any questions at 626-963-0814 or visit our website at www.CertifiedAutoCA.com.
Hometown Service You Can Count On!