This is an interesting piece I stumbled upon while checking tech updates on TechCrunch. The image I used for the post is from Shutterstock.
In the opening scene of the franchise’s 1982 motion picture, Star Trek II: The Wrath of Khan, the U.S.S. Enterprise responds to a distress call from another ship, the Kobayashi Maru. Stranded in an area of space that the Enterprise can’t enter without risking interstellar war, the limping ship has almost 400 souls on board and is quickly losing life support. These people are going to die without help; the captain has an impossible choice to make.
The scene is later shown to be an unwinnable simulation, created as part of a training scenario. Deciding to not aid the Kobayashi Maru results in the death of its crew and passengers. However, acting to help the stranded ship will trigger conflict and result in the death and destruction of the Enterprise. The theme of a no-win scenario is prevalent throughout the rest of the film, and many Star Trek fans have colloquially come to call “damned if you do, damned if you don’t” situations by the name of the ship: Kobayashi Maru.
The idea of the no-win situation has gotten more attention over the last couple of years, as Google has been making strides with the driverless vehicle and Apple is rumored to be getting into the same market. But how does the Kobayashi Maru relate to self-driving automobiles?
Imagine you are driving down the road and you suddenly find yourself boxed in. In front of you is a large semi-truck with heavy crates on the back, to your right is a person on a motorcycle and to your left is a big SUV. All of a sudden, one of the crates falls off the back of the semi, directly in your path. What do you do?
If you swerve to the right, you’ll live, but the move would probably end up costing the person on the motorcycle their life. If you swerve left, you’ll collide with the SUV and possibly kill both yourself and its inhabitants — but there’s still a chance you’ll all survive the incident (albeit sustaining injury) because of the SUV’s high safety ratings. If you don’t swerve either way, you won’t injure anybody, but you’re definitely going to wreck and possibly die. So what should a driver do in this situation? What is the right answer?
This scenario comes from TED-Ed, and is meant simply to illustrate that there is no right answer, especially when there is little time to think. Each choice has a negative consequence, and the driver has to decide which option is, in their mind, the lesser of the evils.
Unfortunately, a person’s reactions in situations like these are more instinctual than they are based on decision or logic, simply because humans can’t process information that fast. Computers, on the other hand, can.
Machines don’t get drunk and drive.
The driverless car as an invention has the potential to prevent approximately 1.3 million deaths annually, as well as between 20 and 50 million injuries, according to ASIRT. Driverless cars can network with other smart cars and with traffic lights so that 151 million Americans can get to work faster and more safely. Because machines don’t blink. They don’t sleep or get drowsy. Machines don’t get drunk and drive.
In the only accident to date involving a self-driving car, humans were determined to be at fault, not machines — and yet, therein lies the problem. Accidents will happen, and a computer must be programmed to react in those situations, sometimes when death is inevitable. In those instances, it is no exaggeration to say that we’ll have to program computers to kill.
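To make the idea concrete: one way such a programmed reaction is often imagined is as a cost-minimization rule, where each maneuver is assigned an expected harm and the machine picks the smallest. The sketch below is purely illustrative — the option names, probabilities, and severity scores are invented assumptions, not how any real autonomous-vehicle planner works.

```python
# Hypothetical "lesser of evils" decision logic for the boxed-in scenario
# above. All numbers are arbitrary, for illustration only.

def choose_maneuver(options):
    """Return the maneuver with the lowest expected harm score."""
    return min(options, key=lambda o: o["p_harm"] * o["severity"])

options = [
    # name, probability that harm occurs, severity if it does (arbitrary scale)
    {"name": "swerve_right", "p_harm": 0.9, "severity": 10},  # endangers the motorcyclist
    {"name": "swerve_left",  "p_harm": 0.5, "severity": 8},   # collide with the SUV
    {"name": "brake_only",   "p_harm": 0.7, "severity": 7},   # hit the fallen crate
]

best = choose_maneuver(options)
print(best["name"])  # prints "swerve_left" under these invented numbers
```

Note that the "answer" is entirely a product of the weights someone chose in advance — which is exactly why the question of who sets those weights, and how, is the uncomfortable part.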