Well, the only correct answer is to kill both, because doing otherwise would be bigoted against someone.
NEXT DUMB QUESTION PLEASE!
Someone or something has to make the decision. My only problem is that the AI cars are not designed by moral people; they're designed by big business. The ethical and philosophical ramifications are huge, and we are giving building rights to people who have never taken an ethics course, let alone a philosophy course.
What parameters wouldn't be bullshit in your opinion?
It's not that this will definitely happen, but that it could, and the question is how the AI should respond in that situation. They can't code for every possible scenario, so it can't be scripted; they'd need to set these parameters to determine a course of action. Obviously, avoiding deaths will be a primary parameter, so perhaps it would be based on how many lives are assumed to be in each vehicle, combined with the likelihood of survival.
This idea was made famous in "I, Robot." A robot makes a choice between saving a grown man and a young girl. The robot chooses the man because he has a higher survivability factor. If the robot had gone after the girl, both the girl and the man would have died. A human might have instinctively saved the little girl, which seems like the moral thing to do. But if doing the moral thing results in two innocents dead instead of just one, is that really the right thing to do? Or does it just FEEL that way?
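For what it's worth, here's a rough sketch of what that kind of "expected survivors" weighting could look like in code. This is purely hypothetical: the function names, occupant counts, and survival probabilities are all made up for illustration, not anything an actual manufacturer uses.

```python
# Hypothetical sketch: pick the maneuver that maximizes expected survivors.
# Occupant counts and survival probabilities would come from sensors/models;
# the numbers below are invented for illustration only.

def expected_survivors(occupants: int, survival_probability: float) -> float:
    """Expected number of people who survive a given maneuver."""
    return occupants * survival_probability

def choose_maneuver(options: dict[str, tuple[int, float]]) -> str:
    """Return the option whose expected survivor count is highest."""
    return max(options, key=lambda name: expected_survivors(*options[name]))

# Example: braking straight risks four bus passengers at 60% survival each;
# swerving risks one pedestrian at 20% survival.
options = {
    "brake_straight": (4, 0.6),
    "swerve_left": (1, 0.2),
}
print(choose_maneuver(options))  # -> "brake_straight"
```

On this kind of weighting, the robot's choice of the man over the girl is just the higher expected-survivor option, which is exactly why it feels wrong to a human.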
Lesson from I, Robot: never let a robot be put in the position to choose.
Lesson from Blade Runner: slavery is bad, even if they are robots.
It sounds like your brother simply finds the discussion interesting. When an AI is trained, one of the areas it can be trained in is minimizing harm through its actions. This isn't as narrow as choosing between a bus full of kids and grandmas.
For example, I was going down the freeway and the cars were stopped in front of me. It was raining. I didn't have enough distance to stop. I knew I would hit the car in front of me. So instead I swerved into the barrier on the side of the road.
Choices like this have to be made. You can forgo the training and let the AI pick--and it will pick based on other factors--or you can train it to make 'moral' decisions, i.e. do you kill a bus full of kids or a bunch of grandmas?
Your argument seems to be that such situations are so rare that we can discount them. I disagree. With every action comes potential risk to yourself or to others. We should have a discussion on how AI should balance risk, and when unavoidable, how it should choose who gets hurt.
Here's another solution.
Only swerve if you can miss everything. If you can't miss, stay in the road (see the sketch below).
The reason for this is to keep the danger in the road. A pedestrian on the sidewalk will never have to worry about being hit by an AI car. The person in the road is in the wrong and always at least a little to blame.
Going back to the road, we can build barriers on highways. We can put up fences on roads. We can engineer safety into the roads because we know the road is the dangerous area, not the sidewalk. The sidewalks will always be safe, if this rule is followed.
If people are getting hit by AI cars at a certain place, we can identify what the problem is. Maybe the signal at a stoplight isn't long enough. Maybe the road has poor visibility on the sides.
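Here's a rough sketch of that rule as code, assuming a hypothetical planner that can tell whether a swerve path is completely clear. The function and parameter names are invented for illustration; a real system's perception and planning stack would look nothing this simple.

```python
# Hypothetical sketch of the "only swerve if you can miss everything" rule.
# `swerve_path_is_clear` stands in for whatever perception/planning check
# a real system would use; it's invented for illustration.

def choose_action(swerve_path_is_clear: bool) -> str:
    """Stay in the lane unless a swerve avoids hitting anything at all."""
    if swerve_path_is_clear:
        return "swerve"           # only leave the road if nothing gets hit
    return "brake_in_lane"        # otherwise keep the danger in the roadway

print(choose_action(swerve_path_is_clear=False))  # -> "brake_in_lane"
print(choose_action(swerve_path_is_clear=True))   # -> "swerve"
```

The point of the rule is predictability: the car never trades a person in the road for a person outside it, so all the engineering effort can go into making the road itself safer.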
Everyone is ranked by worth; everyone has biases.
"Sinister" yeah, well you confront reality or you just ignore it and wait for it to bite you in the ass. He's asking whether you'd rather save people with their whole lives ahead of them or people who are at the very end of their lives.