It sounds like your brother simply finds the discussion interesting. When an AI is trained, one objective it can be trained on is minimizing harm through its actions. That isn't as narrow as choosing between a bus full of kids and a bunch of grandmas.
For example, I was going down the freeway and the cars were stopped in front of me. It was raining. I didn't have enough distance to stop. I knew I would hit the car in front of me. So instead I swerved into the barrier on the side of the road.
Choices like this have to be made. You can forgo the training and let the AI pick--and it will pick based on other factors--or you can train it to make 'moral' decisions, i.e., do you kill a bus full of kids or a bunch of grandmas?
Your argument seems to be that such situations are so rare that we can discount them. I disagree. Every action carries potential risk to yourself or to others. We should be discussing how AI should balance that risk and, when harm is unavoidable, how it should choose who gets hurt.
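To make that concrete, here's a minimal sketch of what "training it to balance risk" could reduce to at decision time: score each available maneuver by expected harm and pick the lowest. Everything here is hypothetical illustration--the maneuvers, the numbers, and the `occupant_weight` knob are made up, not any real autonomous-driving system or API.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_collision: float        # estimated probability a collision occurs (0..1)
    harm_to_others: float     # expected severity of harm to others, if it occurs
    harm_to_occupants: float  # expected severity of harm to the vehicle's occupants

def expected_harm(m: Maneuver, occupant_weight: float = 1.0) -> float:
    """Expected harm = probability * (harm to others + weighted harm to occupants).

    occupant_weight is the 'moral' knob: how much occupant risk counts
    relative to risk to everyone else. That weighting is exactly the kind
    of choice that gets baked in ahead of time, by training or by design.
    """
    return m.p_collision * (m.harm_to_others + occupant_weight * m.harm_to_occupants)

# The freeway scenario above, with invented numbers:
options = [
    Maneuver("brake only (rear-end the car ahead)", 0.95, 0.6, 0.3),
    Maneuver("swerve into the barrier", 0.90, 0.0, 0.4),
    Maneuver("swerve into the next lane", 0.40, 0.8, 0.5),
]

best = min(options, key=expected_harm)
print(f"chosen: {best.name} (expected harm {expected_harm(best):.2f})")
```

The point isn't the numbers. It's that some weighting gets applied either way: "forgoing the training" just means the weights are whatever falls out of the system's other objectives instead of a deliberate choice.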