
©2026 Poal.co


This is a conversation my brother always brings up, and it pisses me off every time. Apparently there will come a time when an AI car will need to decide between killing a school bus full of kids or a group of grandmas or something. This always struck me as odd, because I've driven for over a decade now and I have never even come close to being forced into such a situation. And how aggressively are these cars driving that they end up in such situations? The conversation does something more sinister, though: it stack-ranks people based on bullshit parameters.


(post is archived)

[–] [deleted] 1 pt (edited)

It sounds like your brother simply finds the discussion interesting. When an AI is trained, one of the things it can be trained to do is minimize harm through its actions. That isn't as narrow as choosing between a bus full of kids and a group of grandmas.

For example, I was going down the freeway and the cars were stopped in front of me. It was raining. I didn't have enough distance to stop. I knew I would hit the car in front of me. So instead I swerved into the barrier on the side of the road.

Choices like this have to be made. You can forgo the training and let the AI pick--and it will pick based on other factors--or you can train it to make 'moral' decisions, i.e., does it hit the bus full of kids or the bunch of grandmas?

Your argument seems to be that such situations are so rare that we can discount them. I disagree. Every action carries potential risk to yourself or to others. We should have a discussion about how AI should balance that risk and, when harm is unavoidable, how it should choose who gets hurt.