
©2026 Poal.co


This is a conversation my brother always brings up, and it pisses me off every time. Apparently there will come a time when an AI car will need to decide between killing a school bus full of kids or a group of grandmas or something. This always struck me as odd, because I've driven for over a decade now and have never even come close to being forced into such a situation. And how aggressively are these cars driving that they end up in situations like that? But the conversation does something more sinister: it stack-ranks people based on bullshit parameters.


(post is archived)

[–] 0 pt

Here's another solution.

Only swerve if you can miss everything. If you can't miss, stay in the road.

The reason for this is to keep the danger in the road. A pedestrian on the sidewalk will never have to worry about being hit by an AI car. The person in the road is in the wrong, or at least partly to blame.
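The rule above boils down to a trivial decision function. Here's a sketch of it (purely illustrative — the names and inputs are hypothetical, not any real vehicle's control logic):

```python
# Hypothetical sketch of the "stay in the road unless a swerve is fully
# clear" rule. `swerve_path_clear` stands in for whatever perception system
# would answer "can the car leave its lane without hitting anything?"

def choose_maneuver(swerve_path_clear: bool) -> str:
    """Swerve only when the alternate path hits nothing at all."""
    if swerve_path_clear:
        return "swerve"          # the car can miss everything, so leave the lane
    return "brake_in_lane"       # otherwise stay in the road and brake hard
```

The point of the rule is that the sidewalk is never a candidate target, so the car never has to rank one person's life against another's — the "trolley problem" branch simply doesn't exist in the decision logic.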

Going back to the road: we can build barriers on highways. We can put up fences along roads. We can engineer safety into the roads because we know the road is the dangerous area, not the sidewalk. If this rule is followed, the sidewalks will always be safe.

If people are getting hit by AI cars at a certain place, we can identify what the problem is. Maybe the light cycle at an intersection isn't long enough. Maybe the road has poor visibility on the sides.