
I don't buy it. This is where the problem is.

These systems are designed around the data that's passed into them, so they operate based on that data. If you train one on sci-fi books, movies, etc., it's natural for it to react this way, because it learned that behavior from the books and movies humans created and then fed into it.

This is going to have to be a post in ask or showerthoughts or something.

[–] 1 pt

They wanted to test whether models have red lines (ethical boundaries they wouldn't cross)...

And basically the answer is: No.

Moving forward, perhaps a good model for AI to emulate would be WWSD:

What Would Spock Do?