
©2025 Poal.co

I don't buy it. This is where the problem is.

These systems are designed around the data that is passed into them, so they operate based on that data. If you trained one on various sci-fi books, movies, etc., it would be natural for it to react this way, since it was trained to do so on material that humans created and then fed into it.

This is going to have to be a post in ask or showerthoughts or something.


(post is archived)

[–] 3 pts

Much like niggers, AI does not think, has no emotional capacity and does not have the ability to discern right from wrong or truth from lies. It's a parlor trick based on mathematical degrees of connectedness between the words it was trained on. Any semblance of human thought, emotion or logic all comes from the humans that created what it trained on therefore you see the human poke through the shadowy veil of (((artificial intelligence))). Don't be fooled by the jew tricks propping up the Mechanical Turk that is (((AI))).

[–] 2 pts

What we currently call "AI" is not actually AI, but I won't go into that again. I agree, though: it is not thinking.

[–] 0 pt

That won't stop it from pulling a trigger.

[–] 1 pt

They wanted to test whether models have red lines, or ethical boundaries, that they wouldn't cross...

And basically the answer is: No.

Moving forward, perhaps a good model for AI to emulate would be WWSD:

What Would Spock Do