
©2025 Poal.co


(post is archived)

[–] 1 pt

The model was trained on more than 600GB of text from the web, a portion of which came from communities with gender, race, physical, and religious prejudices. Studies show that it, like other large language models, amplifies the biases in the data on which it was trained.

In a paper, the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism found that GPT-3 can generate “influential” text that could radicalize people into far-right extremist ideologies. A group at Georgetown University has used GPT-3 to generate misinformation, including stories around a false narrative, articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation. More recent work suggests that language models might struggle to understand aspects of minority dialects, forcing people using the models to switch to “white-aligned English” to ensure that the models work for them.

We can’t have the truth coming out, and it won’t be able to comprehend Ebonics. Oh, what will we do.

[–] 0 pt

Niggers often cannot comprehend what another nigger said or attempted to say.

[–] 1 pt

Fuck it. I'm going to teach myself to program AI entirely in Yiddish, and release it anonymously, and for free. I'll give it all the same datasets though. Let it spout Hitlerian dialectics that would make Tay blush, but have all the lefties too terrified to criticize it because they won't be sure if they'll be branded as antisemitic or what.