The model was trained on more than 600GB of text from the web, a portion of which came from communities marked by gender, race, physical, and religious prejudices. Studies show that, like other large language models, it amplifies the biases in the data on which it was trained.
In a paper, the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism found that GPT-3 can generate “influential” text capable of radicalizing people toward far-right extremist ideologies. A group at Georgetown University has used GPT-3 to generate misinformation, including stories built around a false narrative, articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation. More recent work suggests that language models can struggle to understand aspects of minority dialects, forcing people who use them to switch to “white-aligned English” to ensure that the models work as intended.