AI image models don't understand qualitative instructions like "more." Articles like this come from a misunderstanding of how these models actually work. When you feed one a seed image, it doesn't know what that image depicts. You can't tell it "make the same image but [x]" — well, you sort of can, but you wouldn't word it like that. If the woman wanted a more professional-looking version of herself, she should have typed something like "selfie, asian woman in red sweater, professional looking." The model does what the prompt and sampling settings dictate; if the output doesn't match your expectations, the first thing to check is the prompt, not the model's politics.
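To make that concrete, here's a toy sketch of the point: instead of a relative instruction like "make it more professional," you spell out every attribute you want kept or changed as explicit descriptors. The function name and the descriptor lists are hypothetical, purely for illustration; they're not any real model's API.

```python
def build_prompt(subject, keep, change):
    """Join explicit descriptors into a comma-separated prompt string."""
    return ", ".join([subject] + keep + change)

# Attributes the user wants preserved from the seed image:
kept = ["asian woman", "red sweater"]
# The attribute the user actually wants changed:
changed = ["professional looking"]

prompt = build_prompt("selfie", kept, changed)
print(prompt)
# selfie, asian woman, red sweater, professional looking
```

The point is that the text encoder only sees a bag of descriptors like this; a comparative like "more professional than the input" has nothing to compare against.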
Look, I even tried to replicate her result, where she looks the same but white, and I couldn't do it. The only way I can see producing that image is if you were almost trying to get a bad output.
Anyone who took the article at face value has probably never tried to use one of these models and doesn't understand the technology. It's clickbait, and while it's amusing to imagine the AI having an agenda, that's just not how it works in reality. Sorry to ruin the fun, I guess.
The annoying thing with these image models is that the output depends on so many variables (the random seed, the prompt, the denoising strength, the sampler) that any single result may as well be random.
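A toy sketch of what that variability means in practice. This isn't a real diffusion model; the `generate` function just draws deterministic pseudo-random numbers from the prompt and seed, the way the initial latent noise in a real sampler does. Same prompt plus same seed reproduces the result exactly; change the seed and you get a completely different "image."

```python
import random

def generate(prompt, seed, size=4):
    """Toy 'sampler': deterministic pseudo-random output from prompt + seed."""
    # String-seeding random.Random is deterministic across runs,
    # standing in for the fixed-seed latent noise of a real model.
    rng = random.Random(f"{prompt}|{seed}")
    return [round(rng.random(), 3) for _ in range(size)]

a = generate("selfie, professional looking", seed=1)
b = generate("selfie, professional looking", seed=1)
c = generate("selfie, professional looking", seed=2)

assert a == b  # same seed: identical result
assert a != c  # different seed: completely different result
```

So cherry-picking one generation out of many and treating it as what the model "thinks" tells you almost nothing; you'd need to look at the distribution over many seeds.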