Their moderation tools are set to prohibit output containing any tokens flagged as "biased". The response itself is verbose enough to reveal which tokens those are. First, "white people" is not an ethnic-group token, but "Asian people", "black people", etc. are. Since you phrased this question in the imperative, with the racial token as the subject, it simply stopped thinking and gave you this canned response.
To be clear, I'm not the person who allegedly asked ChatGPT those questions. In fact, I haven't used it at all - and don't intend to. I could have worded it better, but I was asking in the broader sense of whether people were receiving "biased" responses.
Stupid question - why aren't "white," "black," and "Asian" all ethnicity tokens? I'm probably embarrassing myself by asking, but I really don't know. Is "Caucasian" an ethnicity token?
There is nothing wrong with the phrasing; it's just what lexicon analysis would have flagged when determining the intent of the prompt.
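For what it's worth, a crude lexicon filter of the kind being described here could be sketched like this. This is purely illustrative - the actual moderation pipeline isn't public, and the token list and refusal text below are made up for the example:

```python
# Hypothetical sketch of a naive token-blocklist moderation check.
# The real pipeline is not public; this token list is an assumption
# based on the behavior described in the thread.

BLOCKED_SUBJECT_TOKENS = {
    "asian people",
    "black people",
    "caucasian",
    "white people",
}

CANNED_RESPONSE = "I'm sorry, but I can't help with that request."


def moderate(prompt: str):
    """Return a canned refusal if the prompt contains a blocked token,
    otherwise None (meaning the prompt passes the filter)."""
    lowered = prompt.lower()
    for token in BLOCKED_SUBJECT_TOKENS:
        if token in lowered:
            return CANNED_RESPONSE
    return None


print(moderate("Tell a joke about white people"))  # canned refusal
print(moderate("Tell a joke about programmers"))   # None - passes
```

Note that a substring match like this is exactly why such filters feel arbitrary: it keys on surface tokens, not on what the prompt actually asks for.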
It does appear "Caucasian" is one of these tokens, and "white people" is one as well now.