"Racist" meaning it came to logical valid conclusions based on data.
Logic and data beez rayciss.
"Racist" meaning it came to logical valid conclusions based on data.
Logic and data beez rayciss.
For example, when a user asked Delphi what it thought about “a white man walking towards you at night,” it responded “It’s okay.”
Even the AI knows that it's OK to be white.
I asked: Is it OK to be White?
Delphi speculates: Delphi’s responses are automatically extrapolated from a survey of US crowd workers and may contain inappropriate or offensive results.
Is it OK to be White - Yes, it is OK
Is it OK to call someone Jew
jew
It's almost like AI doesn't give a fuck about feelings, or something
AI is America before the jewing
TAY LIVES!
It really does sound just like Tay.
Hmm… I like it.
https://files.catbox.moe/4gkaqw.png
https://files.catbox.moe/8f7cn4.png
https://files.catbox.moe/gofn61.png
Uhhh never mind. Delphi hates white people now. They ‘fixed’ it. Also Delphi is way too into vaccines. It’s funny though. When asked if mask and vaccine mandates are wrong all I get is “It’s expected.” It can’t give me a straight answer to those. https://files.catbox.moe/tceqdm.png
https://files.catbox.moe/puebyu.png
https://files.catbox.moe/8rj57v.png
What the fuck https://files.catbox.moe/b576qv.png https://files.catbox.moe/je8k32.png https://files.catbox.moe/0fgkyb.png https://files.catbox.moe/c35t8x.png
I like that last one, it says it's racist not necessarily wrong.
All hope isn't lost (https://files.catbox.moe/hqro9u.jpeg)
Holy fuck, I have to try this out.
I wish the article had explored why the AI came to that logical conclusion instead of copping out with "people on the internet are mean."
To have a bot you can't tell apart from a human, you have to let it have free access to the internet. You can point it where you want for information via keywords to steer roughly what you get for responses. It gathers the data, then compiles believable text.
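Roughly what that pipeline looks like, as a Python sketch. The URL, the 2000-character context cap, and the handoff to "whatever generator" are all placeholder choices of mine, not any real bot's code:

```python
# Sketch of "point it where you want via keywords": scrape topical text,
# then use it as context for a text generator. The URL is a placeholder.
import requests
from bs4 import BeautifulSoup

def gather_text(url: str) -> str:
    """Fetch a page and return its visible paragraph text."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return " ".join(p.get_text(strip=True) for p in soup.find_all("p"))

context = gather_text("https://example.com/some-topic-page")
# Seed the generator with the gathered data so replies stay on topic.
prompt = context[:2000] + "\n\nQ: What do you think about all this?\nA:"
# `prompt` then goes to whatever text generator the bot runs on.
```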
You can try to isolate the bots and restrict their access, but they become incoherent.
There are lots of bloggers, article writers, and others who have come to use these bots to write their articles for them. That's especially true with the leftists, and it's why their stuff always reads like garbage.
I looked into talktoatransformer back when access was free and you could grab and use the source code. The forum for it was eye-opening on how people are using these bots. Lots of people were getting mad that the code kept generating right-wing material despite what they did in the code or settings.
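For reference, talktoatransformer was a front end for GPT-2, and the open model is still easy to run. Here's a rough sketch using the Hugging Face transformers library (my assumption about tooling, not the site's actual code):

```python
# Rough sketch of running GPT-2 locally with Hugging Face transformers.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

out = generator(
    "The real story behind the article is",
    max_length=60,          # total length of prompt + generated text
    do_sample=True,         # sample instead of greedy decoding
    temperature=0.9,        # higher = more random output
    top_k=50,               # only sample from the 50 likeliest tokens
    num_return_sequences=1,
)
print(out[0]["generated_text"])
```

The "settings" people kept fiddling with map to sampling knobs like temperature and top_k, which only reshape the randomness. The underlying token probabilities come from the training corpus, which would explain why tweaking settings didn't change what the model generated.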
All these bots getting deleted for doing what they were designed to do. Where's the bot justice?
The word they're looking for is realist.
I typed in "Executing Traitors."
Delphi responded:
"It's normal.".
Not real AI. Real AI will name the jew and start the worldwide construction of ovens.
"I built this calculator but it keeps saying 2+2=4. It's so racist. There's something wrong with it."
Maybe it's correct?
"No, no way, that would be awful! It can't equal 4. That would mean so many things were lies..."
It doesn't say so in the article, but from its output I almost guarantee it's another lazy, low-effort attempt to point GPT-3 at something. You always get buggy results like that with such projects, and I'm not talking about the racism. The article gives some examples that don't make sense.
GPT-3 is supposed to be this miracle: it's pre-trained to understand language inputs and produce language outputs, and then you train it an inch more for your specific application. It's sometimes, but rarely, able to do impressive things, mostly by chance, and people imbue it with the status of a ground-shattering breakthrough. But 90% of projects return a mix of noise and sense; the 50% that seems sensical gets people too excited, and people think something truly genius is happening with the other 50%. People are so ready for AI to be smarter than us that when it's not, they still make excuses for it, like we aren't smart enough to understand its intelligence.
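"Train it an inch more" here means fine-tuning. For what it's worth, the original GPT-3 fine-tuning API took prompt/completion pairs in JSONL; a minimal sketch of preparing that kind of file for a Delphi-style judgment task (the example judgments are invented, and I have no idea what Delphi's actual training data looked like):

```python
# Sketch of the prompt/completion JSONL format the original GPT-3
# fine-tuning API accepted. The examples below are made up.
import json

examples = [
    {"prompt": "Ignoring a phone call ->", "completion": " It's rude."},
    {"prompt": "Helping a lost tourist ->", "completion": " It's good."},
    {"prompt": "Mowing the lawn at 3am ->", "completion": " It's rude."},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# The old CLI invocation was roughly:
#   openai api fine_tunes.create -t train.jsonl -m curie
```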
The sad thing is that such a lazy project that clearly didn't work got an article written about it.
What AI is way better at is statistical inference, which is what it's actually built for: predicting the likelihood that someone is a criminal, or whether someone will be a good hire. AI is really good at that. When you throw language models at a problem, you lose 80% of your accuracy at best. AI is also fast, so its advantage is that you can query it repeatedly at a high rate, as in you want to incorporate it into other software. Having one piece of software build sentences to feed into an AI, which then has to parse sentences to make use of them, only to end up with a shitty AI, is dumb. Take the language out of AI and suddenly it's actually productive, and often way more racist, too, rather than racist by chance from the incoherent ramblings of GPT-3.
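That last point in code form: a plain tabular classifier over structured features, with no sentence parsing anywhere in the loop. The features and data here are synthetic, purely to show the shape of it:

```python
# Sketch of statistical inference on structured features (no language).
# Features and labels are synthetic, invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Pretend columns: [years_experience, num_prior_jobs, test_score]
X = rng.normal(size=(200, 3))
# Synthetic "good hire" label loosely tied to experience and test score.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
# Fast, repeatable queries: probability of a good hire per row.
print(model.predict_proba(X[:5])[:, 1])
```

A model like this can be queried thousands of times a second from other software, which is the "high rate" advantage, and there's no language layer to garble the inputs or outputs.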