It doesn't say so in the article, but from its output I'd almost guarantee it's another lazy, low-effort attempt to point GPT-3 at something. You always get buggy results like that with such projects. And I'm not talking about the racism. The article gives some examples that don't make sense at all.
GPT-3 is supposed to be this miracle: pre-trained to understand language inputs and produce language outputs, and then you train it an inch further for your specific application. It's sometimes, but rarely, able to do impressive things, mostly by chance, and people confer on it the status of a ground-shattering breakthrough. But 90% of projects return a mix of noise and sense; the 50% that seems sensical gets people too excited, and they decide something truly genius must be happening in the other 50%. People are so ready for AI to be smarter than us that when it's not, they still make excuses for it, as if we just aren't smart enough to understand its intelligence.
The sad thing is that a lazy project that clearly didn't work still got an article written about it.
What AI is way better at is statistical inference, which is what it's actually built for: predicting the likelihood that someone is a criminal, or whether someone will be a good hire. AI is really good at that. When you throw a language model into the mix, you lose 80% of your accuracy at best. AI is also fast, so its advantage is that you can query it repeatedly at a high rate, as in when you want to incorporate it into other software. Having one piece of software build sentences to feed into an AI, then having to parse sentences back out to make use of it, only to end up with a shitty AI anyway, is dumb. Take the language out of AI and suddenly it's actually productive, and often way more racist, systematically so rather than by chance from the incoherent ramblings of GPT-3.
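To make the statistical-inference point concrete, here's a minimal sketch using scikit-learn. The feature names, the data, and the "good hire" label are all synthetic and hypothetical, purely to show the shape of it:

```python
# Plain statistical model over structured features -- no language layer.
# All features and labels below are synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical tabular features for a hiring screen:
# [years_experience, num_job_changes, skills_test_score]
X = rng.normal(size=(1000, 3))
# Synthetic labels loosely tied to the features, for demo purposes.
y = (X @ np.array([0.8, -0.5, 1.2]) + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Querying is just arithmetic on structured inputs, so other software
# can hammer it at a high rate -- no sentence building or parsing.
candidates = rng.normal(size=(5, 3))
print(model.predict_proba(candidates)[:, 1])  # probability of "good hire"
```

The query path is a single matrix multiply on structured inputs, which is exactly why it slots into other software, and exactly why any bias in the training data comes out systematically instead of randomly.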