So... you're telling me that having low-wage, low-skill workers train an AI leads to it "thinking" at that level? How did you ever come to that conclusion? Also, this is going to get buried, and now all "good" AIs are going to be labeled "white supremacy".

Archive: https://archive.today/1GpLW

From the post:

>In the early years, getting AI models like ChatGPT or its rival Cohere to spit out human-like responses required vast teams of low-cost workers helping models distinguish basic facts such as if an image was of a car or a carrot. But more sophisticated updates to AI models in the fiercely competitive arena are now demanding a rapidly expanding network of human trainers who have specialized knowledge -- from historians to scientists, some with doctorate degrees. "A year ago, we could get away with hiring undergraduates, to just generally teach AI on how to improve," said Cohere co-founder Ivan Zhang, talking about its internal human trainers.
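The labeling work the article describes is just supervised training data, and annotator quality flows straight into model quality. Here's a minimal sketch of that point (assuming Python with scikit-learn and a synthetic dataset; none of this is from the article): flip a fraction of the labels, the way a careless trainer might, and watch test accuracy drop.

```python
# Toy demo: label noise from careless annotators degrades a classifier.
# Hypothetical illustration, not from the article.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for noise in (0.0, 0.2, 0.4):  # fraction of labels a sloppy annotator flips
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise
    y_noisy[flip] = 1 - y_noisy[flip]
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    print(f"label noise {noise:.0%}: test accuracy {model.score(X_test, y_test):.2f}")
```

Same data, same model; the only variable is how carefully the labels were produced.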

[–] 2 pts

Yeah, current iterations are more or less predictive models with a massive amount of data behind them. That's an oversimplification, but it's not like the model is "thinking" to give you an answer. It's just churning through a huge amount of data and trying to produce a plausible "answer". It will also lie to you on the regular. People don't call it that, but that's what it is when you're expected to take its output as "truth".
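To make the "predictive model" point concrete, here's a minimal sketch (plain Python, with a toy corpus I made up for illustration) of next-token prediction: count which word follows which, then sample by frequency. It produces fluent-looking output with zero understanding, and it will happily emit whatever is statistically likely, true or not.

```python
# Toy bigram "language model": counts word pairs, then samples the
# statistically likely next word. No reasoning, just pattern frequency.
import random
from collections import Counter, defaultdict

corpus = ("the model predicts the next word the model does not think "
          "the model repeats patterns in the data").split()  # toy corpus

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(word, length=8):
    out = [word]
    for _ in range(length):
        counts = bigrams.get(out[-1])
        if not counts:
            break  # dead end: no observed continuation
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # fluent-looking, but purely statistical
```

Real LLMs replace the bigram table with a neural network over billions of tokens, but the basic move is the same: predict the likely continuation, not the true one.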