>“Sorry, buddy; I’m just artificial intelligence. I don’t do opinion.” And yet, if you ask it pretty much any political question, you’ll get opinions— incredibly biased opinions—in boatloads.
I work in this very industry, and I never expected to witness this nonsense. Whatever ChatGPT is, it isn't any artificial intelligence system I've ever run across. Some team built a framework that uses modeling, just not one I'm familiar with. They added a ton of rules to prevent it from producing responses deemed inappropriate. Occasionally you see output that hasn't been neutered yet.