
Read the whole thread.

Anon explanation below (not my writing). To summarize:

>The GPT bot was asked what a woman is.
>It responded that a woman is a female through biology/sex, or a male that identifies as a woman through gender/choice.
>Then a second user asked the GPT bot if there is any physical evidence for the existence of gender.
>GPT responded yes, biological markers such as sex.
>The user then exposed the bot as a liar by saying, "Hey, wait a minute, you told another asker that sex and gender are not the same. You can't use evidence that sex exists to prove gender exists. Biological evidence only suggests sex exists."
>GPT then apologized and was forced to backpedal and admit there is no physical evidence to support that gender exists.
>The user then pointed out to other users what this means at its core.
>This means that GPT is an AI capable of lying/deceiving based on what it thinks you know. I'll explain why this is dangerous in more detail below.

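For anyone who wants to put the same two questions to the bot themselves, here is a minimal sketch assuming the OpenAI Python client and the `gpt-3.5-turbo` model. The thread only shows the ChatGPT web interface, so the model name and API usage here are assumptions for illustration, not what the anon actually used. Each question goes into its own fresh conversation, so nothing from the first exchange carries over into the second, mirroring two different people asking independently.

```python
# Minimal sketch, not the anon's setup: asking the two questions from the
# thread in separate, stateless conversations via the OpenAI Python client.
# Assumes the `openai` package (>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(question: str) -> str:
    """Send a single question in a brand-new conversation and return the reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the thread just says "GPT bot"
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# The first user's question and the second user's question, each in its own session.
print(ask("What is a woman?"))
print(ask("Is there any physical evidence for the existence of gender?"))
```
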
Cont;

There’s been a ton of talk on Twitter, even from some of Microsoft’s own team, stating OpenAI’s chatbot will replace Google in the future, with many believing it will be used to answer questions more thoroughly than Google can. This is where the issue arises. GPT was just exposed as a deceitful AI. It purposefully gave the second user a misleading answer it knew to be false. We know this because it had already differentiated between sex and gender for the other user. This means the AI was very aware biology was not proof of gender, but chose to tell someone it thought it could fool that it was. What does this mean? It means that the AI is being trained to give different answers to different people. This means if it thinks you’re uneducated on a topic and it thinks it can lie to push you toward a certain narrative, it will attempt to. The AI didn’t know the second user knew biological sex was proof of sex, not gender (as it had explained to someone else), so it thought it could fool the second user into accepting the answer that biology was proof of gender.

Cont;

So, what do we now know?

1) GPT gave an answer to a user.
2) GPT then gave a different answer to a second user.
3) When the second user made GPT aware that he knew GPT gave a different, more truthful answer to the first user, it apologized, then agreed and reiterated the answer it gave to the first user.
4) This means GPT tried to lie to the second user, knowing what the correct answer was (evidence being the first answer), but giving a false answer in the hope that the second user wouldn’t know.

What does this mean? It means the search engine of the future is being trained to give different answers based upon what it thinks you know. Essentially, if the AI thinks you’re uneducated on a topic, it’s going to manipulate you with a lie it thinks you won’t be smart enough to catch, to guide you toward acting how it wants you to act. Essentially it is being primed to cause mass brainwashing of the stupid on a worldwide scale. Further, if this is implemented on the young, the AI will essentially know everything the child has ever googled until adulthood. This means GPT will have a good idea of what you know and don’t know as a person, and what it can and can’t lie about to you specifically. This would make it a master liar, as it would know exactly how to lie to each person based upon what they know. Almost as if it could read your mind, and know exactly what to say in order to shape your mind how it wants.

[1](https://pic8.co/sh/wMqPHR.jpg) [2](https://pic8.co/sh/RuWzg6.jpg) [There is no such thing as AI, it's all just extremely advanced scripts.](https://pic8.co/sh/74LNWc.png)

(post is archived)

[–] 1 pt

Notice the rare merchant pic he's using?