WelcomeUser Guide
ToSPrivacyCanary
DonateBugsLicense

©2024 Poal.co

260

Remember, things like GPT are like the cloud but probably worse. It is "someone else's computer". However, you can query that computer and sometimes get things it is not supposed to tell you without ever actually breaking into the system.

Archive: https://archive.today/LNme9

From the post:

>When security researcher Johann Rehberger recently reported a vulnerability in ChatGPT that allowed attackers to store false information and malicious instructions in a user’s long-term memory settings, OpenAI summarily closed the inquiry, labeling the flaw a safety issue, not, technically speaking, a security concern. So Rehberger did what all good researchers do: He created a proof-of-concept exploit that used the vulnerability to exfiltrate all user input in perpetuity. OpenAI engineers took notice and issued a partial fix earlier this month.

Remember, things like GPT are like the cloud but probably worse. It is "someone else's computer". However, you can query that computer and sometimes get things it is not supposed to tell you without ever actually breaking into the system. Archive: https://archive.today/LNme9 From the post: >>When security researcher Johann Rehberger recently reported a vulnerability in ChatGPT that allowed attackers to store false information and malicious instructions in a user’s long-term memory settings, OpenAI summarily closed the inquiry, labeling the flaw a safety issue, not, technically speaking, a security concern. So Rehberger did what all good researchers do: He created a proof-of-concept exploit that used the vulnerability to exfiltrate all user input in perpetuity. OpenAI engineers took notice and issued a partial fix earlier this month.

Be the first to comment!