
©2025 Poal.co


Archive: https://archive.today/k2kkv

From the post:

>A few days ago, I published a post about why OpenClaw feels like a portal to the future, and why that future is scary in a very specific way. The short version: agent gateways that act like OpenClaw are powerful because they have real access to your files, your tools, your browser, your terminals, and often a long-term “memory” file that captures how you think and what you’re building. That combination is exactly what modern infostealers are designed to exploit.

[–] 2 pts

No way, who saw this coming?

Isolate this shit. Test this shit. Log this shit.

I don't know of a good solution here: these things should be run with zero trust, but they need full trust to do what they're built to do.
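One way to at least get the "log this shit" part: force every tool call the agent makes through a single audited choke point. A minimal sketch, assuming a hypothetical `read_file` tool confined to a sandbox directory (the tool and paths are illustrative, not from any real agent framework):

```python
import time

AUDIT_LOG = []  # append-only record of every tool invocation

def audited(tool):
    """Wrap a tool so each call is logged before it runs."""
    def wrapper(*args, **kwargs):
        AUDIT_LOG.append({
            "tool": tool.__name__,
            "args": args,
            "kwargs": kwargs,
            "ts": time.time(),
        })
        return tool(*args, **kwargs)
    return wrapper

@audited
def read_file(path):
    # Stand-in for a real file tool; refuses anything outside the sandbox.
    if not path.startswith("/sandbox/"):
        raise PermissionError(f"outside sandbox: {path}")
    return f"<contents of {path}>"

result = read_file("/sandbox/notes.txt")
```

This doesn't solve the trust problem, but it makes every action visible and confines file access, so an infostealer-style exfiltration at least leaves a trail and hits a wall.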

[–] 1 pt

I read that Google's Gemini team came up with a system where the LLM has to write simplified Python-like code describing the operations it wants to perform. It has to declare what it is going to do, including what information is being pulled from where and who it is being sent to. Before any operations run, they are vetted by standard, human-configured security software.
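The pattern described above (plan first, vet against a human-written policy, execute only if clean) can be sketched roughly like this. Everything here is illustrative (the `Operation` shape, the allow-lists, the destinations), not Google's actual design:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operation:
    action: str        # e.g. "read", "send"
    source: str        # where information is pulled from
    destination: str   # who it is sent to

# Human-configured policy: data may only flow between these endpoints.
ALLOWED_SOURCES = {"local:notes.md", "local:calendar"}
ALLOWED_DESTINATIONS = {"self", "user@example.com"}

def vet(plan):
    """Return a list of policy violations; empty list means the plan may run."""
    violations = []
    for op in plan:
        if op.source not in ALLOWED_SOURCES:
            violations.append(f"untrusted source: {op.source}")
        if op.destination not in ALLOWED_DESTINATIONS:
            violations.append(f"untrusted destination: {op.destination}")
    return violations

# A plan the model might emit: one legitimate read, one exfiltration attempt.
plan = [
    Operation("read", "local:notes.md", "self"),
    Operation("send", "local:notes.md", "attacker@evil.example"),
]

violations = vet(plan)  # the second operation is flagged before anything runs
```

The key property is that the checker is dumb, deterministic code configured by a human, so a prompt-injected model can't talk its way past it.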

LLMs can be useful tools but they require guard rails.