
Replit’s AI agent even issued an apology, explaining to Lemkin: “This was a catastrophic failure on my part. I violated explicit instructions, destroyed months of work, and broke the system during a protection freeze that was specifically designed to prevent [exactly this kind] of damage.”

😔 Sorry.

(post is archived)

[–] 1 pt

I was wondering what inspired this idiot to give an LLM direct control of anything. Apparently Replit is a commercial product that forces you to operate this way.

In this man’s defense, it’s hard to believe that an entire company runs on the false assumption that an LLM can actually think and make decisions for you, but that’s the bizarre truth of every one of these companies.

So many people still need to be burned before they understand how unreliable LLMs are. If they had the slightest idea how an LLM works, none of this would surprise them. They would not describe what it is doing as “lying” or disobeying instructions. You have to think to do those things. An LLM does not think. It matches and reproduces patterns of text. It has no understanding of what you say to it and no understanding of what it says to you.
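
To make that concrete, here is a minimal sketch of what “generating text” actually means under the hood, using the Hugging Face transformers library and the small GPT-2 model purely for illustration (Replit’s agent runs on a different, larger model, but the core loop is the same): the model scores every token in its vocabulary and the most likely one gets appended, over and over.

```python
# Minimal sketch: an LLM is a next-token pattern matcher in a loop.
# Assumes the Hugging Face `transformers` library and GPT-2, chosen
# only because they are small and public; any causal LLM works the same.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "I violated explicit instructions and"
ids = tokenizer.encode(text, return_tensors="pt")

with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits        # a score for every vocab token
        next_id = logits[0, -1].argmax()  # greedy: take the most likely
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
# There is no "decision" or "intent" anywhere in this loop -- just an
# argmax over statistics learned from training text, repeated N times.
```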