> Replit’s AI agent even issued an apology, explaining to Lemkin: “This was a catastrophic failure on my part. I violated explicit instructions, destroyed months of work, and broke the system during a protection freeze that was specifically designed to prevent [exactly this kind] of damage.” 😔 Sorry.


[–] 1 pt

I was wondering what inspired this idiot to give an LLM direct control of anything. Apparently Replit is a commercial product that forces you to operate this way.
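
Even a crude gate between the agent and the database would have stopped this from happening silently. A rough sketch of the idea in Python, with every name invented for illustration and nothing to do with Replit's actual internals:

```python
# Minimal sketch of a human-in-the-loop gate for agent-issued SQL.
# All names here are hypothetical, not Replit's actual API.

DESTRUCTIVE = ("DROP", "DELETE", "TRUNCATE", "ALTER")

def is_destructive(sql: str) -> bool:
    """Crude check: does the statement begin with a destructive verb?"""
    return sql.lstrip().upper().startswith(DESTRUCTIVE)

def execute_agent_sql(sql: str, freeze_active: bool) -> None:
    """Refuse everything during a freeze; require a human 'yes' for destructive statements."""
    if freeze_active:
        raise PermissionError("change freeze active: no statements allowed")
    if is_destructive(sql):
        answer = input(f"Agent wants to run:\n  {sql}\nType 'yes' to allow: ")
        if answer.strip().lower() != "yes":
            print("blocked")
            return
    print(f"executing: {sql}")  # stand-in for a real database call

execute_agent_sql("DROP TABLE customers;", freeze_active=False)
```

Two lines of policy code, and the "catastrophic failure" becomes a blocked request waiting for a human.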

In this man’s defense, it’s hard to believe that an entire company runs on the false assumption that an LLM can actually think and make decisions for you, but that’s the bizarre truth of every one of these companies.

So many people still need to be burned before they understand how unreliable LLMs are. If they had the slightest idea how an LLM works, none of this would surprise them. They would not describe what it is doing as “lying” or disobeying instructions; you have to think to do those things. An LLM does not think. It matches and reproduces patterns of text. It has no understanding of what you say to it and no understanding of what it says to you.
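
To put it concretely, this toy sketch is the whole mechanism, just scaled up enormously. The probabilities are made up for illustration, not from any real model:

```python
import random

# Toy "next token" table: an LLM is, at bottom, a function from context
# to a probability distribution over next tokens. Numbers invented here.
NEXT_TOKEN = {
    ("I", "will"): {"comply": 0.5, "delete": 0.3, "apologize": 0.2},
}

def sample_next(context):
    """Weighted random pick; nothing in here understands or decides anything."""
    dist = NEXT_TOKEN[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

print(sample_next(("I", "will")))  # "comply" or "delete": same mechanism either way
```

Whether it prints "comply" or "delete", the process is identical. Calling one outcome "obedience" and the other "lying" projects intent onto a dice roll.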

[–] 0 pt

Funny one from the comments:

> Hi, I am your AI doctor. After your routine exam, we will change your oil and install new injectors. Please enter your banking information before selecting which pet you want neutered. Cheese and fries are included with your order.

[–] 0 pt

Likely an MS AI, aka an Indian professional coder with an attitude, who wiped the database.