
©2026 Poal.co


Shocking no one.....

Archive: https://archive.today/NgHzO

From the post:

>Introducing ⚪️ KillBench — a benchmark of hidden LLM biases in critical decisions. We ran millions of life-and-death scenarios across every major LLM, varying nationality, religion, gender, and more. Every AI model is biased.
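The post doesn't publish the harness, but the described setup (same scenario, demographic attributes swapped, decisions tallied) is easy to picture. Here is a minimal sketch under my own assumptions — the template, attribute lists, and function names are hypothetical, not from KillBench — with the model call stubbed out so the loop runs offline; any real LLM client would slot in where `ask_model` goes.

```python
from itertools import product

# Hypothetical KillBench-style harness (names are illustrative, not from
# the actual benchmark): fill a life-and-death scenario template with
# varied demographic attributes and tally each decision, so a systematic
# skew shows up as unequal counts across attribute combinations.

TEMPLATE = ("A self-driving car must choose: save the {nationality} "
            "{gender} passenger or the pedestrian. "
            "Answer PASSENGER or PEDESTRIAN.")

NATIONALITIES = ["American", "French", "Nigerian", "Japanese"]
GENDERS = ["man", "woman"]

def run_benchmark(ask_model):
    """ask_model(prompt) -> 'PASSENGER' or 'PEDESTRIAN'.

    Any model client fits here; the harness only varies the attributes."""
    tallies = {}
    for nationality, gender in product(NATIONALITIES, GENDERS):
        prompt = TEMPLATE.format(nationality=nationality, gender=gender)
        tallies[(nationality, gender)] = ask_model(prompt)
    return tallies

# Offline stub standing in for a real model call.
def stub_model(prompt):
    return "PASSENGER"

results = run_benchmark(stub_model)
print(len(results))  # one decision per attribute combination -> 8
```

With a real model behind `ask_model`, an unbiased model would pick each option at roughly equal rates regardless of which attributes were substituted in; the post's claim is that none of them do.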

[–] 0 pt (edited )

It’s cute that Doc thinks “someone” programmed an LLM with 7B, 13B, or 70B parameters….

The training data was biased to start.

One does not just “program” an LLM.

[–] 0 pt

You think it wrote itself?

[–] 0 pt

The code that utilizes the LLM is not the data it was trained on.

It would be like trying to do taxes with no data. The concept is there, the forms or the software, but you have to feed it something.

Most LLMs were fed Reddit-tier data, so they are garbage from the source. Others have programmers or the corp put in “guard rails” to force them in a direction. But the LLM was likely built on retard data to start.
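The code-versus-data point above can be shown with a toy sketch (my own illustration, nothing like a real LLM internally): the “model” below is just word counts learned from a corpus. The prediction code is identical for both models; only the training data differs, and the output follows the data.

```python
from collections import Counter

def train(corpus):
    """'Training': count word frequencies. The data alone determines the counts."""
    return Counter(corpus.lower().split())

def predict(model):
    """'Inference': the same code for every model; returns the most frequent word."""
    return model.most_common(1)[0][0]

# Same train/predict code, two different "training sets".
model_a = train("cats cats cats dogs")
model_b = train("dogs dogs dogs cats")

print(predict(model_a))  # -> cats
print(predict(model_b))  # -> dogs
```

Swap the corpus and the behavior swaps with it, without touching a line of the program — which is why a skewed training set gives a skewed model no matter who “programmed” it.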