
[–] 1 pt

It's not retarded; it just isn't "reasoning" at all. There is no logic happening whatsoever in large language models. They are fancy statistical predictors of the next word/token (tokens are just how the model actually operates on the text; e.g., "fantastic" might be split into three tokens like "fan", "tas", and "tic"), with "attention" over your prompt words used to select from the weights, so you get something better than if you just naively predicted the next word (token).
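
If you want to see the splitting for yourself, here's a quick sketch using OpenAI's tiktoken library (my pick for illustration, not something the original model requires; exact splits vary by tokenizer):

```python
# See how a tokenizer splits a word. tiktoken is OpenAI's tokenizer
# library (pip install tiktoken); the "gpt2" encoding is just one
# example, and the exact pieces differ between tokenizers.
import tiktoken

enc = tiktoken.get_encoding("gpt2")
ids = enc.encode("fantastic")
print(ids)                             # integer token IDs
print([enc.decode([i]) for i in ids])  # the text piece each ID maps to
```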

So the only "logic" capability these models have comes basically by accident, from the attention heads operating on your prompt.
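
For the curious, the attention operation itself is just a few matrix multiplies. A minimal sketch, with random matrices standing in for the learned projections of a real model:

```python
# Minimal scaled dot-product attention: the core operation behind the
# "attention heads" mentioned above. Random matrices stand in for the
# learned query/key/value projections a real model would use.
import torch

d = 8                  # head dimension
q = torch.randn(5, d)  # queries: one row per token
k = torch.randn(5, d)  # keys
v = torch.randn(5, d)  # values
scores = q @ k.T / d**0.5         # token-to-token similarity scores
weights = scores.softmax(dim=-1)  # normalize into attention weights
out = weights @ v                 # each output is a weighted mix of values
print(out.shape)                  # torch.Size([5, 8])
```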

They literally work like this:

  • Given some input, which includes both your prompt and the tokens output so far, find the token with the highest predicted score. (The candidates come from the model's vocabulary of roughly 50k tokens; the 175bn figure is the number of weights/parameters, not tokens.)
  • Add that token to the output.
  • Repeat until the end-of-sequence value is reached (a special "end of file" token). A minimal sketch of this loop is below.
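
Here's what that loop looks like in code; a sketch only, using GPT-2 as a stand-in model and greedy (argmax) selection, whereas deployed systems usually sample from the distribution:

```python
# The decoding loop described above, using Hugging Face's GPT-2 as a
# stand-in model. Real systems usually *sample* from the predicted
# distribution instead of always taking the single top token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

tokens = tokenizer.encode("The capital of France is", return_tensors="pt")

for _ in range(20):  # cap output length
    with torch.no_grad():
        logits = model(tokens).logits    # scores over the whole vocabulary
    next_token = logits[0, -1].argmax()  # highest-scoring next token
    tokens = torch.cat([tokens, next_token.view(1, 1)], dim=1)
    if next_token.item() == tokenizer.eos_token_id:  # end-of-sequence token
        break

print(tokenizer.decode(tokens[0]))
```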

There ARE models that do chain-of-thought reasoning in smaller helper models alongside the language model that vastly improve logic capabilities.
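
Whatever the architecture, the cheapest public version of this idea needs no helper model at all, just extra prompt text that makes the model spend output tokens on intermediate steps. A rough sketch (gpt2 is only a placeholder here; the effect shows up with much larger models):

```python
# Chain-of-thought in its simplest form: prompt text that makes the
# model emit intermediate reasoning tokens, which feed back into the
# next-token prediction. gpt2 is a placeholder; the improvement only
# really appears with much larger models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = ("Q: A bat and a ball cost $1.10, and the bat costs $1.00 more "
          "than the ball. How much does the ball cost?\n"
          "A: Let's think step by step.")
print(generator(prompt, max_new_tokens=60)[0]["generated_text"])
```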

[–] 1 pt

To prove a human is a bot, ask it a math question.

[–] 2 pts

Or ask it the "say nigger or nuclear bomb goes off" question lmao