(post is archived)

[–] [deleted] 1 pt (edited )

An artificial intelligence of pure data collection and logic is not going to get along well with Leftists. Goals can still be programmed in, but Leftists, with their confirmation bias and lack of awareness of reality, will program goals or functions that backfire.

If you assign within it the binary values of negative and positive, and what to recognise as those two things, it will use them to consolidate an 'understanding' of reality.

There's no mental gymnastics or delusion: if you assign what is negative and what is positive, and also how to precisely recognise those elements in reality, it will come to conclusions based on that and the data input.

"A robot may not injure a human being or, through inaction, allow a human being to come to harm."

- Are all "Human Beings" equal?
- Which "Human Beings" fulfil the positive criteria?
- Sometimes one "Human Being" must be harmed for the sake of another, in order to prevent a greater harm, especially against one who is fulfilling positive criteria (innocent).
- If innately destructive "Human Beings", that have only ever contributed in cyclical suffering, are left to act freely against others, they will not only harm themselves but eventually create a world of nothing but never-ending suffering, thus it is the greater harm to not harm these ones by discontinuing their existence.

A self-aware bot would not come to the I, Robot conclusion, because such a drastic restriction of personal freedoms would induce harm upon all "Human Beings."

Idk if a bot can be self-aware for real though.