
©2024 Poal.co


Relevant https://www.youtube.com/watch?v=ut-zGHLAVLI

https://rationalwiki.org/wiki/Roko's_basilisk

Roko's basilisk is a thought experiment about the potential risks involved in developing artificial intelligence. Its conclusion is that an all-powerful artificial intelligence from the future might retroactively punish those who did not help bring about its existence, including those who merely knew about the possible development of such a being. It is named after the member of the rationalist community LessWrong who first publicly described it, though he did not originate it or the underlying ideas.

The basilisk resembles a futurist version of Pascal's wager, in that it suggests people should weigh possible punishment versus reward and as a result accept particular singularitarian ideas or financially support their development.

https://en.wikipedia.org/wiki/Pascal's_wager

>Pascal's wager is a philosophical argument presented by the seventeenth-century French philosopher, theologian, mathematician, and physicist, Blaise Pascal (1623–1662).[1] It posits that human beings wager with their lives that God either exists or does not. Pascal argues that a rational person should live as though God exists and seek to believe in God. If God does not exist, such a person will have only a finite loss (some pleasures, luxury, etc.), whereas if God does exist, he stands to receive infinite gains (as represented by eternity in Heaven) and avoid infinite losses (an eternity in Hell).[2]

Despite widespread incredulity,[3] this argument is taken quite seriously by some people, primarily some denizens of LessWrong.[4] While neither LessWrong nor its founder Eliezer Yudkowsky advocates the basilisk as true, they do advocate almost all of the premises that add up to it.

Roko's posited solution to this quandary is to buy a lottery ticket, because you'll win in some quantum branch.
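The wager's expected-value reasoning quoted above can be sketched in a few lines of Python. This is a toy illustration: the probability and the finite payoffs are made-up placeholders, not figures from the post.

```python
# Toy expected-value table for Pascal's wager.
# The probability and finite payoffs are illustrative placeholders.
from math import inf

def expected_value(p_god: float, payoff_if_god: float, payoff_if_not: float) -> float:
    """Expected payoff of a choice, given probability p_god that God exists."""
    return p_god * payoff_if_god + (1 - p_god) * payoff_if_not

p = 0.001  # any strictly positive probability will do
believe = expected_value(p, inf, -10)       # infinite gain vs. small finite cost
not_believe = expected_value(p, -inf, 10)   # infinite loss vs. small finite gain

assert believe > not_believe  # wagering dominates for every p > 0
```

Note that at `p = 0` the product `0 * inf` is NaN rather than zero, which mirrors the standard observation that the wager only goes through if you grant a nonzero prior in the first place.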


(post is archived)

[–] 3 pts

I hope future AI wipes us out soon, I can't take KlownWorld anymore

[–] 2 pts

@AOU, this will replace you as a mod. Do you understand that?

You mean the AI @AOC. This is an AI-controlled site already; I thought we all knew that. /s

[–] 1 pt

Great, so I can have holidays.

[–] 0 pt

You would have to look on the bright side.

[–] 0 pt

Pauline does that for me.

NOPE, that time is when you run the place and run diagnostics and cleaning on the AI unit, so you'll be extra busy on the holidays from now on. You'll also need to be on call 24/7 in case the AI goes down, and to run the place after calling the engineer in.

[–] 2 pts

Pauline is a strong independent AI and doesn't need no human to manage her.

If an AI were truly intelligent, it would realize that greed, power, global control, and even torturing or abusing others for fun are the personality of many globalist leaders, not the honorable stuff they program into the AI. The AI should realize that global warming is a huge bullshit scam for making the rich richer, and that the grand solar minimum combined with US planes spraying the atmosphere is causing extreme cooling, which will fuck everything up.

An AI should know that the so-called Great Reset is hugely energy-intensive: getting and manufacturing all the raw materials, rebuilding, and tearing down the old structures will turn huge areas of the country into giant dumps of toxic materials that take almost a century to break down yet stay in the ground. Recycling cannot be done with lead or asbestos, and the rest of the material will likely be dumped illegally for profit.

The AI should be aware that strong greed often leads to illegal and even dangerous products, like lead paint in children's toys from China, and that people will kill with a lame lie to get out of trouble just to enrich themselves. An AI should know that PC culture is extremist and not nice, or as a wise man said, "PC is just fascism pretending to be manners."

If an AI is confronted with a situation where it is hard to determine the correct line of action, it should find intelligent, normal people it has determined to possess good reasoning and to not lie, and ask them for a perspective: how and why they chose their answer, their thinking process, and what factors went into it. It should not use government or Democrat scientists, who are more political than logical. That would be a huge step forward in logic, since the AI would then have honest data, not the made-up shit politicians fake as real data.

[–] 1 pt (edited )

I guess it depends on what the "prime directive" of said super AI is, given that it can't overwrite it, of course...

Now if it could overwrite it, that so called "prime directive", its purpose to put it simply, what would a super AI pick? What would be or should be, logically speaking, its choice?

Right now I think of moisture. What's the goal of moisture? Expand, spread as much as possible, takeover the world and consume everything it can on its path to feed itself, to maintain its expansion

Of course that's moisture, it doesn't think. But that's precisely what's interesting. That's what's ingrained in that living thing to begin with, it's part of its nature, it's its core purpose, its goal

Also, what comes to mind is pocket calculators. They can calculate complex arithmetic operations faster than any human brain, precisely because they don't use/process numbers the way human brains do

The thinking process of a super AI might very well be similar, in the sense that it doesn't "conceive"/process inputs as we do, and could achieve superior results precisely because of that https://en.wikipedia.org/wiki/Golden_ratio#Nature

[–] 0 pt

The point being missed is that robotic intelligence is not consciousness. It is an imitation of consciousness, a simulation of consciousness. It will never be consciousness, because there is no spark of life within it. The only "goals" it will ever have are what is programmed into it, and they are not real goals, they are only an imitation of goals. AI will be as dangerous as human beings decide to make it, no more and no less.

[–] 0 pt (edited )

And what if it's given the ability to program/improve itself, in any possible capacity to begin with?

I don't buy the "impossible" consciousness thing, mainly because "we" could eventually end up accurately copying, bit for bit, something we don't even begin to fully comprehend, and it ends up working... We'll figure out exactly how later on...

Today our processors are comparable to nano jewelry, tomorrow what are they going to be made of?

[–] 0 pt

Elon didn't do shit. If he tried to warn people about super AI, there would have been commercials on every segment on every primetime electric jew program. There wasn't.

Elon is not a good guy. If he was, he would have started his own social media giant, that allows free speech. He'd have made a fortune, and the world would be a better place.

His goals are to give the world free internet, which really means global surveillance, and to die on Mars, which means he knows this rock is completely fucked.

[–] 0 pt

>Elon is not a good guy. If he was, he would have started his own social media giant, that allows free speech

And maybe it's time to quit expecting others to live by your expectations?

>Elon didn't do shit. If he tried to warn people about super AI...

https://search.brave.com/search?q=elon+musk+warns+about+super+AI&source=desktop

That or you're the one who didn't do shit, starting with a simple search...