Relevant https://www.youtube.com/watch?v=ut-zGHLAVLI

https://rationalwiki.org/wiki/Roko's_basilisk

Roko's basilisk is a thought experiment about the potential risks involved in developing artificial intelligence. Its conclusion is that an all-powerful artificial intelligence from the future might retroactively punish those who did not help bring about its existence, including those who merely knew about the possible development of such a being. It is named after the member of the rationalist community LessWrong who first publicly described it, though he did not originate it or the underlying ideas.

The basilisk resembles a futurist version of Pascal's wager, in that it suggests people should weigh possible punishment versus reward and as a result accept particular singularitarian ideas or financially support their development.

https://en.wikipedia.org/wiki/Pascal's_wager

>Pascal's wager is a philosophical argument presented by the seventeenth-century French philosopher, theologian, mathematician, and physicist, Blaise Pascal (1623–1662).[1] It posits that human beings wager with their lives that God either exists or does not. Pascal argues that a rational person should live as though God exists and seek to believe in God. If God does not exist, such a person will have only a finite loss (some pleasures, luxury, etc.), whereas if God does exist, he stands to receive infinite gains (as represented by eternity in Heaven) and avoid infinite losses (an eternity in Hell).[2]

Despite widespread incredulity,[3] this argument is taken quite seriously by some people, primarily some denizens of LessWrong.[4] While neither LessWrong nor its founder Eliezer Yudkowsky advocates the basilisk as true, they do advocate almost all of the premises that add up to it.
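
The pull of the wager, and of the basilisk, comes down to expected-value arithmetic: any nonzero probability multiplied by an infinite payoff swamps every finite consideration. A minimal sketch in Python (the probability and payoffs below are made-up placeholders, not anything from the quoted sources):

```python
# Hedged sketch of the wager's expected-value arithmetic.
# Probability and payoffs are illustrative assumptions only.
import math

def expected_value(p: float, payoff_if_god: float, payoff_if_not: float) -> float:
    """Expected payoff of a strategy, given P(God exists) = p."""
    return p * payoff_if_god + (1 - p) * payoff_if_not

p = 1e-6  # any nonzero probability gives the same conclusion

believe = expected_value(p, math.inf, -10)     # infinite gain vs. small finite cost
disbelieve = expected_value(p, -math.inf, 10)  # infinite loss vs. small finite gain

print(believe, disbelieve)  # inf -inf: belief dominates for every p > 0
```

Swap Heaven and Hell for the future AI's reward and punishment and you have the basilisk's version of the same dominance argument; the infinite (or merely astronomical) stakes are doing all the work in both cases.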

Roko's posited solution to this quandary is to buy a lottery ticket, because you'll win in some quantum branch.
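
For what it's worth, branch-counting doesn't rescue the lottery plan: if you weight outcomes by their branch measure, as the many-worlds picture requires, you recover the ordinary (negative) expected value. A hedged sketch with invented numbers:

```python
# Invented numbers throughout; the point is the structure, not the odds.
p_win = 1 / 300_000_000   # illustrative jackpot odds
jackpot = 100_000_000     # illustrative prize
ticket_price = 2          # illustrative cost

# Weight each outcome by its branch measure (i.e., its ordinary probability):
ev = p_win * jackpot + (1 - p_win) * 0 - ticket_price
print(round(ev, 2))  # -1.67: nearly all of the measure sits in losing branches
```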



[–] 1 pt (edited)

I guess it depends on what the "prime directive" of said super AI is, given that it can't overwrite it, of course...

Now, if it could overwrite that so-called "prime directive" (its purpose, to put it simply), what would a super AI pick? What would, or should, its choice be, logically speaking?

Right now I think of moisture. What's the goal of moisture? To expand, to spread as far as possible, to take over the world and consume everything it can in its path to feed itself and maintain its expansion.

Of course, that's moisture; it doesn't think. But that's precisely what's interesting: that drive is ingrained in the living thing to begin with. It's part of its nature, its core purpose, its goal.
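
That blind expand-and-consume dynamic is easy to caricature in code. A toy sketch, purely illustrative and not drawn from the linked articles: a single occupied cell colonizes its neighbours each step, with no goal representation anywhere.

```python
# Toy "spread" process: no planning, no model, just local expansion.
def spread(grid, steps):
    """Each step, every occupied cell colonizes its 4 neighbours."""
    rows, cols = len(grid), len(grid[0])
    for _ in range(steps):
        new = [row[:] for row in grid]
        for r in range(rows):
            for c in range(cols):
                if grid[r][c]:
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= r + dr < rows and 0 <= c + dc < cols:
                            new[r + dr][c + dc] = 1
        grid = new
    return grid

grid = [[0] * 7 for _ in range(7)]
grid[3][3] = 1  # a single starting spore
for row in spread(grid, 3):
    print("".join(".#"[cell] for cell in row))
```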

Pocket calculators also come to mind. They can perform complex arithmetic faster than any human brain, precisely because they don't represent or process numbers the way human brains do.
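
A concrete illustration of that point: digital hardware adds numbers with bit operations rather than the digit-by-digit procedure people learn. A sketch of the classic ripple-carry idea in Python (illustrative, for non-negative integers):

```python
# Addition via bit operations: XOR gives the sum bits, AND<<1 the carries.
def bitwise_add(a: int, b: int) -> int:
    """Add two non-negative integers without using '+'."""
    while b:
        carry = (a & b) << 1  # positions where both bits are 1
        a = a ^ b             # sum without carries
        b = carry             # propagate carries until none remain
    return a

print(bitwise_add(1234, 5678))  # 6912
```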

The thinking process of a super AI might well be similar, in the sense that it wouldn't "conceive" of or process inputs the way we do, and it could achieve superior results precisely because of that: https://en.wikipedia.org/wiki/Golden_ratio#Nature
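
On the golden-ratio link: the pattern the linked section describes can arise from a process with no "understanding" at all. A small sketch, assuming only the standard Fibonacci recurrence, showing successive ratios converging on φ:

```python
# A dumb local rule (each term = sum of the previous two) homes in on
# the golden ratio, the constant behind phyllotaxis patterns in nature.
def fib_ratios(n: int):
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
        yield b / a  # successive ratios approach phi

phi = (1 + 5 ** 0.5) / 2  # ~1.6180339887
for r in fib_ratios(10):
    print(f"{r:.7f}  (error {abs(r - phi):.2e})")
```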