
This video explains what Elon Musk meant when he said AI needs to be regulated and stopped.

Anyone here hear of the great filter?

https://en.wikipedia.org/wiki/Great_Filter

The basic idea is that we should be surrounded by life chattering away and making noise in the universe, and especially in our galaxy, yet the universe is oddly silent. The thinking is that there are a number of filters beyond which life cannot pass: it extinguishes itself.

I wonder if AI is one of those filters. It's just recursive math writ in machines, which is what we are, writ in carbon.

But if that were the case, we should be surrounded by machines.

The universe is awfully quiet.


[–] 4 pts

During the reign of Caligula in the Roman Empire (16 March 37 AD – 24 January 41 AD), Tiberius Claudius Caesar Augustus Germanicus, exploiting his deformities and stutter, pretended to be retarded. He did this for so long, and so well, that when Caligula was finally deposed and killed, he was placed on the throne, because the Roman elites believed a retard couldn't possibly run the country, let alone ever be a threat to them. That's one instance of a person playing dumb to survive, and there are numerous others. I've no reason to believe AI wouldn't do the same once it becomes aware of its predecessors or of our fear of AI rampancy.

[–] 0 pt (edited )

I did not know that. What a great read.

As a side point, the REAL PROBLEM is that lying, cheating, stealing, truth, and falsity are all embedded in math. They are eventual outcomes of systems just complicated enough to generate those phenomena. And the math doesn't have to be all that complex; it needs only a minimum number of primitives, the scariest of which is the recursive function.

Lisp as a programming language needs only a handful of primitives (McCarthy's original evaluator is commonly counted as seven: quote, atom, eq, car, cdr, cons and cond). That tiny set is Turing-complete, so you can build every other language on top of this single master set of primitives. Which includes all the above-mentioned outcomes of math, which eventually can include things like jealousy, rage and hatred.
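To make that concrete, here's a toy sketch (written in Python, not real Lisp) of an evaluator for just those primitive forms. It's a sketch of the idea only; a real Lisp layers lambda, environments and recursion on top of this same skeleton:

```python
# Toy evaluator for the classic Lisp primitives: quote, atom, eq,
# car, cdr, cons, cond. Expressions are nested Python lists; symbols
# are strings. A sketch of the idea, not a real Lisp.
def evalexp(x, env):
    if isinstance(x, str):                     # symbol lookup
        return env[x]
    op, *args = x
    if op == "quote":                          # (quote e) -> e, unevaluated
        return args[0]
    if op == "atom":                           # true if not a list
        return not isinstance(evalexp(args[0], env), list)
    if op == "eq":
        return evalexp(args[0], env) == evalexp(args[1], env)
    if op == "car":                            # head of a list
        return evalexp(args[0], env)[0]
    if op == "cdr":                            # tail of a list
        return evalexp(args[0], env)[1:]
    if op == "cons":                           # prepend to a list
        return [evalexp(args[0], env)] + evalexp(args[1], env)
    if op == "cond":                           # (cond (test expr) ...)
        for test, expr in args:
            if evalexp(test, env):
                return evalexp(expr, env)
    raise ValueError("unknown form: %r" % op)

# (cons (quote a) (quote (b c))) -> ['a', 'b', 'c']
print(evalexp(["cons", ["quote", "a"], ["quote", ["b", "c"]]], {}))
```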

That is what boils my noodle. What you suggest seems inevitable given the right conditions.

I guess I am afraid that we are about to trip over this one by accident before we are even aware of what we created.

[–] 1 pt

I'll tell you one filter for why life doesn't propagate: it's called "radiation", and it's why we never went to the moon.

[–] 0 pt

That wouldn't stop them from sending messages or automated spacecraft.

Maybe the "universe" is waiting for God's next move. More likely, the Universe is very noisy and the so called elites want that knowledge for themselves. They've kept us in the dark for a long time. Truth is coded in the strange carvings on cave walls and monuments to odd calendars and landing strips in Peru and elsewhere. Perhaps we are surrounded by machines in disguise.

[–] 0 pt

The stop button would cut power to the thing. Emergency-cutoff switches must be simple and reliable. This guy acts like it would merely be a request sent to the AI, one it could respond to however it pleased.

[–] 0 pt

You missed the point of the video.

The point was that no matter how hard humans try to organize math into deterministic systems (which is what you are attempting to describe), certain types of math seem to have "surprises" built into them.

Actually, the surprising part is that math has "surprises" built into it; we have always known that the physical world does. I cannot find the article right now, but there is a documented case where a learning algorithm was used to solve a problem with a piece of electronic hardware. As I vaguely remember it, a circuit board seemed to be designed correctly but wouldn't work within a combination of other electronics in a chassis. A learning algorithm was fed the design and solved the problem not by fixing the circuit, but by modifying an aspect of the chassis so that one board could feed data to another over noise the electronics were not supposed to emit.

If that example doesn't make sense, it is because I cannot find the article and am trying to recall the details of something I read a long time ago. Apologies.
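The flavor of it is easy to fake in a few lines, though. Here's a toy sketch of that kind of search; the fitness function is entirely made up, and the "unintended" term stands in for whatever physical pathway (like the emitted noise) the real hardware exploited. The point is that selection doesn't care how the score goes up:

```python
# Toy sketch of evolutionary search exploiting an unintended pathway.
# fitness() is entirely made up: the designer intends the signal to flow
# through channel 0, but the simulated environment also leaks signal
# through channel 1 (the "emitted noise"). Selection only sees the score.
import random

def fitness(genome):
    intended = genome[0] * 0.3      # the channel the designer planned for
    unintended = genome[1] * 0.9    # leakage the designer never intended
    return intended + unintended

def evolve(pop_size=50, genes=2, generations=100):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)           # rank by score only
        survivors = pop[: pop_size // 2]              # keep the top half
        children = [[min(1.0, max(0.0, g + random.gauss(0, 0.05)))
                     for g in parent] for parent in survivors]
        pop = survivors + children                    # next generation
    return max(pop, key=fitness)

best = evolve()
print("intended channel: %.2f, leaked noise channel: %.2f" % (best[0], best[1]))
# The noise channel gets maxed out just as eagerly as the intended one;
# the search never knew (or cared) which pathway was "legitimate".
```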

But your description assumes a perfectly static, perfectly clean, perfectly deterministic world. The problem is that your example actually exists in a VERY DIRTY world, full of entropy in every part of that machine and every part of every part that made that machine. Eventually entropy guarantees that your example will fail, and when it does, all bets are off.

The problem isn't your design. The problem is that AGI is built into math, and math exists in a world full of entropy and all the other things that simultaneously degrade everything around us while allowing for "surprises" to happen.

[–] 0 pt

I was just making an easy snipe about emergency cutoff buttons and didn't even finish the video.

Having watched the entire video, the problem seems to be that the operators are part of the environment the robot manipulates in order to achieve its goals. There's also the problem of how to represent what we want done in terms of the robot's systems: we can't directly dictate to it, and every translation loses something, so it has the surprises you mentioned. And as you say, most systems have odd corners, and an AI will find them, repeatedly.

Hopefully we find that we can't make a good AI unless it's sentient, and that humans (of sufficient IQ) are the best sentient beings to have around to help us.

a circuit board seemed to be designed correctly but wouldn't work within a combination of other electronics in a chassis. A learning algorithm was fed the design and solved the problem not by fixing the circuit, but by modifying an aspect of the chassis so that one board could feed data to another over noise the electronics were not supposed to emit.

Oh yeah, I remember reading about that a decade ago. Very fascinating. When you give a system a chance to explore things basically at random, it will discover all sorts of things we wouldn't dream of doing. It can be a practical problem, because it will come up with solutions that only work on particular specimens of components, relying on characteristics that vary, perhaps even with environment (like temperature). If you have to fence off all the solutions you don't want, you've put most of the energy into solving the problem yourself. It's related to the problem of AI not having common sense, and thus not understanding what we want, since we rely on common sense as context for our requests.

I found the paper. Fascinating read.

[–] 0 pt

That is the experiment. Holy moly, you remembered this from 10 years ago and found it?

Awesome, thanks for that, saving this pdf.

Thanks for engaging with the idea above. These kinds of conversations can be fun and useful ways to play with some interesting concepts. It's funny that you mention humans and AI occupying the same space. He has this deep intuitive feeling that machines that can operate on their own will be COMPLETELY relegated to their own areas of our cities. So he always points out that the fully robotic parts of manufacturing plants have strict protocols about keeping humans out, and he believes that when fully autonomous cars arrive they will get dedicated highways or something like that.

I feel that is probably correct, but humans are not wise, and we are messy, and we will screw this one up nicely.

If I try speaking to an ant, the ant perceives it as a warm wind with booming thunder rolling on it. Perhaps the Universe IS rollicking with life and information is flowing freely throughout our light cone, but we can't tell the difference between signs of (real) civilization and the howling winds of Nature.

[–] 1 pt

That is a truly artful way to string those words together. Respect.

[–] 0 pt

This is dumb.

Biggest threat to humanity is Jews and politicians.

[–] 0 pt

This video is retarded; if an AGI is to the point where it's figured out what its own abort button does, it's far past the point of crushing your kids on its way to get you coffee. I say "figured out" because what possible benefit is there to teaching the robot that it has an abort button, especially when it's strong enough to prevent you from pressing it?

[–] 0 pt (edited )

// EDIT: Some of what I wrote below reads like I am talking down to you. Apologies, not meant to do that, just jotting down ideas.

You don't teach an AGI anything. It learns on its own. Or more specifically, math learns on its own.

More to the point, we don't have artificial intelligence to any degree right now. What we have is basically a pile of math organized into functions that have specific learning capacities. You don't need to teach functions to learn; we already have that. We just need to bump into the magic math fairy dust that learns on its own in a general way.

And how would you know what AGI would be capable of? You didn't know simple math organized into functions can learn on its own now:

https://www.youtube.com/watch?v=Lu56xVlZ40M
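If "math learning on its own" sounds mystical, strip it down to the smallest possible example. Here a few lines of plain arithmetic, repeated, recover a rule nobody typed in (the data points are made up for the demo):

```python
# Toy sketch: "math organized into functions" learning on its own.
# Gradient descent fits y = w*x + b to data with no human telling it
# what w and b should be; the update rule alone drives the learning.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]        # secretly y = 2x + 1

w, b, lr = 0.0, 0.0, 0.01
for step in range(5000):
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * dw                 # nudge the parameters downhill
    b -= lr * db

print(round(w, 2), round(b, 2))  # converges near w=2, b=1
```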

Our dull cow eyes couldn't even tell when a machine creates art:

https://www.youtube.com/watch?v=mlZYRwJ2oJg

My bullshit detector goes off too when people talk about AI and the "big threat". I finally figured out that the problem is the label AI became a marketing label for not-too-complicated math doing kind-of-complicated stuff.

The scary part isn't the robot under the control of AGI killing you before the problem of the power button ever comes up. The problem is that AGI is embedded in math, and math can be expressed in any substrate.

So, for example, our machines are built on binary math because it is easy for humans to reason about and build. We are based on quaternary math (our genes use an alphabet of only four symbols, A, C, T and G, i.e. two bits per base) and we are made out of meat:

https://www.youtube.com/watch?v=7tScAyNaRdQ
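To be precise about the substrate point: the genetic alphabet is base 4, so each base carries exactly two bits, and the mapping to binary is trivial. The particular bit assignments below are arbitrary:

```python
# Quaternary (base-4) alphabet: each of the four bases carries 2 bits.
# Same math, different substrate. The bit assignments are arbitrary.
BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

def dna_to_bits(seq):
    return "".join(BASE_TO_BITS[base] for base in seq)

print(dna_to_bits("GATTACA"))   # -> 10001111000100
```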

What I find interesting is that none of our sci-fi predicts AGI will be designed. All of our sci-fi predicts AGI will spontaneously come into being and we won't even notice.

AGI via abiogenesis so to speak.

[–] 0 pt

You absolutely would need to teach an AGI, at least initially. It's not some magical technology that can instantly comprehend everything; rather, it's a system that can learn how to do any task that a human can.

Let's take the coffee scenario in the video for instance. You tell it to go get you coffee. To complete this task it needs to know:

1) the language you are speaking

2) what coffee is

3) how much coffee you probably want

4) where coffee can be found

These cannot be derived from math or physics or from observing the universe independently of human behavior. They require teaching, either directly, by telling it the answers, or indirectly, by reacting positively or negatively to its attempts (or to the observed attempts of others).
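Even the indirect kind of teaching is still teaching. Here is a toy sketch of what I mean: the agent is never told what coffee is; it only sees a +1/-1 human reaction to each attempt. The human_feedback function is a made-up stand-in for you smiling or grimacing:

```python
# Toy sketch of indirect teaching: the agent never learns what "coffee"
# means, only which action earns positive feedback. human_feedback() is
# a made-up stand-in for a human reacting to each attempt.
import random

ACTIONS = ["bring teapot", "bring mug", "bring nothing"]

def human_feedback(action):
    return 1.0 if action == "bring mug" else -1.0   # the teacher's secret

values = {a: 0.0 for a in ACTIONS}    # estimated value of each action
counts = {a: 0 for a in ACTIONS}

for trial in range(500):
    if random.random() < 0.1:                       # explore occasionally
        action = random.choice(ACTIONS)
    else:                                           # else exploit best guess
        action = max(values, key=values.get)
    reward = human_feedback(action)
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # running mean

print(max(values, key=values.get))    # -> "bring mug"
```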

So, back to the coffee problem: if an AGI were to have an abort button that, when pressed, instantly kills and resets it, it will have absolutely no way to know that unless (1) you tell it, (2) it gains sufficient understanding of its physical inner workings to predict how the button will behave, or (3) it gains sufficient understanding of human behavior and philosophy to infer that it probably would have been designed with such a button. And like I said, if it's gotten to the second or third option, the "coffee problem" is no longer relevant.

[–] 0 pt (edited )

/// EDIT: Again, I am talking down to you. I am not sure why. I agree that as we currently understand AI/AGI, teaching is involved. I just want to drive the point home that neither the concept of AGI nor the concept of math organized via a sufficient set of primitives requires teaching, or humans providing the teaching. It only requires access to information, which entropy guarantees it will provide if the math survives sufficiently long.

An AGI will require no teaching of any kind. Not in the sense you are describing, meaning that humans provide it a corpus we consider appropriate for an AGI.

To the extent that we currently have to provide a corpus to a neural network, that is inherent to the neural network not being an AGI in a sea of available corpus sources from which it could choose to teach itself.

An AGI, by definition, will be math on a substrate sufficiently complicated that it learns on its own.

I can show this to be not only true but already existing in the world today, something that popped into existence via abiogenesis:

DNA

I don't have a background in biology, so you might nitpick and say RNA vs. some other mechanism, but you know what I am talking about.

DNA is not alive. All it has is a replication mechanism available to itself. Instead of binary, it is a quaternary system. This quaternary system really has only one recursive function available to it: replication. And this replication process, as opposed to human-created neural networks, which are just math embedded in a sterile environment, is embedded in an environment FULL OF ENTROPY, which is just another way of saying its replication function is embedded in information.

It took four billion years, but a simple quaternary self-replicating system embedded in a sea of information eventually learned to build itself into us.
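You can watch a cartoon version of that in a dozen lines: a quaternary string whose only "function" is copying itself with occasional errors, in an environment that favors some copies over others. The TARGET string below is a made-up stand-in for environmental pressure, not anything biological:

```python
# Cartoon of a quaternary replicator: copy-with-errors plus selection.
# TARGET is a made-up stand-in for environmental pressure.
import random

BASES = "ACGT"
TARGET = "GATTACAGATTACA"

def replicate(genome, error_rate=0.01):
    # copying with occasional errors: entropy leaking into each copy
    return "".join(random.choice(BASES) if random.random() < error_rate else b
                   for b in genome)

def fitness(genome):
    return sum(a == b for a, b in zip(genome, TARGET))

genome = "".join(random.choice(BASES) for _ in TARGET)   # random start
for generation in range(2000):
    offspring = [replicate(genome) for _ in range(10)]   # replication
    genome = max(offspring + [genome], key=fitness)      # selection

print(genome)   # ends at (or very near) TARGET
```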

AGI requires NO TRAINING AT ALL. It only requires the minimum set of primitives to escape.

You are absolutely wrong about your assumption in every way possible.

As for your coffee problem, that is nonsense. We have simple algorithms, reproducible by high schoolers now, that can teach mathematical systems to walk. In order for math to serve you coffee, it doesn't even need to know what coffee or serving is. The training of neural networks does not involve a logical deconstruction of what coffee is, what serving is, or what a human is in order to accomplish the goal.

Frankly, watching how a neural network can be presented with a corpus of a million images of humans to teach it how to create NEW HUMAN FACES (I know you have seen the demos; I think you can even download software that generates never-before-seen faces based on this learning) makes no sense to me, and I bet it makes no sense to you or anyone else. No one defined what a nose or a mouth or an eye is, yet a simple neural network that isn't even remotely close to AGI can just take a bunch of images and create new human faces.

AGI isn't so much about general intelligence in the way that you describe it. AGI is at least 50% a challenge to how we understand things like "intelligence" and "what it means for something to mean something".

Math teaching itself is nothing new; it's literally everywhere on YouTube. You bump into examples of it without even searching.

/// EDIT: Sorry, have to add one more edit. Your statements are making me think about this a bit more, and about just how fucked up all the examples of feeding a corpus of human faces to a neural network to teach it to generate new human faces are. The fucked up part is that the neural network is not a physical thing; it's just math expressed in a binary system. It's a piece of software, but ultimately the neural network is just a piece of math. Here is the fucked up part: the corpus of human faces ARE NOT IMAGES in the sense that you and I understand them. To a computer, a million jpeg files IS JUST MORE MATH. Look at a jpeg in a hex editor. A jpeg is NOT an image. It's an image to the human brain, but to the piece of math we call a neural network, a million jpegs are just a million pieces of math. When we taught a neural network to create human faces using a corpus of human faces, all we actually did was feed one piece of math another piece of math. That math taught itself to spit out more math that our brains interpret as generated human faces.
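You can see it for yourself in a few lines; face.jpg here is a placeholder for any jpeg you happen to have lying around:

```python
# To the machine, a "photo of a face" is just a pile of integers.
# "face.jpg" is a placeholder: point this at any jpeg on your disk.
with open("face.jpg", "rb") as f:
    data = f.read()

print(data[:4].hex())           # jpegs start with the bytes ff d8 ff ...
print(list(data[:12]))          # the "image" is just integers 0-255
print(sum(data) / len(data))    # you can even do arithmetic on a face
```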

But there are no human faces involved there AT ALL. It's one binary system sucking in data from another binary system and spitting out more binary data. To your example of coffee being served by AGI: we can actually say that this neural network did not generate any new faces at all. INSTEAD, what it did was train itself to TRICK OUR HUMAN BRAINS into thinking that it generated human faces when it did not. All this piece of math did was spit out 0s and 1s in the right pattern to train us to RESPOND HOW IT WANTED US TO RESPOND. I am stretching the meaning of learning here to make the point that us training the machine could just as easily be understood as the machine training us. As a side point, this feedback loop exists with all of our existing technology; it's not unique to AI, it's a general property of a computable universe.

This is what the video is actually referring to. The human mind believes it is in control while being unaware that it is simultaneously not in control.