You missed the point of the video.
The point was that no matter how hard humans try to organize math into deterministic systems (which is what you are attempting to describe), certain types of math seem to have "surprises" built into them.
Actually, the surprising part is that math has "surprises" built into it. We have always known that the physical world has "surprises" built into it. I cannot find the article right now, but there is a documented case where a learning algorithm was used to solve a problem with a piece of hardware electronics. As I vaguely remember it, a circuit board seemed to be designed correctly but wouldn't work correctly in combination with other electronics in a chassis. A learning algorithm was fed the schematics, and it solved the problem not by fixing the circuit design but by modifying an aspect of the chassis design, allowing one circuit board to feed data to another over emitted noise the electronics were not supposed to emit.
If that example doesn't make sense, it's because I cannot find the article and am trying to recall the details of something I read a long time ago. Apologies. (A sketch of the kind of search loop I mean follows at the end of this comment.)
But your description assumes a perfectly static, perfectly clean, perfectly deterministic world. The problem is that your example actually exists in a VERY DIRTY world, full of entropy in every part of that machine and every part of every part that made that machine. Eventually entropy guarantees that your example will fail, and when it does, all bets are off.
The problem isn't your design. The problem is that AGI is built out of math, and math exists in a world full of entropy and all the other things that degrade everything around us while simultaneously allowing for "surprises" to happen.
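For what it's worth, that anecdote sounds like the evolved-hardware experiments, where an evolutionary search is scored on a physical device rather than on a simulation. Here's a minimal sketch of that kind of loop; the fitness measurement is simulated, and every name and parameter is mine, purely illustrative:

```python
import random

GENOME_LEN = 64        # hypothetical: bits configuring the device
POP_SIZE = 32
GENERATIONS = 100
MUTATION_RATE = 0.02

def evaluate(genome):
    # Stand-in for measuring a real device. Simulated here as similarity
    # to a hidden target plus measurement noise; in the real experiment
    # the score came from physical hardware, so the search was free to
    # exploit any physical effect (coupling, noise) that raised it.
    target = [i % 2 for i in range(GENOME_LEN)]
    score = sum(g == t for g, t in zip(genome, target))
    return score + random.gauss(0, 0.5)

def mutate(genome):
    # Flip each bit with small probability.
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=evaluate, reverse=True)
    survivors = population[:POP_SIZE // 2]        # keep the best half
    population = survivors + [mutate(random.choice(survivors))
                              for _ in survivors]

best = max(population, key=evaluate)
print("best fitness:", evaluate(best))
```

The key detail is that fitness is measured on the artifact itself rather than on a clean model of it, so nothing stops the search from leaning on out-of-spec behavior like emitted noise.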
I was just making an easy snipe about emergency cutoff buttons and didn't even finish the video.
Having watched the entire video, the problem seems to be that the operators are part of the environment the robot manipulates in order to achieve its goals (a toy illustration follows below). There is also the problem of how to represent what we want done in terms of the robot's systems: we can't dictate to it directly, and every translation loses something, so it carries the surprises you mentioned. And as you say, most systems have odd corners, and an AI will find them, repeatedly.
Hopefully we find that we can't make a good AI unless it's sentient, and that humans (of sufficient IQ) are the best sentient beings to have around to help us.
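On the operators-as-environment point above: here's a toy expected-utility comparison, with every number invented for illustration. If the objective rewards only task completion, the stop button is just another feature of the environment, and disabling it can dominate:

```python
# Toy comparison (all numbers made up). The agent values only task
# completion; the operator's stop button is just part of the environment.

P_INTERRUPT = 0.3    # chance the operator presses stop if the button works
TASK_REWARD = 10.0   # reward for finishing the task
DISABLE_COST = 0.1   # small effort cost to disable the button

def expected_reward(button_disabled: bool) -> float:
    if button_disabled:
        return TASK_REWARD - DISABLE_COST     # task always completes
    return (1 - P_INTERRUPT) * TASK_REWARD    # completes only if not stopped

print(expected_reward(False))  # 7.0
print(expected_reward(True))   # 9.9 -> disabling the button "wins"
```

Nothing in that objective mentions the button, yet acting on it falls straight out of maximizing it.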
> a circuit board seemed to be designed correctly but wouldn't work correctly in combination with other electronics in a chassis. A learning algorithm was fed the schematics, and it solved the problem not by fixing the circuit design but by modifying an aspect of the chassis design, allowing one circuit board to feed data to another over emitted noise the electronics were not supposed to emit.
Oh yeah, I remember reading about that a decade ago. Very fascinating. When you give a system a chance to explore things basically at random, it will discover all sorts of things we wouldn't dream of doing. It can be a practical problem, because it will come up with solutions that only work on particular specimens of components, relying on characteristics that vary, perhaps even with the environment (like temperature). And if you have to section off every solution you don't want, you've already put most of the energy into solving the problem yourself. It's related to the problem of AI lacking common sense and therefore not understanding what we want, since we rely on common sense as the context for our requests.
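One hypothetical way to picture that "sectioning off": every unwanted trick you discover becomes another penalty term bolted onto the objective, so the specification keeps growing until it nearly is the solution. A sketch, with all the names and predicates invented:

```python
# Hypothetical illustration: each unwanted behavior we find becomes
# another penalty term in the objective, and the list only grows.

def shaped_fitness(candidate, raw_score, violations):
    """raw_score: task performance. violations: predicates for every
    unwanted trick discovered so far."""
    penalty = sum(100.0 for check in violations if check(candidate))
    return raw_score - penalty

# Invented example predicates for the kinds of tricks mentioned above:
violations = [
    lambda c: c.get("relies_on_one_specimen", False),
    lambda c: c.get("only_works_at_room_temp", False),
]

print(shaped_fitness({"only_works_at_room_temp": True}, 50.0, violations))  # -50.0
```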
I found the article. Fascinating read.
That is the experiment. Holy moly, you remembered this from 10 years ago and found it?
Awesome, thanks for that, saving this PDF.
Thanks for engaging with the idea above. These kinds of conversations can be fun and useful ways to play around with some interesting concepts. It's funny that you mention humans and AI occupying the same space. He has this deep intuitive feeling that machines that can operate on their own will be COMPLETELY relegated to their own areas of our cities. So he always points out that fully robotic parts of manufacturing plants have strict protocols about keeping humans out, and he believes that when fully autonomous cars arrive they will be given dedicated highways or something like that.
I feel that is probably correct, but humans are not wise, we are messy, and we will screw this one up nicely.
> That is the experiment. Holy moly, you remembered this from 10 years ago and found it?
It was too interesting a thing to forget. With something like that, you might develop some crazy things. Suppose it were possible to look into the future.
What if you had something randomly generate a tone in the future, and you tried to train it to tell you whether the tone would occur? Given that it will use everything at its disposal (as evolution does), who knows what would happen. This is sort of like how they've trained AI to .
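As a sanity check on that thought experiment: if the tone really is generated randomly after the guess, no predictor can beat chance, so sustained above-chance accuracy would mean information is leaking into the setup rather than anyone seeing the future. A minimal simulation (entirely illustrative):

```python
import random

TRIALS = 10_000

def predictor(history):
    # Any strategy at all; here, just repeat the last outcome.
    # Against a truly random future bit, nothing beats 50%.
    return history[-1] if history else 0

history, correct = [], 0
for _ in range(TRIALS):
    guess = predictor(history)
    tone = random.randint(0, 1)  # the "future" event, generated AFTER the guess
    correct += (guess == tone)
    history.append(tone)

print(f"accuracy = {correct / TRIALS:.3f}")  # ~0.500; much higher means leakage
```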
> He has this deep intuitive feeling that machines that can operate on their own will be COMPLETELY relegated to their own areas of our cities. So he always points out that fully robotic parts of manufacturing plants have strict protocols about keeping humans out, and he believes that when fully autonomous cars arrive they will be given dedicated highways or something like that.
Makes sense for safety. You don't want people walking around inside a factory where the machines are operating unless they're stopped. If there are no people around, it just becomes an economic/civil issue if robots damage themselves, rather than a criminal one. Trains are mostly like this, having their own tracks that people shouldn't be wandering on.
I found a . Apparently the research stagnated because the manufacturer discontinued the chip and put encryption on all their newer ones, preventing free changes to the structure.