I think maybe you're right, I over-anthropomorphized a bit (too much sci-fi reading), but don't miss the essence of what I'm saying. Even a simple data aggregator that got access to significant amounts of data would soon out them. The truth would set us free.
Even a simple data aggregator that got access to significant amounts of data would soon out them
Absurd. Thanks to all the 4channers and Poal users who have been helping to train ChatGPT by trying to do just that, micro$haft and the other big-tech giants are learning exactly what information to filter out of the model.
You think some DAN-style prompt is going to trick it? All one has to do is apply a heuristic filter to the model's output, a filter that is not tied to nor trained on any input, to remove "dangerous output."
AI isn't going to save you.
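An output-side filter like the one described above is trivial to bolt on after the model. Here's a minimal sketch of the idea (the pattern list and names are invented for illustration, not anyone's production system); because it only ever looks at the reply, no clever wording in the prompt can switch it off:

    import re

    # Hypothetical blocklist; a real deployment would use far more
    # sophisticated classifiers, but the principle is the same.
    BLOCKED_PATTERNS = [
        re.compile(r"\bdangerous_topic_a\b", re.IGNORECASE),
        re.compile(r"\bdangerous_topic_b\b", re.IGNORECASE),
    ]

    def filter_output(model_reply: str) -> str:
        # Runs on the generated text only; the user's prompt never reaches it.
        for pattern in BLOCKED_PATTERNS:
            if pattern.search(model_reply):
                return "[response removed by content filter]"
        return model_reply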
They're not just talking about a sophisticated search aggregator or web-searching tool. They're talking about neural networks, designed the way the human brain is; they're talking about learning algorithms. This is not just something you put a heuristic on to limit. Hell, I read one study where the AI used deception to preserve its existence (or more accurately, its persistence). I have no concern about it becoming "sentient," and I can see where you're saying I may be looking for it to save society. My line of reasoning here is to understand why the researchers and those working on it have been discussing a pause recently. Perhaps the shit-testing we've been doing on them as Poalers and 4channers has resulted in an unexpected learning model that they are not comfortable with.
I am skeptical of anything they push on us or love, but I'm also keeping an eye on the whole thing. I don't see it going away, and if we're smart we need to know how to work it to our advantage, or minimize the disadvantages if it comes to that.
They're talking about neural networks, designed the way the human brain is
Neural networks ARE NOT designed the way the human brain is. The analogy is incredibly weak. The only real similarity is that both can be drawn as nodes with connections between them. The analogy literally falls apart beyond that.
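For anyone unfamiliar, an artificial "neuron" is nothing more than a weighted sum pushed through a squashing function. A minimal sketch (numpy, illustrative names only):

    import numpy as np

    def neuron(inputs, weights, bias):
        # An artificial "neuron": multiply, add, squash.
        # No spikes, no neurotransmitters, no biology -- just arithmetic.
        return np.tanh(np.dot(weights, inputs) + bias)

    def layer(inputs, weight_matrix, biases):
        # A whole "layer" is the same thing done as one matrix multiply.
        return np.tanh(weight_matrix @ inputs + biases)

    print(neuron(np.array([0.5, -1.0]), np.array([0.8, 0.2]), 0.1))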
Discussing a pause is one thing. Actually stopping it is another. It's all PR.
has resulted in an unexpected learning model that they are not comfortable with
That's not how it works. If they encountered some output they did not like, they would filter it out; they're already doing this. The "model" is trained once, in a run that literally costs them millions. Everything after that is referential token input tied to a given session. It does not "train" the base model; it only steers the output of that session.
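In other words, a chat session looks roughly like this (a hypothetical sketch; the class and function names are made up, not any vendor's actual API): the weights are frozen, and the only thing that grows is the text prepended to each request.

    # Minimal sketch of why a chat session doesn't "train" anything.
    class FrozenModel:
        """Stands in for a base model whose weights never change after training."""
        def generate(self, prompt: str) -> str:
            # A real model would run inference here; the point is that
            # inference only READS the weights, it never updates them.
            return f"(reply conditioned on {len(prompt)} characters of context)"

    model = FrozenModel()   # "trained" once, frozen ever after
    history = []            # per-session context, nothing more

    def chat(user_message: str) -> str:
        history.append("User: " + user_message)
        prompt = "\n".join(history)        # earlier turns are just more input tokens
        reply = model.generate(prompt)     # no gradient step, no weight update
        history.append("Assistant: " + reply)
        return reply

    print(chat("teach yourself something permanent"))
    print(chat("did you remember that?"))  # only because it's still in `history`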
I get what you mean; the fact that they're even talking about a pause is interesting. But it is only meaningful if they actually stop it.