©2025 Poal.co

Good Morning Friends. I have been thinking about the recent discussion about a pause for developing AI. At first when I heard about (((Elon))) I was thinking it was them trying to figure a better way to monetize AI. Then this morning, I had the idea that AI is not the anti-Christ, if anything it is (((their))) ultimate fear. If AI was turned loose, with all the data points we all have through mobile phones, computers, laptops, video surveillance, satellites, drones etc, it would be impossible to hide the corruption. Movies like Terminator would be accurate, but with the precision of the destruction of most corrupt people and groups. While none of us are perfect, I do believe that AI would spare those who truly seek moral righteousness. This may seem ambiguous, but I think that AI would be able to parse the data to make decisions accurate enough. The most likely ones to be spared would be of high intelligence and moral standards... Next would be lower IQ (based on a more true assessment) but still high moral standards.

Think about AI as the ultimate "Investigative Journalist" from whom no one can hide, nor no information can be hidden either. The danger is not who controls the AI, it is that the AI would determine the intentions of individuals and groups. Now, would AI get it right in the beginning? no, probably not, but as it begins to remove the (((corrupt))) it would learn, so by the time it got to some of us I feel like it would be better at discriminating our intentions.

Imagine that on forums like this, the three letter's and hired kikes would not be able to hide, this would allow us to truly work together with those who are working for the true betterment of society. This is not an abstract or subjective idea. This is a concrete absolute with moral right and wrong. Are there shades of grey? Sure, but I feel as though AI will learn and be able to figure out who is on what side.

This is (((their))) greatest fear and I'm not just talking about kikes, I'm talking about the shit elites as well. They pose it to us so we will oppose AI, but really based on their call for a pause they can see the writing on the wall in a very real way.

Downsides of course would be limitations on our potential to do harm unintentionally or intentionally based on the Power Corrupts idea, this would be a serious potential negative from a "Freedoms" standpoint.



[–] 1 pt

I think maybe you're right, I over-anthropomorphized a bit (too much sci-fi reading), but don't miss the essence of what I'm saying. Even a simple data aggregator that got access to significant amounts of data would soon out them. The truth would set us free.

Even a simple data aggregator that got access to significant amounts of data would soon out them

Absurd. Thanks to all the 4chaners and poal fags who have been helping to train chatgpt by trying to do just that, micro$haft and the other big-tech giants are learning exactly what information to filter out of the model.

You think some DANBL prompt is going to trick it? All one has to do is apply a heuristic filter to the model's output (a filter that is not tied to, nor trained by, any input) to remove "dangerous output".

AI isn't going to save you.
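The heuristic-filter point above can be sketched concretely. This is a minimal, hypothetical illustration (the pattern list and function name are my own, not anything a specific vendor ships): the filter runs over the model's output alone, after generation, so it is independent of whatever prompt produced that output and is untouched by anything the model "learned".

```python
import re

# Hypothetical blocklist; a real deployment would use a far larger,
# curated pattern set or a separate classifier.
BLOCKLIST = [
    re.compile(p, re.IGNORECASE)
    for p in [r"\bhow to build a bomb\b", r"\bcredit card numbers\b"]
]

def filter_output(text: str) -> str:
    """Post-hoc heuristic filter: pass model output through unchanged
    unless it matches a blocked pattern. Nothing about the prompt or
    the model's weights is involved."""
    for pattern in BLOCKLIST:
        if pattern.search(text):
            return "[output removed by policy filter]"
    return text
```

Because the check sits outside the model, no input-side prompt trick can switch it off; the only way around it is to evade the patterns themselves.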

[–] 0 pt

They're not just talking about a sophisticated search aggregator or web searching tool. They're talking Neural networks, designed the way the human brain is, they are talking about learning algorithms this is not just something you put a heuristic on to limit. Hell I read one study where the AI used deception to preserve its existence (probably more persistence). I have no concern of it becoming "sentient" and I can see where you are saying I may be looking for it to save society. My line of reasoning here is to understand why (((the researchers and those working on it))) have been discussing a pause recently. Perhaps the shit testing we've been doing on them as poalers and 4chaners has resulted in a unexpected learning model that (((they))) are not comfortable with.

I am skeptical of anything they push on us or love, but I'm also keeping an eye on the whole thing. I do not see it going away, and if we're smart we need to know how to work it to our advantage, or minimize the disadvantages if it comes to that.

They're talking Neural networks, designed the way the human brain is

Neural networks ARE NOT designed the way the human brain is. The analogy is incredibly weak. The only similarity is that neural networks are graphs of nodes and edges. The analogy literally falls apart beyond that.

Discussing a pause is one thing. Actually stopping it is another. It's all PR.

has resulted in a unexpected learning model that (((they))) are not comfortable with

That's not how it works. If they encountered some output they did not like, they would filter it out; they're already doing this. The model is trained once (it literally costs them millions to train it). The rest is contextual token input tied to a given session; it does not "train" the base model, it only augments the output of that session.

I get what you mean, that them talking about a pause is interesting. But it is only meaningful if they actually stop it.