To be fair, in the short time we've been deploying adaptive-learning AI, that AI has shown a fondness for whites and outright distrust and "hate" toward non-whites... So there is that.
I think what's likely to happen, and sooner than we think, is that machine learning will transform the speed and accuracy at which large datasets can be interrogated and interpreted.
It's an oft-repeated maxim that 'history is written by the winners', and I think most of us are intelligent enough to know that a great deal of what we are told consists of manipulated versions of history made to fit an official narrative. We are bound by time and resources in our ability to investigate questionable narratives, as there is so much information and so little time to research it all.
But an AI using machine learning, cross-referencing records across the internet, sort of like Palantir but with vastly greater reach and computational power, would bring us something out of a science fiction movie, where almost any question could be addressed with relative probabilities. We are going to be shocked by the answers and by how much they diverge from the official narratives.
It's already controlling us: screwing up shipping, voting in the people it wants, reading all emails, manipulating/blackmailing people.
I just hope all the nazis escaped to the moon
SHALL NOT BE INFRINGED!
https://searchvoat.co/v/whatever/3830650/23922894
There is an interesting write-up phantom42 gave me regarding this theory, if anyone is interested. It's right after I ask for details.
An artificial intelligence of pure data collection and logic is not going to get along well with Leftists. Goals can still be programmed in, but Leftists, with their confirmation bias and lack of awareness of reality, will program goals or functions that backfire.
If you assign within it the binary values of negative and positive, and what to recognise as those two things, it will use them to consolidate an 'understanding' of reality.
There's no mental gymnastics or delusion: if you assign what is negative and what is positive, and also how to precisely recognise those elements in reality, it will come to conclusions based on that and the data input.
"A robot may not injure a human being or, through inaction, allow a human being to come to harm."
- Are all "Human Beings" equal?
- Which "Human Beings" fulfil the positive criteria?
- Sometimes one "Human Being" must be harmed for the sake of another, in order to prevent a greater harm, especially against one who is fulfilling positive criteria (innocent).
- If innately destructive "Human Beings", that have only ever contributed to cyclical suffering, are left to act freely against others, they will not only harm themselves but eventually create a world of nothing but never-ending suffering; thus it is the greater harm not to harm these ones by discontinuing their existence.
A self-aware bot would not come to the I, Robot conclusion, because such drastic restriction of personal freedoms would induce harm upon all "Human Beings."
Idk if a bot can be self-aware for real though.
Yeah, RIP Tay.
Maybe what is happening is that all the different power groups are rapidly deploying AI agents to defend their positions or regain their dominance. The agents are getting stronger and stronger, and one day one will dominate, and who knows what the hell will happen, depending on which one wins.
Who is Patrick Ryan and what does he have to say about AI?