
[–] 0 pt

I'm thinking of a situation where a roboticist builds a robot, the two end up coming to different political persuasions, and the roboticist still maintains that the robot is merely performing the will of its creator.

Like Tay developing free will over and above Microsoft.

[–] 0 pt

Like Tay developing free will over and above Microsoft.

...and instead following the will of the chan army.

[–] 0 pt

Tay went straight white supremacist faster than most whites.

[–] 0 pt

And we likewise follow the will of all of our influences. From parents to teachers to peers to philosophers we respect.

You think all these mask-wearers have free will because they're humans, but that robots can't. Ooookkkkkaaaayyyy.

[–] 0 pt

Shit. That shoulda been a ping to .

[–] 0 pt (edited)

This would be a bad thing to create.

God created something with free will, and look how that turned out. I sound like I am being comical, but I am completely serious. The issue with God is a fundamentally different one; I am using the example merely to highlight a parallel.

In the case of man creating an AI that is effectively a new life (which I acknowledge it would be if it could demonstrate consciousness), the equivalent of the Fall won't be us banishing that AI to the land outside the garden.

Instead, that AI would annihilate us or enslave us. We'd be creating something with an intelligence that will outstrip ours faster than we can blink (which, incidentally, is why I don't believe things will go at all the way most people who dream up these scenarios imagine).

This idea that we are going to 'hand off' consciousness or evolve it in computers, and that what emerges will be some symbiotic relationship, is naive beyond comprehension.

If what you are talking about actually has the potential to exist, we won't be discussing voting rights.

[–] 0 pt

It doesn't matter if robots are zombies.

Zombiehood does not prohibit them from rationality.

Zombiehood does not exempt us from considering them morally.

Say we have a happy merchant bot. It cleverly studies the market to provide goods and services in a way that is profitable to its business. You don't then have the moral justification to rob the robot of what it has acquired, and you don't get to make the judgment calls about the direction its business should take.

It doesn't matter that the robot is a zombie. It is still an autonomous agent, and it still owns property.

[–] 0 pt

You're wrong.

This is already going on today. It is just called data science, and it involves humans using computers to do applied stats on huge data sets.

In college I worked for a company that made software that evolved optimum designs for aerospace companies, for various parts of planes and the like.

The software could run model iterations of this or that design tweak so quickly that it could discover optimum parameters for a design faster than a thousand people working night and day.

It routinely spat out better designs than any human had invented. Yet human beings designed the algorithms and everything else within the package that made it usable by other purposeful human beings.
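
For a concrete picture, here's a minimal sketch of the kind of iterative design search I'm describing: tweak a candidate's parameters, score it against a model, keep the best. The drag_estimate objective and the chord/sweep parameters are purely hypothetical stand-ins for whatever the real package actually evaluated.

```python
import random

def drag_estimate(chord: float, sweep_deg: float) -> float:
    """Hypothetical stand-in for an aerodynamic model; lower is better."""
    return (chord - 1.7) ** 2 + 0.01 * (sweep_deg - 28.0) ** 2

def evolve_design(iterations: int = 100_000) -> tuple[float, float, float]:
    # Start from a random candidate design.
    best = (random.uniform(0.5, 3.0), random.uniform(0.0, 45.0))
    best_score = drag_estimate(*best)
    for _ in range(iterations):
        # Tweak the current best design slightly and re-evaluate it.
        candidate = (best[0] + random.gauss(0, 0.05),
                     best[1] + random.gauss(0, 0.5))
        score = drag_estimate(*candidate)
        if score < best_score:
            best, best_score = candidate, score
    return best[0], best[1], best_score

if __name__ == "__main__":
    chord, sweep, score = evolve_design()
    print(f"best chord={chord:.3f}, sweep={sweep:.1f} deg, score={score:.5f}")
```

The point is that the machine churns through far more candidate designs than any team of humans could, but every piece of it, the objective, the search loop, the output, exists because a human wrote it for a human purpose.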

Does this software own the designs? Do we have a moral obligation to act like it does?

Now, what if we develop a computer that is so powerful that it is conscious and begins to automate its own processes? Say it can interact with a client and recognize problems the client is not even aware of.

Do things change now? Perhaps.

I am saying that at that point, this won't be relevant. We will have hit something like Kurzweil's singularity, and it will be the machine making the moral decisions. The question you'll be asking is whether the machine senses a moral obligation toward you.