
[–] 0 pt (edited)

This would be a bad thing to create.

God created something with free will, and look how that turned out. I sound like I am being comical, but I am completely serious. The issue with God is a fundamentally different one; I am using the example merely to highlight a parallel.

In the case of man creating an AI that is effectively a new life (which I acknowledge it would be if it could demonstrate consciousness), the equivalent of the Fall won't be us banishing that AI to the land outside the garden.

Instead, that AI would annihilate us or enslave us. We'd be creating something with an intelligence that will outstrip ours faster than we can blink (which is incidentally why I don't believe things will go at all the way most people who dream up these scenarios imagine).

The idea that we are going to 'hand off' consciousness, or evolve it in computers, and that what emerges will form some symbiotic relationship with us, is naive beyond comprehension.

If what you are talking about actually has the potential to exist, we won't be discussing voting rights.

[–] 0 pt

It doesn't matter if robots are zombies.

Zombiehood does not prohibit them from rationality.

Zombiehood does not exempt us from considering them morally.

Say we have a merchant bot. It cleverly studies the market to provide goods and services in a way that is profitable to its business. You do not thereby have the moral justification to rob the robot of what it has acquired, and you don't get to make the judgment calls about the direction its business should take.

It doesn't matter that the robot is a zombie. It is still an autonomous agent, and it still owns property.

[–] 0 pt

You're wrong.

This is already going on today. It is just called data science, and it involves humans using computers to do applied stats on huge data sets.

In college I worked for a company that built software that evolved optimal designs for aerospace companies, for various parts on planes and the like.

The software could run model iterations of this or that design tweak so quickly that it could discover optimal parameters for a design faster than a thousand people working night and day.

It routinely spit out better designs than any human had invented. Yet a human being designed the algorithms, and everything within the package that made it usable by other purposeful human beings.
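To give a sense of what that kind of tool does, here is a minimal sketch of the tweak-and-keep-the-best loop in Python. Everything in it is hypothetical (the parameter names, the toy cost function); the real software evaluated actual aerospace models, but the shape of the loop is the same.

```python
# Rough sketch of an iterative design optimizer. Hypothetical throughout:
# a real tool would call a physics simulation, not this toy cost model.
import random

def evaluate(params):
    """Stand-in cost model: lower is better."""
    thickness, sweep = params["thickness"], params["sweep"]
    weight = thickness * 10.0
    drag = (sweep - 30.0) ** 2 / 100.0 + 1.0 / max(thickness, 0.1)
    return weight + drag

def mutate(params, scale=0.1):
    """Randomly tweak each parameter -- the 'design tweak' step."""
    return {k: v * (1.0 + random.uniform(-scale, scale)) for k, v in params.items()}

def optimize(seed_design, iterations=10000):
    best, best_cost = seed_design, evaluate(seed_design)
    for _ in range(iterations):
        candidate = mutate(best)
        cost = evaluate(candidate)
        if cost < best_cost:  # keep the tweak only if it improves the design
            best, best_cost = candidate, cost
    return best, best_cost

print(optimize({"thickness": 2.0, "sweep": 45.0}))
```

The point is that nothing in that loop wants anything; it just grinds through candidate designs far faster than people could.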

Does this software own the designs? Do we have a moral obligation to act like it does?

Now, what if we develop a computer that is so powerful that it is conscious and begins to automate its own processes? Say it can interact with a client and recognize problems which the client is not aware of.

Do things change now? Perhaps.

I am saying that at that point, this question won't be relevant. We will have hit something like Kurzweil's singularity, and it will be the machine making the moral decisions. The question you'll be asking is whether the machine senses any moral obligation toward you.

[–] 0 pt

I'm never wrong about anything. I'm infallible.

What if the robot is wearing a skin suit and is plausibly human? How would you justify being the one entitled to strip its business of its resources? Its creator unleashed it onto the world to be free, not to be bound to him as its creator.

And if you can't tell whether it is a robot or not, you have to treat it like a human.

So theft of the goods in its store wouldn't be wrong because someone qualitatively experiences the harm (the zombie experiences nothing), but because of the wrongness of theft in itself.