
©2025 Poal.co


(post is archived)


When things like neural networks get advanced enough to properly simulate consciousness, at what point does a bunch of code stop being different from the bunch of neurons in the human brain?

The problem with this (and what Penrose proposed) is that it presupposes that physical processes cause consciousness. It may be that consciousness came before matter, and matter is a development of it, rather than the other way around.

It's important to properly define consciousness before making any claims about it at all. What are its parameters? Does it exist on a scale of development (i.e., are there varying degrees, from simpler to more complex forms)? Where does it reside? Is it holistic? Is it divisible?

If a neural network were to be designed that was conscious (by whatever definition we give it), would it be a property of the software, the hardware, or both?

Just some thoughts....