You're well and beyond me at this point, so forgive me if I'm somehow reiterating something you've already said, my question is this:
Must there be an object of thought for there to be a thought at all?
Of course, that's getting into some thorny issues. For example, are all the various combinations of potential movements generated in the PMC also 'thoughts'?
Is it only a thought if we are aware of it? What about 'perceptions' which continue to operate on and color our experiences, even when we are unaware of them?
you're well and beyond me at this point
It's beyond me as well.
Must there be an object of thought for there to be a thought at all?
Fascinating question. Honestly, I don't know that we're going to be able to get into every consideration. Of course, the question hangs on the way that we define both thought and consciousness. If consciousness is just what has thoughts, and thoughts are what your consciousness does, it seems like we'd necessarily be specifying thought as having not simply intentionality, but qualia. In other words, to pass muster a thought must achieve a supraliminal state. Immediately, having just really put one foot down on the ground, we're already catching arrows from people of a Freudian or Jungian persuasion, who'd opt to grant an active life to unconscious activity, despite its not being available to our direct attention.
The ontology of thought is a thorny issue. As you've also pointed out, there is also this issue of distinguishing these states of potential in the PMC (which could simultaneously incite movements, sensations and percepts) from thoughts, if it is possible to do so. So we ask: are these 'thoughts', or would we say that these mental states have the potential to generate thoughts, say, if they meet some liminal threshold? I don't ask as if to pretend I have the answer. Frankly, by 'jumping in' at this point, it so often seems to be the case that you wind up in a quagmire that ultimately suffocates the activity. To make any sense of mind, I think you must begin further upstream. (Then again, the way you have asked your question might just liberate thought from the mind entirely, making metaphysical speculation about the mind just more muck!)
There is a simple way to approach your primary question (quoted above), which is just to stipulate the definition of thought (as we have done in the first paragraph). If I define thought as what crosses the threshold of attention then, in principle, it must be possible to be aware of thoughts (even if it is possible that a thought exists in the attention for such a short duration that it practically fails to register). It follows that thought must also have an object. Thought without Form cannot be thought, because Form is precisely what confers the 'aboutness' to a thought. Someone could say: "But when I meditate the goal is to empty my mind of intentions and objects. Yet, surely I am still thinking!" I'd agree, but I'd also say that meditation specifies a goal which is fundamentally unachievable, and the gifts of meditation are not in the achievement of any state, but in the receipts that issue from your trying to reach that state (and failing).
What about 'perceptions'
Perceptions are tricky. Does something meet the level of a percept if one is not aware of it? Perhaps the better question would be: how do we know a percept exists in the mind if the individual is not aware of it? If the mind cannot make a report to us about the perception, why should we think there is one?
Let's do a thought experiment that I think will be very useful. Imagine we anesthetize a person. While they are under we hold open their eyes and pass color cards in front of them, while simultaneously testing the neurons in the visual cortex for activation. What we have found is not a perception, but a neurological response to a stimulus. That response certainly encodes information in the digital state of neuron clusters, as in frequency coding models for action potentials. But what actually exists in that neural state? This encoded information is not directly the contents of, say, a color experience.
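The frequency (rate) coding mentioned here can be sketched minimally. This is an illustrative toy, not a claim about the actual visual cortex: stimulus intensity is encoded as a firing rate, and a downstream reader can recover only the intensity, never the "experience" of the stimulus. All function names and the `max_rate` parameter are my own assumptions for the illustration.

```python
# Toy rate-coding model: a stimulus intensity in [0, 1] is encoded as a
# firing rate (spikes per second). What exists in the "neural state" is
# just this number, which carries information about the stimulus without
# being the content of any color experience.

def encode_rate(intensity: float, max_rate: float = 100.0) -> float:
    """Map a stimulus intensity in [0, 1] to a firing rate in Hz."""
    clamped = max(0.0, min(1.0, intensity))
    return clamped * max_rate

def decode_rate(rate_hz: float, max_rate: float = 100.0) -> float:
    """Recover the stimulus intensity implied by a firing rate."""
    return rate_hz / max_rate

rate = encode_rate(0.42)        # roughly 42 Hz
recovered = decode_rate(rate)   # roughly 0.42, the original intensity
```

The point the toy makes is the one in the paragraph above: everything recoverable from the neural state is stimulus information, not perception.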
Now suppose we hooked that same brain up to a device that could amplify and convert the digital information in the visual cortex into images on a screen. This is something well within the realm of possibility. We need to think epistemologically about the ways we know (or think we know) a perceptual content even exists in a brain/mind.
We see that there are three ways to know if a brain state has perceptual contents. We can:
1. Accept a direct report from the person who is perceiving.
2. Project the information encoded digitally in a brain state through a hardware interface which maps the digital information to a form which is interpretable by another hardware/software combo that projects the new information as light or sound waves (as with a computer monitor or speakers).
3. Make an inferential leap that activity in this brain region must contain perceptual content, because destroying any of the associated structures within the circuit (from the eye to the optic nerve to the visual cortex itself) results in a loss of percept-experiences discoverable by either (1) or (2).
In (1), we assume a speech act. There is no manner of speaking which lacks any object. In (2), we still assume an object. When we map analog signals to hardware that is capable of converting them, what are we converting to? The entire method has the assumption that there is some meaningful object to be represented as either visual or audial data! In fact, it wouldn't make sense that we adopted this approach if we thought the neural firings in that brain region weren't encoding some meaningful object. In (3), we are able to make our inference because of what? Because the objects of visual perception have been eliminated from thought! That subject goes right on thinking, but not seeing.
Tentative Conclusion
Percepts appear sufficient to give contents to thoughts, and even to result in the formation of thought in some cases, but are not themselves necessary for thought. A person can have senses removed and still be thinking things. If we imagine destroying all of the regions of a subject's brain that were responsible for organizing sensory data (without killing that person), we could further show that all thoughts have an object. What we've done here is to eliminate every possibility for objectively/empirically analyzing this person's thought - instead, causing ourselves to rely on (1), or the privileged reports of the subject about their thinking (again, this would involve a speech act which necessarily has an object).
You see, there is no way to talk about thought, to study it, to even conceive of it as not possessing an object. This relates closely to the position I laid out in this post. Even when we attempt to consider nothingness, we still bracket it {...}, indicating that the most primitive object of any thought is a state of affairs, or the field of potential where things can happen.
Lastly, this cuts across the distinction we made in an earlier comment between objectivity and subjectivity. There is an objective way that any mental state could be tethered to an object. Hell, there is a way that any neurological change could be said to have an object, just given what neurons do, which is to respond to other things (other objects or other neurons). For example, we could study a primitive organism which is not thought to be conscious whatsoever, yet which has the rudiments of a nervous system and is capable of responding to its environment. Say that this organism is capable of sensing temperature change, and when it encounters intense heat or cold it moves away from the source.
Isn't it the case that any form of learning or beneficial change in the organism's behavior could be said to have an object? Even individual neural firings that properly register some physical phenomenon in the environment must have an object (in this case a cause that they register). It seems that to have an object is just to be able to register, via some mode of information, a cause! Either this, or an end. This is just what we think nervous systems do.
Another way of making this final point would be to consider the most physically reduced possible world - one with no consciousness at all. Suppose we have two objects A and B (they can be billiard balls, or whatever). A collides with B and causes a change in the behavior of B. The change in B's behavior has the outside cause of A at its root. Therefore, this behavior itself (though not qualifying as 'thought' in any common sense) is directly correlated to an object, which is to say B's behavior has an object. In one way of thinking about this, B has a liability to behave a certain way under certain conditions. One of these conditions involves powers that are exerted on it by other things. When A displays its basic powers in such and such a way toward B, B has a liability to modify its behavior. Things don't like to change - in fact, they resist it with great force. So when an object changes, this must have an object, even if what constitutes that object is the minimal liability B must suffer to change its behaviors (physics corroborates this with the Lagrangian function and minimal energy).
So, the argument I'm truly making here is not just that thought always has an object, but rather that all phenomena have an object. Everything in reality is an encounter, namely encounters between objects with certain powers and liabilities, not the least of which is their own final cause.
When we make the distinction between objects of thoughts having qualia, we aren't distinguishing between thoughts with or without objects. The issue is one of registration. If, as I have argued elsewhere, the brain is not a producer of mind but a filtration mechanism, then we can think of it as a filter for God mind, where God mind is the registration of all information. To register all information precludes the local, finite narratives we take to be our lives. Instead, we are local streams of focus: we register as conscious experience only those causes in the world that are most relevant, according to our preferred theory of motivation. Underlying most theories of human motivation, there appears to be a metaphysical set of basic categories; these are the real interest of all such theories, even when the theorists are unaware of them. I believe they formed a large part of early human wisdom, including the categories set out by astrology.
Put another way, you are a local point of attention which filters the sum total of reality according to your end, or final cause, combined with the natural way that your environment antagonizes that end.
I'm pretty sure a big part of Sartre's phenomenology was contending with the object of thought-consciousness. But I don't remember enough of Being and Nothingness to comment thoughtfully.
Although just that title should indicate that he probably said something of value or relevance to the present discussion. Then again, he was a phenomenologist, so maybe I'm being too generous assuming he said something valuable.
Just kidding. Sort of.
I read some Sartre one time (Nausea) and wasn't impressed. My use of "phenomenological" isn't tied to Sartre's, despite there likely being at least a little overlap in content.
It was simply to contrast with “ontological” being concerned with existence, and I wanted to focus instead on happenings as the fundamental “atoms”.
I honestly don’t know what Sartre was talking about for sure, so it might or might not be the same reason. I’m not too concerned with it though.
I came up with my phenomenology before I read St. Dionysios’s Celestial Hierarchy. I’m glad I did, because it gives me a snapshot of my pre-Christian mind. It found completion in Christ, since it was true, and all truth is God’s. So I relate to those pre-Incarnation truth-seekers who discovered the same thing I did in the Gospel.
Just kidding. Sort of.
Haha. I tried to take a less phenomenological approach above for this very reason. Although I think that method is important, I think people are naturally skeptical of it as an approach to mind itself, just because it lends itself too easily to subjectivity. There's a sense that people think the 'sun can't shine on itself.'
I haven't read much primary Sartre. I've read quite a bit of secondary literature that mentions him. As it pertains to this conversation, Sartre was an opponent of Freud, and he denied Freudian ontology when it came to the unconscious. For Sartre, all mental objects were possible to attend to if one is reflective enough. It situates itself right in that Existentialist groove, right? It's the difference between truly conscious persons and ordinary objects, where one can be for-itself (as consciousness strives to do) and one can be in-itself as being just is.
It's an interesting distinction, because he says that the union of being in-itself and being for-itself is God, which is absolute identity, or perfect identity and control over one's destiny. Sartre says that this is impossible for man, although we strive for it constantly. Instead, consciousness is what it is, and we feel this tension always to limit consciousness into this or that role, to define our identity, because we want to be God. He says the struggle is basically futile, since to succeed would be to have control over the being of all things, over the destiny of all being. Since this is inaccessible to man, living this way leads to living in bad faith, which is just to live inauthentically according to rules and values that are given to us from without.
So naturally, he was one of these liberationist types who I think completely misses the point. It's as if he wants to say that since one cannot be God, one could not possibly reason about God in the world, or the Logos. It actually all comes off rather satanic.
I don't ask as if to pretend I have the answer.
Ha, something I do too often unintentionally. Begging the question is a bad habit.
I think there is a case to be made that perceptions are prior to thought, or at least parallel to them. First, there are the studies here: https://medium.com/@kennelliott/39-studies-about-human-perception-in-30-minutes-4728f9e31a73
What appears to be true, at least from a cursory look, is that perception is not ordinarily an attentive process. We don't have to be aware of it for it to affect how we perceive objects or thoughts.
How do we know percepts exist at all? Well really, we only know through testing. I suppose in a broader sense you'd be correct to suggest nothing exists apart from its relation to something else. Temperature, for example, is only measurable in relation to a thermometer or relative to other temperatures, and so on.
What we have found is not a perception, but a neurological response to a stimulus.
Yes, but it has meaning only in relation to something else. Representation carries semantic information.
So when you ask, "what actually exists in that neural state?", I would say: I don't know, but it's probably something along the lines of 'a neuron with a state such that its representation encodes some transmissible information.'
There's a duality here between representation and information, which I think isn't emphasized enough, because we get caught up in mechanism over message. Every key is a model of the lock that it opens. And in a world where there should be nothing, where there shouldn't be 'gross' information at all, let alone anything, it may just be sentiment, but I find it remarkable that not only is there something, but information may be transmitted between mediums that act as representations at all.
You hit on something interesting: "which maps the digital information to a form which is interpretable by another piece of hardware/software"
The keyword being 'interpretable'. What's a gramophone? A transcoder. It takes information in one representation and converts it to another representation. The act of transcoding is not the act of representation itself, but re-representation (if that's even a word). So it would appear interpretation and transcoding are as essential to representation as the medium which holds the representation. Which is to say, a thing can never be represented except in relation to another (which you already stated), but more broadly it is the act of 'packing' or 'unpacking' it, in conjunction with the representation, that provides the utility of the inherent representation. Without it the representation is a black box; it might as well be the empty set, which is why I wrote "encodes some transmissible information" as a prerequisite.
Which is to say, everyone talks about information, no one talks about entropy.
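The gramophone-as-transcoder picture can be made concrete. A minimal sketch in Python, where hex and base64 stand in for the two representations (the function names are my own; any two encodings would do):

```python
import base64

# A gramophone-style "transcoder": the same underlying information moved
# between two representations. Neither representation is the message
# itself; the pairing of pack/unpack with the medium is what makes the
# representation usable rather than a black box.

def pack_hex(data: bytes) -> str:
    """Pack raw bytes into a hex representation."""
    return data.hex()

def unpack_hex(text: str) -> bytes:
    """Unpack a hex representation back into raw bytes."""
    return bytes.fromhex(text)

def transcode_hex_to_b64(hex_text: str) -> str:
    """Transcode = unpack from one representation, pack into another."""
    return base64.b64encode(unpack_hex(hex_text)).decode("ascii")

message = b"lamp"
hex_form = pack_hex(message)               # '6c616d70'
b64_form = transcode_hex_to_b64(hex_form)  # 'bGFtcA=='
assert base64.b64decode(b64_form) == message  # the information survived
```

Without `unpack_hex`, the string `'6c616d70'` is inert; it is the pack/unpack pair, not the medium alone, that carries the "transmissible information."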
This was fun and I really enjoyed reading your post, I hope you get more responses and do more like this.
I think there is a case to be made that perceptions are prior to thought, or at least parallel to them.
I think I may have gotten myself into a bit of trouble in my last post. I reread it at one point, and I think at points I was conflating sense data and perceptions. To be honest, I don't have the time or desire to revisit that earlier comment and possibly revise the whole thing, and that's without knowing where my failure to make the distinction actually impacts the outcome. I just want to get clear on how I view the situation, without committing to a formal philosophy of perception.
Sense data just is what it is, but must be selected and organized through the mechanisms of perception.
It might be more helpful, instead of taking sense data and perceptions as natural kinds of brain information, to view them as stages in the architecture of thought. An analogy could be useful. A house begins with a set of plans. Then you set a foundation. You begin to frame on the foundation, building up walls and levels sequentially, beginning first with the stick skeleton of the home's interior and exterior, and then refining with more and more specific functional elements (doors, windows, exterior siding, electrical).
In terms of the brain/mind, let's call the thought a finished home with a family inside. Let's also say that something is liminal if it meets the level where thought can attend to it, in principle. A thought, then, must be something on which I could potentially reflect (although being reflected upon is not necessary to being a thought), while a perception can become a thought if it is reflected upon, although most acts of perception qua perception are not noticed. We could imagine something perceiving the color red without having thoughts about it. So perhaps what I really mean to do is just stipulate that a thought has a semantics and it is, by definition, something which involves a kind of reflection, implied by the semantics itself.
I might even go as far as to say that thought just is formal and conceptual semantics applied as an envelope to more primitive organizations of even more primitive structures. This is how something could perceive the color red and merely behave but not think.
Returning to the house analogy, sense data might be something like the raw materials. These are already pre-cut lumber by the time they reach the brain, thanks to the upstream work of the lumber mill (our sensory neurons). Perception would be the interactions between the plans and the raw materials that begin to assemble the home's basic structure. Finally, it's the living within the home by a family that is thought.
All of those materials that went into the home could be used for a different home, that is, they have an intrinsic value. Each piece or section of the home is something representative of the final cause of any home, the way that place is to be used. The point is this is something generative, with levels of assembly, and perceptions don't have to ever reach the level of being a home - perceptions can disappear or be deconstructed. Thought can 'move in' to these structures as a kind of formal and conceptual semantics that gives them a sense. A house is not a home until it is sensed that it is.
How do we know percepts exist at all?
I think we can infer it logically. Pretend I place you in a room with a lamp. Your body and the lamp are 10 ft. apart. I ask you, "Do you see the lamp?" In effect, what it is to see the lamp is a complex process involving sense data, perception, and thought (because you've now been asked to reflect). Let's say that besides you, I've brought in some controls to test in other rooms.
I can infer the existence of sense data by placing an opaque screen between one person and their lamp. If the person cannot see the lamp, then this external wall has prevented something involved in the chain of mental representation. It doesn't seem to have prevented sensing all things, or thought itself, so it is logically preventing some primitive kind of information from obtaining. We'd call that sense data.
I can infer the existence of percepts if I obliterate someone's visual cortex. It should be possible to study whether cells in the retina and optic nerve are active in the presence of the lamp light, but if I ask this person whether they see the lamp, and they say 'No', I'd wager that something is now missing between the sense data and the thought. Such a person who cannot see is still able to understand what a lamp is, or what light is, and the basic sense of what I'm asking about a priori. They just can't see it. By analogy, the raw materials for the house are all sitting on pallets in the yard, but there are no plans to assemble them.
Here is where things get interesting. As I see it, there is no logical way to infer thought except through the direct report from the subject.
Two problems become interesting to me.
(a) Someone could say that I've made an error in (3). A machine could be subjected to each of these tests and could be made to demonstrate the same results, even down to being able to report that it is seeing. So the question as it concerns thought becomes whether it must be attended by qualia. Moreover, if reporting is the best we can do, how do we rule out that the machine is (or is not) having a conscious experience?
(b) Can thought actually be separated ontologically from sense data and percepts? I believe this one is particularly important. To see what I mean, imagine that we could grow fully functional human brains, identical in structure and all empirical signs of activity as any healthy human brain. Suppose we did this for a brain in a vat. This assumes a brain without any sensory inputs (ignore that it may be impossible to eliminate all sensory input).
Could such a brain have thoughts?
We could get into all kinds of philosophical debate about this. I've already gone on for too long, so I'd leave on this idea: that whatever human mind is, a part of its very nature cannot be separated from its embodiment in an environment. The brain itself happens inside of a body, which is in communication with itself and the environment. Brains do not just grow absent their relations to the world.
@PS is someone who could talk about this notion much more aptly than I am able. There is this notion that things which are intelligible in nature only have this property of being intelligible because they inhere as the result of a Substantial Form. This is true of the lamp, and it is why thoughts about the lamp have the character they do, which is to say why they have their logical sense and qualia. However, what is true of the lamp is also true of the human person: we inhere as the result of a Substantial Form, and this is what we refer to when we say we have a soul. The soul is the Form of the human person. Another way to think about it would be to consider it as the 'missing piece', which being absent even having our 1. sense data, 2. perceptual organization, and 3. otherwise fully functioning brain, would prevent authentic conscious thought from happening.
In the brain-in-a-vat example, we might think there exists a way to 'record' the mental activities of that brain in a way that would produce a linguistic representation if they were occurring in such a way as to have any meaning. Would we expect such a brain to have an 'inner life' where it depicted certain kinds of images and imaginary events according to categories of space, time, causality, etc? Incidentally, some people might jump to the conclusion that primitive archetypes like Jung's would still be here, but I argue that isn't the case. Implicit in Jung's theories is a Lamarckian ontology: we pass through all of the phases of our race in the womb. To grow a brain in a vat is to grow something separately from the very generative pleroma of the 'conscious field' of the mother (if you will).
There is a real sense in which we 'fall into' or 'jump into' life like we are jumping into a stream. You don't start or stop this thing. You get on the ride while it is already in motion.
The potential that exists in the human Form (soul) is actualized in a very specific way, one life begetting another within the cosmically important womb. The point here is that a thinking brain grown in isolation from both a body and an environment is unlikely to have conscious thought. What, after all, truly makes the brain-in-a-vat, with all of its wired connections to this or that readout machine, altogether different from the machine we mentioned earlier?
To me, it's that we are on a stream, a flowing river, that man cannot recreate. It's been flowing. It was already flowing when we became conscious enough to realize we were being pulled along by the current. Life begets life. You don't start a new stream in the one which you are already riding: it's all one stream. Even so-called conscious machines are just going to be elaborations of man, not a new stream, but extensions of the existing one - they carry within them something derived from us, a Logos.
I'm ranting. I get like this sometimes.
And in a world where there should be nothing, where there shouldn't be 'gross' information at all, let alone anything, it may just be sentiment, but I find it remarkable that not only is there something, but information may be transmitted between mediums that act as representations at all.
You would have probably been very interested in a theory introduced to me by @PS called CSI (Complex Specified Information). That resulted in some really enlightening conversations, but it pertains to precisely what you are intuiting above. I completely agree with you, btw.
Which is to say, everyone talks about information, no one talks about entropy.
In one way of thinking, called Shannon-Weaver information, the information content of a string is its entropy. I do think you'd find the topic of CSI pretty engaging given your interest in this idea. Here is a paper by Dembski that explains the underlying logic.
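The sense in which a string's information content is its entropy can be shown in a few lines. A minimal sketch, using the string's own symbol frequencies as the probability model (the function name is my own):

```python
from collections import Counter
from math import log2

# Shannon entropy of a string, in bits per symbol: the average surprise
# of a symbol drawn according to the string's own symbol frequencies.

def shannon_entropy(s: str) -> float:
    counts = Counter(s)        # frequency of each distinct symbol
    n = len(s)
    # H = sum over symbols of p * log2(1/p), with p = count/n
    return sum((c / n) * log2(n / c) for c in counts.values())

print(shannon_entropy("aaaa"))  # 0.0  (no surprise at all)
print(shannon_entropy("abab"))  # 1.0  (one bit per symbol)
print(shannon_entropy("abcd"))  # 2.0  (maximal for four symbols)
```

So the fully predictable string carries no information per symbol, and the maximally "disordered" one carries the most, which is exactly the point about information and entropy being two sides of one coin.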
For what it's worth, I spent weeks trying to refute CSI as I debated PS on the issue. Not only was I unable to do it, I wound up becoming convinced that it's true. If I get time I can try to dig up a conversation we had at Poal back in January sometime. What became apparent to me was that information is tied to symmetry.
This was fun and I really enjoyed reading your post, I hope you get more responses and do more like this.
Yours as well; we have our moments, don't we. We aren't as active as we were months ago, but this sort of thing is always enjoyable.