Surendra Darathy wrote: We will see that your insistence on what looks like a one-to-one mapping of "perceived physical quality" to "definite physical correlate" is asking neuroscientists to give you a map to one set of neurons and sequence of electrical currents that represents "cat" and another that represents "dog", and then to go on to "calico cat", "Persian cat" and "chihuahua", and so on, until we get to the perceived aspect of "the way I feel about the calico cat that made a poo in my shoe when I forgot to feed it one evening". The list could be extended, but you get the idea. Eventually we have to get to the difference between "that guy is a communist" and "that other guy is an unreconstructed New Deal Democrat".
In very short order, we will see that you are asking the neuroscientists to bring you back the golden fleece of "one-to-one" correspondence, whereas, if you had an understanding of neuroscience, you would not even be asking that at this stage. I mean, you can look at poo at the microscopic level, like a pathologist does, to try to see which bug is giving you the trots. Trust the voice of experience.
To be clear, I'm not asking Graham to provide citations from neuroscientists to support his claim. I'm simply asking him to describe his theory clearly so that I might find logical problems with his model.
Because "perceived" looms so large in your lexicon (alliteratively, even), and "perceived" already assumes subjectivity, given its pedigree in philosophy, I don't think we should waste much time on your latest strawman; instead, I'd ask you to express what you are asking of the neuroscientists with a little more precision. If you want to establish subjectivity, James, it really helps if you don't assume it before beginning your investigation. You don't seem to get this part of "investigation".
Okay, let us - for the sake of argument - assume that the qualities usually associated with 'experience' are not actual. After all, I'm here to entertain such theories. Let us consider that 'yellow', for instance, doesn't have a physical correlate, and that 'physical correlates' are just associated with details of some external event. Likewise for each and every experiential quality that we can think of.

So, the model we are considering (as I understand it) is one in which numerous localities within some region(s) of the brain correlate with sensed external qualities of the world. For example, there would be a specific physical correlate for the photonic energy (formerly known as 'yellowness') of the sun, and likewise for all such external qualities that are reported to the brain via the sense receptors of the body.

Now, at any one moment, the brain itself would have to take on the role of 'the individual' in putting together a meaningful 'picture of the world' to facilitate appropriate behaviour within that world. That is, there would have to be a singular 'assessment' of all relevant physical correlates, as a whole. Our reactions to the world are not just automatic - 'we' (commensurate here with the brain itself) often assess each scenario we are confronted with prior to responding. So, we cannot escape the need for this oft-required singular review of the physical correlates as a whole. With the details of the model in place, we can now assess whether there are any logical flaws within it.
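To pin the model down, here is a minimal toy sketch in Python. Everything in it - the 'locality' names, the hash-based encoding, the assessment step - is my own invented placeholder, not anything drawn from neuroscience: external qualities are recorded as localised internal correlates, and a single 'assessment' step then reviews those correlates as a whole.

```python
# Toy sketch of the model under discussion. All names and the encoding
# are hypothetical placeholders, illustrative only.

correlates = {}  # brain 'locality' -> internal correlate of an external quality

def sense(location, external_quality):
    """A sense receptor reports an external quality; the brain stores a
    localised physical correlate of it (here, just an arbitrary encoding)."""
    correlates[location] = hash(external_quality)

def assess():
    """The singular 'assessment': a review of all stored correlates,
    as a whole, prior to any considered response."""
    return sorted(correlates.items())

sense("locality-1", "photonic energy of the sun")  # formerly 'yellowness'
sense("locality-2", "birdsong")
picture_of_the_world = assess()
```

Note that the 'picture of the world' produced here is just a collection of internal encodings - which is exactly the point at issue in what follows.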
The first problem I see is that a considered response to a specific event within the world would amount to a considered response to brain states - those numerous localities of physical correlates associated with that event. That is, the brain would really be considering and responding to itself. The problem here is that the brain would have to associate meaning with each physical correlate associated with any event. That is, the brain would have to know what each physical correlate meant in relation to any event. For instance, the brain would have to know that the physical correlate(s) associated with the photonic energy of the sun were synonymous with the photonic energy of the sun. It might even assign tags to each correlate, such as 'yellow', to facilitate simpler processing of the information. And since we say that the sun is 'yellow', this would indeed be the case.
So, the question arises: how does the brain assess itself and know what any particular 'physical correlate' means?
Here, the problem is one of semantics - what philosophers of mind have referred to as intentionality (also known as 'aboutness'). How, for instance, could the brain assess a physical correlate within itself, associated with some external event, and know that it meant the photonic energy associated with the sun ('yellow')? That is, how can the brain know what a localised physical structure/event within itself is about? I've thought of a simple way to illustrate the scope of this problem:
Imagine that you are sitting in a room and have no idea what is going on outside that room. However, I come into the room and present you with numerous Lego structures. I've used different shapes and colours of Lego, and each individual structure stands for a specific detail of an event happening outside the room. To make things easier for you, I put these structures into groups - one representing sights; one representing sounds; one representing smells (we won't bother with taste and touch information). Now, even given an eternity, do you think that you could tell me what was going on outside? No, you couldn't, because you wouldn't have a clue what any particular structure was ABOUT, except in the most general terms (a sight; a sound; a smell). The problem, then, is that you would need to know what each structure was about in order to tell me what was happening outside. But if nobody tells you, then you're literally in the dark, forever.
That is, a physical structure contains no meaning about anything, other than itself.
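The Lego-room scenario can also be phrased as code, and the conclusion comes out the same way. This is a rough sketch of my own - the tokens and the grouping are invented for illustration: the observer receives opaque structures grouped by modality, and without an externally supplied key, only the grouping is recoverable.

```python
# The room receives opaque 'structures' grouped by modality. Each token
# is meant to stand for a specific external detail, but the mapping from
# token to detail is never supplied to the observer.

structures = {
    "sight": ["0xA41F", "0x9C22", "0x07B3"],
    "sound": ["0x5E10", "0x88D4"],
    "smell": ["0x31CC"],
}

def describe_outside(structures):
    """All the observer can report about each structure is its modality;
    the structure itself carries no information about its referent."""
    return [(modality, token, "unknown referent")
            for modality, tokens in structures.items()
            for token in tokens]

report = describe_outside(structures)
```

However long the observer inspects the tokens, every entry in the report stays "unknown referent" - the meaning simply isn't in the structures.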
So, if we reconsider the brain assessing its own 'lego structures', we come to the same conclusion. That is, the brain, assessing its own internal structures as a means to understanding what's going on outside, wouldn't have any clue what those structures actually meant (other than that they pertained to sight, touch, taste, sound or smell).
This is a big rational problem for anyone harbouring theories similar to the one Graham has presented - which is why it has been an issue for contemporary philosophers. So, you can't just whitewash it - you actually need to provide a rational solution to it. That is: how can the brain really know what any of its internal structures actually correlate to, externally to itself?
This has been a lengthy post, so I will end here and await responses. Further rational problems for such 'models' could be discussed, but I'll leave them for another day. There's enough to consider here.