You are mistaken. The brain has ready access to the information needed to identify locations. The brain moves the body and the body sends back sense data. The brain probes the space around it by pointing the eyes around that space and responding to the patterns that result. Motions of eyes and body are patterns that can be learned and repeated.

jamest wrote:
The brain only has access to its own internal states. My objection is that you are trying to explain the concept of space from patterns discerned amidst objects and the relative locations of those objects; thus you are trying to explain 'space' in terms of its relationship with external entities. Clearly, you are transcending the reality of the brain to do this. That is, you are using external relationships between objects to explain how the brain discerns 'space'.

GrahamH wrote:
Where is the problem? Location patterns and object patterns are learned by the brain. Relational patterns between object patterns and location patterns are learned by the brain.
Please spell out your objection.
But what you have to do is explain how the brain discerns 'space' from the patterns inherent within its own internal states and [possibly] the relative locations, within the brain, of those patterns. That's all the brain has to go on, so that's all your model can do, too, in attempting to explain how the brain discerns 'space'.
Your objection is invalid. I would be interested to hear from anyone who sees merit in it. The brain is not a brain in a vat; it is connected with the world around it.

jamest wrote:
The issue is not one of 'learning' or 'processing' or 'computing'. I agree that the brain can recognise 'patterns' or 'order'. The issue is one of what is available to be processed. And in your model, all that is available to be processed are the brain's own internal states. The brain can only relate NNs to one another. Therefore, any 'conclusions' that the brain makes could only refer to itself!

GrahamH wrote:
Nonsense, James. No assumption is required; it is sufficient to live in the pattern of sense stimuli from the world, routed through circuits evolved to learn those patterns. The sensory feedback system guides the learning. That is why we are born not knowing and knowledge grows in us. Our brains don't start as a blank slate; evolution has shaped them to be able to learn the world (things like spotting face patterns).

jamest wrote:
Remember, my claim is that the fundamental problem with models such as yours is that they depend upon a priori knowledge of the world in order to interact with it. But brains cannot be endowed with a priori knowledge about the world... and the brain would have to assume the existence of said world prior to assigning external meaning to its own internal states.
As with many forms of information gathering, a coordinator can send out agents, or probes, or read reports from the world to acquire data. The brain controls the body and the body reports about the world. There is a rich flow of information in that two-way connection.
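To make that two-way connection concrete, here is a minimal sketch in Python (the names Body, Controller, move and sense are invented for illustration and carry no neurobiological weight): a controller that never inspects the world directly issues movement commands, reads back the sense reports those movements produce, and assembles a map of locations purely from that returned data.

import random

class Body:
    """Stands in for body and world together: accepts motor commands, returns sense reports."""
    def __init__(self, world):
        self.world = world            # dict mapping (x, y) locations to contents
        self.position = (0, 0)

    def move(self, dx, dy):
        x, y = self.position
        self.position = (x + dx, y + dy)

    def sense(self):
        # Report only what is at the current location, never the whole world.
        return self.position, self.world.get(self.position, "empty")


class Controller:
    """Stands in for the brain: no direct access to `world`, only to sense reports."""
    def __init__(self, body):
        self.body = body
        self.map = {}                 # internal model, built entirely from reports

    def probe(self, steps):
        for _ in range(steps):
            # Outgoing pattern: a motor command.
            self.body.move(random.choice([-1, 0, 1]), random.choice([-1, 0, 1]))
            # Incoming pattern: the sense data that movement produced.
            location, contents = self.body.sense()
            self.map[location] = contents
        return self.map


world = {(1, 0): "cup", (0, 2): "lamp", (-1, -1): "door"}
controller = Controller(Body(world))
print(controller.probe(steps=300))    # locations learned via movement plus feedback

The only point of the toy is that everything in controller.map arrives through the controller's own outputs and inputs; the world variable is never consulted directly, yet the map comes to track it.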
The brain doesn't need to know about its internal states for those states to interact with the world. You know nothing of what goes on in your visual cortex as you read these words. You know nothing of what this or that cluster of neurons means, but it allows you to see. You could have your visual cortex excised to try to prove that 'seeing' is not a brain function.

jamest wrote:
This is a big problem. The brain cannot relate its own internal states to the world unless it knows that its internal states are representative of external phenomena. A simple matter of logic. No matter how complex the brain becomes, it can only ever process its own internal states. And to process them in a way that relates its internal states to external phenomena necessarily requires a priori knowledge of the world's existence. Without such knowledge, the brain is forever limited to processing and learning about itself alone!
I think you are still struggling with the concept that NNs are not just inert abstract symbols; they actively recognise patterns of 'objects' from sensory data. There are no images in the brain (and no cakes). There are NNs that recognise visual patterns (and other patterns). There is no inner screen, no theatre and no homunculus.
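As a toy illustration of that point (a sketch only, with a 5x5 'retina', two made-up stimuli and a plain perceptron rule standing in for whatever real cortex does): the recogniser below stores nothing but numerical weights. No copy of any input is kept and no symbols are looked up, yet after exposure to a stream of noisy stimuli plus feedback it classifies new patterns it has never seen.

import random

# Two 5x5 "retinal" stimuli, flattened to 0/1 lists: a vertical bar and a horizontal bar.
VERTICAL   = [1 if col == 2 else 0 for row in range(5) for col in range(5)]
HORIZONTAL = [1 if row == 2 else 0 for row in range(5) for col in range(5)]

def noisy(pattern, flips=3):
    """Corrupt a stimulus a little, as sense data always is."""
    out = pattern[:]
    for i in random.sample(range(len(out)), flips):
        out[i] = 1 - out[i]
    return out

# "Born not knowing": the weights start at zero; only the learning rule is built in.
weights = [0.0] * 25
bias = 0.0

def respond(stimulus):
    activation = sum(w * s for w, s in zip(weights, stimulus)) + bias
    return 1 if activation > 0 else 0     # 1 means "vertical", 0 means "horizontal"

# Learning uses nothing but a stream of stimuli and feedback about each response.
for _ in range(500):
    label, pattern = random.choice([(1, VERTICAL), (0, HORIZONTAL)])
    stimulus = noisy(pattern)
    error = label - respond(stimulus)
    weights = [w + 0.1 * error * s for w, s in zip(weights, stimulus)]
    bias += 0.1 * error

# The trained recogniser generalises to corrupted stimuli it has never seen before.
print(respond(noisy(VERTICAL)), respond(noisy(HORIZONTAL)))   # usually prints: 1 0
# All it holds is 26 numbers; there is no stored image of either bar.
print(weights, bias)

What the trained object 'knows' about vertical and horizontal bars is carried entirely in those 26 numbers shaped by the stimuli; there is no inner picture, screen or homunculus anywhere in its state.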
The brain does not transcend the phenomenal world; it is very much rooted in it. The phenomena leave marks on the brain. Your mother's face shoots photons into you that etch a pattern in your brain by initiating a causal chain of events that brains have evolved to implement.

jamest wrote:
This problem can actually be associated with what Kant said about not being able to transcend the phenomenal world, except that in this case I am saying that the brain cannot transcend itself. Yet, when the brain evaluates its own internal states to be representative of a world external to itself, it does transcend itself. And how, other than with a priori knowledge of that world?
You are being illogical in your insistence that the world does not affect brains. It demonstrably does.

jamest wrote:
As I keep saying to you, it's a logical problem demanding a logical solution.
Deny, deny, deny.

jamest wrote:
You're still not addressing the logical problem. You're just telling me that there are NNs that correlate with external entities. Actually, to be exact, you can only say that there are NNs that correlate with experienced entities, because, of course, we cannot even know if the external world exists. Therefore, to be sure, any confirmed correlations must be about and with the internally conceived world.

GrahamH wrote:
Do you understand how neural networks function as object classifiers/recognisers? Do you comprehend that this is not a symbol manipulation task?

What I'm trying to do is show you why materialistic brain models of human behaviour cannot work unless the brain has a priori knowledge of a realm beyond itself. That is, I'm alerting you to a massively significant problem of logic that renders all such models ineffective.

Perhaps your difficulty here is that you are accustomed to thinking about consciousness as something divorced from what it is conscious of. You go so far as to deny the world we are conscious of. Are you trying to make us adopt your quirky ideas of C?

I am not denying the world. I'm denying that the brain could know about it. I'm denying that the brain can effectively interact with the world whilst knowing zilch about that world.

Denying the world is not countering our model.

Since you have no substantive objections, and seem unable to grasp the simple and demonstrable fact that the phenomenal world reaches into the brain, perhaps we are done.