The subjective observer is a fictional character
Re: The subjective observer is a fictional character
http://docartemis.com/brainsciencepodcast/
The Ego Tunnel: The Science of the Mind and the Myth of the Self
Thomas Metzinger
Favorite quote:
lifegazer says "Now, the only way to proceed to claim that brains create experience, is to believe that real brains exist (we certainly cannot study them). And if a scientist does this, he transcends the barriers of both science and metaphysics."
Re: The subjective observer is a fictional character
SpeedOfSound wrote:
http://docartemis.com/brainsciencepodcast/
The Ego Tunnel: The Science of the Mind and the Myth of the Self
Thomas Metzinger

Very interesting.
Re: The subjective observer is a fictional character
GrahamH wrote:
Very interesting.

Yup. I just finished. I like the guy. I bought their iPhone app for podcasts too. Now I can sit in the whirlpool and listen to pods.
Re: The subjective observer is a fictional character
Computer Model Reveals How Brain Represents Meaning (June 2, 2008) — Scientists have taken an important step toward understanding how the human brain codes the meanings of words by creating the first computational model that can predict the unique brain activation ...
http://www.sciencedaily.com/releases/20 ... 141354.htm
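For what it's worth, the general recipe behind that kind of result can be sketched in a few lines. This is only a toy illustration with invented numbers (made-up semantic features and simulated 'voxel' activations, not the study's data): each word gets a feature vector, a linear map from features to activations is fitted on most of the words, and the fit is then used to predict a held-out word.

# Toy sketch of predicting voxel activation from word features (illustrative only;
# the features, dimensions and data below are invented).
import numpy as np

rng = np.random.default_rng(0)

n_words, n_feats, n_voxels = 40, 5, 10
features = rng.normal(size=(n_words, n_feats))        # invented semantic features per word
true_map = rng.normal(size=(n_feats, n_voxels))       # hidden feature-to-voxel weights
activation = features @ true_map + 0.1 * rng.normal(size=(n_words, n_voxels))

# Fit weights on all words except the last, then predict the held-out word's activation.
w_hat, *_ = np.linalg.lstsq(features[:-1], activation[:-1], rcond=None)
predicted = features[-1] @ w_hat

r = np.corrcoef(predicted, activation[-1])[0, 1]
print(f"correlation between predicted and simulated activation: {r:.2f}")

If the linear assumption roughly holds, the predicted activation for the unseen word correlates strongly with the simulated one, which is the flavour of result the article describes.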
Re: The subjective observer is a fictional character
Not to be outdone:
http://ow.ly/1qkNc
Firing on all neurons: Where consciousness comes from
These links working?
Also this is a great resource.
http://www.facebook.com/pages/Birmingha ... 619?ref=mf
Re: The problem of the Self
jamest wrote:
Yes, moving the head left to right would entail different scenes - different NNs - but this doesn't answer my underlying question. That is, visual NNs are [deemed to be] responses to the photons emitted from external objects. BUT, the space between objects is devoid of any entities and events, so there can be no corresponding NNs that relate to that space. NNs could only relate to objects, then - not space.

GrahamH wrote:
Different scenes do not imply different NNs! The same NN that recognises the tree in scene 1 also recognises it in scene 2, and some of it plays a role in recognising a bush, the cartoon illustration of a tree and an imagined tree.

So what? The point is that each specific environmental event must have its own specific and corresponding NN, or else there's nothing to differentiate one external event from another.

GrahamH wrote:
We don't perceive space. Agreed. We perceive locations of objects, not 'void' between them. Perceiving locations of objects gives us 'separation', 'size', distance, absence...

Bollocks. Your 'model' doesn't allow for the perception of 'objects', or their 'location' wrt other objects. Your model only allows for certain brain states in relation to others. You still don't seem to understand that your model about NN states only refers to the world if the brain makes assumptions and inferences about its own internal states that allow for such external considerations. This is what the whole discussion pivots upon!

jamest wrote:
Of course, I'm of the opinion that space (and time) are [absolutely] constructed by the self. But your model cannot embrace this idea, for obvious reasons. Therefore, I'd like to hear your responses to my questions about 'space', here.

GrahamH wrote:
Relative location of objects can be inferred from an image.

Only if there is an a priori understanding of what 'objects' are in relation to that fuggin image!!!!!!!!!!!!!!
... That is, the brain would need to ASSUME that its own NNs equate to an image of external events, in order to respond appropriately.
The problem here, is either that you don't understand the problems inherent within mechanistic brain models of human behaviour, or, that you don't want to understand them. Either way, the issue is with your thinking, not mine.
My God; every time I ask you to explain something specific in relation to your own model, you do so in terms that require an a priori understanding of the world. And yet, you still claim that there is no problem!!!
FFS, you and SOS have been swapping links about correlations between brain behaviour and experience, all day - as if correlation = cause. But I've already explained why correlation does not equal 'cause', earlier within the conversation. It just seems to me as though you aren't listening. And I really can't be arsed to waste my time talking to people that don't listen to what I have to say, any more.
Re: The problem of the Self
jamest wrote:
Only if there is an a priori understanding of what 'objects' are in relation to that fuggin image!!!!!!!!!!!!!!
... That is, the brain would need to ASSUME that its own NNs equate to an image of external events, in order to respond appropriately.
The problem here, is either that you don't understand the problems inherent within mechanistic brain models of human behaviour, or, that you don't want to understand them. Either way, the issue is with your thinking, not mine.
My God; every time I ask you to explain something specific in relation to your own model, you do so in terms that require an a priori understanding of the world. And yet, you still claim that there is no problem!!!
FFS, you and SOS have been swapping links about correlations between brain behaviour and experience, all day - as if correlation = cause. But I've already explained why correlation does not equal 'cause', earlier within the conversation. It just seems to me as though you aren't listening. And I really can't be arsed to waste my time talking to people that don't listen to what I have to say, any more.

Right right. The issue is with our thinking. And the thinking of thousands upon thousands of cognitive scientists everywhere. It's clear that you alone hold the key to this mystery.

Re: The subjective observer is a fictional character
Tell you what, jimmy. Google neuroscience forums, then go post your criticism on three of them and see what they say. They will treat you very kindly, because they get all sorts of young people and newbies. If you can get some of them to agree that you have stumbled onto the next great revolution in cognitive neuroscience, the one that stops it dead, I will send you a cookie.
Re: The problem of the Self
jamest wrote:
So what? The point is that each specific environmental event must have its own specific and corresponding NN, or else there's nothing to differentiate one external event from another.

No, James, your point was about perceiving space other than as separation between objects. I accounted for space as separation between objects. Of course that is how 'space' is perceived, and why we have that concept. We don't need a NN representing every bit of space. We only need to associate object patterns with location patterns.

jamest wrote:
Bollocks. Your 'model' doesn't allow for the perception of 'objects', or their 'location' wrt other objects. Your model only allows for certain brain states in relation to others. You still don't seem to understand that your model about NN states only refers to the world if the brain makes assumptions and inferences about its own internal states that allow for such external considerations. This is what the whole discussion pivots upon!

You have been urged, many times now, to throw out your puerile Lego brick analogy. If you are to comprehend the model, all you really need to do is acknowledge that pattern matching is a function of NNs. If you think of NNs as Lego you can't grasp the key idea, because Lego bricks don't respond to patterns like NNs do. NNs aren't abstract symbols. They are the very thing that recognises objects by their patterns. The semantics is built in because the 'symbol' recognises the presence of what it represents.
It isn't such a difficult concept, James.
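To make 'the semantics is built-in' concrete, here is a minimal toy sketch (invented ±1 patterns, not a claim about real neurons): the unit's stored weights simply are the pattern it stands for, and its activity reports how strongly that pattern is present in the input, even in a degraded copy of it.

# A single pattern-matching unit: its weights are the stored pattern,
# its activation reports how strongly that pattern is present in the input.
import numpy as np

rng = np.random.default_rng(1)

tree_pattern = rng.choice([-1.0, 1.0], size=64)      # stand-in for a "tree" stimulus
bush_pattern = rng.choice([-1.0, 1.0], size=64)      # an unrelated stimulus

weights = tree_pattern / np.linalg.norm(tree_pattern)  # the unit is "tuned to" the tree

def activation(stimulus):
    s = stimulus / np.linalg.norm(stimulus)
    return float(weights @ s)                         # cosine similarity

noisy_tree = tree_pattern.copy()
noisy_tree[:10] *= -1                                 # corrupt part of the pattern

print("tree       ->", round(activation(tree_pattern), 2))   # ~1.0
print("noisy tree ->", round(activation(noisy_tree), 2))     # still well above the bush
print("bush       ->", round(activation(bush_pattern), 2))   # near 0

The point of the toy: nothing has to 'read' the weights as a symbol for trees; the unit's strong response to tree-like input just is the representing.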
jamest wrote:
Only if there is an a priori understanding of what 'objects' are in relation to that fuggin image!!!!!!!!!!!!!!
... That is, the brain would need to ASSUME that its own NNs equate to an image of external events, in order to respond appropriately.

I gave you a reason why we can say NNs can infer, which is that inference is generalised recognition of patterns, which is what NNs are very good at. I invite you to give an account of inference and reasons why good pattern recognisers can't possibly do it. All you come back with is this nonsense about a priori knowledge.
When you were a child and you saw many things for the first time, you didn't need a priori knowledge of what they were to see them. Until you had seen them you could not recognise them for what they were. After you had seen them you could recognise them. Your brain grew connection patterns in response to sensory stimuli such that it would respond when that pattern of stimuli occurs again.
I suggest that accounts for why we think in terms of experience and our creative ideas are limited to concepts from prior learning. We cannot create ideas ex nihilo, but we can make new patterns from the concepts we have formed from experience.
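A toy version of that last point (a crude Hebbian-style imprinting rule, assuming the unit is simply driven whenever a stimulus arrives; nobody is claiming this is how cortex works in detail): the weights start at zero, are shaped only by repeated exposure, and afterwards the unit responds to the familiar pattern but not to one it never met.

# Toy imprinting: no a priori knowledge of the stimulus is wired in;
# the weights are shaped entirely by repeated exposure.
import numpy as np

rng = np.random.default_rng(2)

familiar = rng.choice([-1.0, 1.0], size=100)   # a stimulus that keeps recurring
novel    = rng.choice([-1.0, 1.0], size=100)   # a stimulus never met during learning

w = np.zeros(100)                              # connections start carrying no information
rate = 0.1

# Assume the unit is active whenever a stimulus arrives, so each exposure
# strengthens the connections from the active inputs (fire together, wire together).
for _ in range(50):
    x = familiar + 0.2 * rng.normal(size=100)  # each encounter is slightly different
    w += rate * x

w /= np.linalg.norm(w)                         # keep the weights bounded

def response(stimulus):
    return float(w @ stimulus / np.linalg.norm(stimulus))

print("response to the familiar pattern:", round(response(familiar), 2))  # close to 1
print("response to the novel pattern:   ", round(response(novel), 2))     # near 0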
jamest wrote:
The problem here, is either that you don't understand the problems inherent within mechanistic brain models of human behaviour, or, that you don't want to understand them. Either way, the issue is with your thinking, not mine.

It is plain that you haven't yet understood our thinking.

jamest wrote:
My God; every time I ask you to explain something specific in relation to your own model, you do so in terms that require an a priori understanding of the world. And yet, you still claim that there is no problem!!!

Cite an example given by SoS or me that requires an a priori understanding of the world, and I'll show you how you have misunderstood the point.
The limit of a priori, when it comes to brains, is that they are evolved to learn to respond to patterns, and develop differentiated functions accordingly (visual cortex is connected to optic nerve, etc).
jamest wrote:
FFS, you and SOS have been swapping links about correlations between brain behaviour and experience, all day - as if correlation = cause. But I've already explained why correlation does not equal 'cause', earlier within the conversation. It just seems to me as though you aren't listening. And I really can't be arsed to waste my time talking to people that don't listen to what I have to say, any more.

If we don't seem to be listening to you it is because you have little to say about what we are discussing. You haven't understood it, yet you think you have the answers. Do you have any explanation for the correlations? Can you explain how the mind works? Can you account for perceptual illusions? Can you account for how thoughts are formed or where ideas come from?
Denying causality doesn't cut it James! It is a retreat from the evidence because you don't like where the evidence leads.
Re: The problem of the Self
GrahamH wrote:
We don't need a NN representing every bit of space. We only need to associate object patterns with location patterns.

For the umpteenth time, your model only allows for explaining space in terms of different NNs - patterns between them. The brain cannot know about any reality external to itself, and your model has to account for human behaviour with this in mind. As soon as you start talking about "patterns between objects" and "location patterns", your model reduces to one of the brain discerning space from patterns in external reality, as opposed to internal patterns.
You must account for how the brain conceives of space inside itself!
That, is the problem.
Remember, my claim is that the fundamental problem with models such as yours, is that they depend upon a priori knowledge of the world in order to interact with it. But brains cannot be endowed with a priori knowledge about the world... and the brain would have to assume the existence of said world, prior to assigning external meaning to its own internal states.
So, stop talking about the brain conceiving of space in regard to patterns between 'objects' and their different 'locations'. Do it regarding NNs alone, or else acknowledge the problem.
GrahamH wrote:
You have been urged, many times now, to throw out your puerile Lego brick analogy. If you are to comprehend the model, all you really need to do is acknowledge that pattern matching is a function of NNs. If you think of NNs as Lego you can't grasp the key idea, because Lego bricks don't respond to patterns like NNs do. NNs aren't abstract symbols. They are the very thing that recognises objects by their patterns. The semantics is built in because the 'symbol' recognises the presence of what it represents.

NNs are material states. They do not have the exact form of the thing/phenomenon that they are responding to. That is, they do not mirror any aspect of the world. It is a fact of pure logic that for NNs to signify aspects of the world, external meaning has to be applied to them that has no bearing on their own material form. That is, in order to see the world through NNs, those NNs have to be symbolic of that world!
Look at yourself in the mirror. You don't see a horse or a star or a cake, do you? No. What you see is a human male - that is your material reality. But imagine that you were a NN and that your material state was a response to a cake. How the fugg would you see 'cake' in yourself?
The problem of intentionality (aboutness) was discussed earlier in the conversation. Brain states cannot have built-in correspondence to external objects. That's fubar. External meaning has to be assigned to them, or they can have no correspondence with anything else whatsoever.
GrahamH wrote:
It isn't such a difficult concept, James.

It's an irrational concept.

GrahamH wrote:
When you were a child and you saw many things for the first time, you didn't need a priori knowledge of what they were to see them. Until you had seen them you could not recognise them for what they were. After you had seen them you could recognise them. Your brain grew connection patterns in response to sensory stimuli such that it would respond when that pattern of stimuli occurs again.

When I was a child I was aware that I was interacting with other entities external to myself. Your brain, however, is not allowed the luxury of knowing anything, other than its own internal states.
Let's just cut to the chase. It's impossible for any brain model to work without its internal states being recognised as responses to external events. That is, the brain needs to have a priori knowledge of 'the external' in order to interact with it. And THAT is why they are all fucked up.
GrahamH wrote:
Cite an example given by SoS or me that requires an a priori understanding of the world, and I'll show you how you have misunderstood the point.

What?! You've just been trying to explain 'space' in terms of objects and their relative locations!

GrahamH wrote:
Do you have any explanation for the correlations?

Correlations were discussed earlier. I explained quite clearly why correlation doesn't equate to cause. Do I really need to go through it all again? Every time I sit on the toilet, I do a number 2. Does this mean that the toilet caused me to empty my bowels?
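The purely statistical point can be shown with invented numbers: two signals can be almost perfectly correlated while neither causes the other, because both track a hidden third factor.

# Toy demonstration that strong correlation need not mean one variable causes the other:
# x and y never influence each other; both are driven by a hidden common factor z.
import numpy as np

rng = np.random.default_rng(3)

z = rng.normal(size=1000)              # a hidden third factor (the 'need to go', say)
x = z + 0.1 * rng.normal(size=1000)    # tracks z; never influenced by y
y = z + 0.1 * rng.normal(size=1000)    # tracks z; never influenced by x

print("corr(x, y) =", round(float(np.corrcoef(x, y)[0, 1]), 3))   # close to 1, yet neither causes the other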
GrahamH wrote:
Can you explain how the mind works? Can you account for perceptual illusions? Can you account for how thoughts are formed or where ideas come from?

I can explain the essence of everything. I don't profess to know how 'it' creates experience.

GrahamH wrote:
Denying causality doesn't cut it James!

I am not denying causality. I'm just telling you that correlation doesn't prove causality.

GrahamH wrote:
It is a retreat from the evidence because you don't like where the evidence leads.

No, it's a logical negation of the claim that correlation = causality. You don't have any proof of the brain causing anything.
Re: The problem of the Self
jamest wrote:
Remember, my claim is that the fundamental problem with models such as yours, is that they depend upon a priori knowledge of the world in order to interact with it. But brains cannot be endowed with a priori knowledge about the world... and the brain would have to assume the existence of said world, prior to assigning external meaning to its own internal states.

For the umpteenth+1 time, brains do not pop out of the womb as a bowl of formless NNs. They have a priori structure by genome. The brain doesn't have a clue about the world at this point, but it has everything it needs to find out.
Fail again.
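A toy version of that distinction, for illustration only (the 'environment feedback' function below is an invented stand-in for the consequences the world feeds back): the wiring and the learning rule are fixed in advance, the weights start at zero and carry no knowledge of the world, and whatever the unit ends up 'knowing' comes entirely from the stimuli and feedback it is exposed to.

# Fixed 'innate' structure (how many inputs feed the unit, and the learning rule),
# zero initial knowledge (weights start at 0), all content learned from experience.
import numpy as np

rng = np.random.default_rng(4)

n_inputs = 20
w = np.zeros(n_inputs)   # no a priori knowledge of the world
b = 0.0

def environment_feedback(x):
    # Stand-in for a regularity in the world the unit has to discover:
    # the 'correct' response is +1 when the average input is positive, else -1.
    return 1.0 if x.mean() > 0 else -1.0

for _ in range(2000):                            # experience
    x = rng.normal(size=n_inputs)
    target = environment_feedback(x)
    output = 1.0 if (w @ x + b) > 0 else -1.0
    if output != target:                         # perceptron rule: adjust only on mistakes
        w += target * x
        b += target

# After learning, the fixed wiring now embodies the regularity it was exposed to.
test = rng.normal(size=(500, n_inputs))
correct = np.mean([(1.0 if (w @ x + b) > 0 else -1.0) == environment_feedback(x) for x in test])
print("agreement with the environment after learning:", correct)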
Re: The problem of the Self
jamest wrote: I can explain the essence of everything.

Re: The problem of the Self
The brain "cannot know about any reality external to itself" except to the extent that the brain will respond variously to stimuli. The brain doesn't have some conscious property of knowing, it behaves as if it knows. Behaving as if you know amounts to knowing if it accounts for learning, inference and apparently purposeful action. At the level of this topic those things have been accounted for.jamest wrote:For the umpteenth time, your model only allows for explaining space in terms of different NNs - patterns between them. The brain cannot know about any reality external to itself and your model has to account for human behaviour with this in mind. As soon as you start talking about "patterns between objects" and "location patterns", your model reduces to one of the brain discerning space from patterns in external reality, as opposed to internal patterns.GrahamH wrote:We don't need a NN representing every bit of space. We only need to associate object patterns with location patterns.
Your objection is always the same and amounts to "But there is no homunculus in your model! How can a brain know anything without a knower inside it?" We understand your objection and have dismissed it for being irrelevant, having zero explanatory value, and already answered several times.
jamest wrote:
You must account for how the brain conceives of space inside itself!
That, is the problem.

Where is the problem? Location patterns and object patterns are learned by the brain. Relational patterns between object patterns and location patterns are learned by the brain.
Please spell out your objection.
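Mechanically, 'associate object patterns with location patterns' can be as simple as this toy Hebbian associative memory (made-up ±1 vectors, purely illustrative): object and location patterns are bound by an outer-product rule, and presenting an object pattern afterwards cues the location it was paired with.

# Toy associative memory: object patterns and location patterns are bound by
# a Hebbian outer-product rule; an object pattern then cues its learned location.
import numpy as np

rng = np.random.default_rng(5)
d = 256

def pattern():
    return rng.choice([-1.0, 1.0], size=d)

objects   = {"tree": pattern(), "bush": pattern(), "rock": pattern()}
locations = {"left": pattern(), "middle": pattern(), "right": pattern()}
pairs = [("tree", "left"), ("bush", "middle"), ("rock", "right")]

M = np.zeros((d, d))
for obj, loc in pairs:                       # learning: 'tree seen on the left', etc.
    M += np.outer(locations[loc], objects[obj])

def recall_location(obj_name):
    cue = M @ objects[obj_name]              # present the object pattern
    scores = {name: float(vec @ cue) for name, vec in locations.items()}
    return max(scores, key=scores.get)

for obj, loc in pairs:
    print(f"{obj}: learned location = {loc}, recalled = {recall_location(obj)}")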
jamest wrote:
Remember, my claim is that the fundamental problem with models such as yours, is that they depend upon a priori knowledge of the world in order to interact with it. But brains cannot be endowed with a priori knowledge about the world... and the brain would have to assume the existence of said world, prior to assigning external meaning to its own internal states.

Nonsense, James. No assumption is required; it is sufficient to live in the pattern of sense stimuli from the world, routed through circuits evolved to learn those patterns. The sensory feedback system guides the learning. That is why we are born not knowing and knowledge grows in us. Our brains don't start as a blank slate; evolution has shaped them to be able to learn the world (things like spotting face patterns).

jamest wrote:
So, stop talking about the brain conceiving of space in regard to patterns between 'objects' and their different 'locations'. Do it regarding NNs alone, or else acknowledge the problem.

Do you understand how neural networks function as object classifiers/recognisers? Do you comprehend that this is not a symbol manipulation task?
Perhaps your difficulty here is that you are accustomed to thinking about consciousness as something divorced from what it is conscious of. You go so far as to deny the world we are conscious of. Are you trying to make us adopt your quirky ideas of C? You don't get to do that James. We are talking about consciousness of a world that we are all conscious of. Denying the world is not countering our model. We can't explain how our model explains your model, because your model is absurd, entirely unsupported, a fiction that explains nothing about human experience.
jamest wrote:
NNs are material states. They do not have the exact form of the thing/phenomenon that they are responding to. That is, they do not mirror any aspect of the world. It is a fact of pure logic that for NNs to signify aspects of the world, external meaning has to be applied to them that has no bearing on their own material form. That is, in order to see the world through NNs, those NNs have to be symbolic of that world!

Don't you see the absurdity of what you are saying, James? If a system (brain, robot, whatever) can take on an impression of some object, such that thereafter it correctly identifies the presence of that pattern, it does "mirror an aspect of the world".
If your PC has a finger-print scanner then it has information, from the world, about your finger-print. Deny that all you like, but it will recognise your finger and reject mine, a functional response in the world.
"Ah, but the scanner doesn't even know it's a finger that supplies the data", you may say. You would be right (there is no knowing homunculus). Still, your finger has been learned and is recognised. Swiping it across the sensor does log you on to your computer. It doesn't need to know what fingers or users or computers are, and you don't need to know anything about how such a process is achieved. No human programmer knows what your fingerprint is like. It is pattern recognition learning and having an effect in the world.
No human knows what happens in his own brain, nor how his own mind works, because these are not things within consciousness. We recognise things. Most aspects of mind are variations on multi-layer pattern recognition. Pattern recognition is what NNs do.
On such simple functions vast complexity can be constructed. Perhaps even communication about fingers, people or minds.
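The fingerprint point, reduced to a toy (invented feature vectors, not how any real scanner works internally): a template is formed from a few noisy enrolment scans, and a later probe is accepted or rejected purely by its similarity to that learned pattern, with no 'knowledge' of fingers anywhere in the loop.

# Toy 'fingerprint' matcher: learn a template from noisy enrolment samples,
# then accept/reject new probes by similarity to the template alone.
import numpy as np

rng = np.random.default_rng(6)

def finger():
    return rng.normal(size=128)              # stand-in for a fingerprint's feature vector

owner, intruder = finger(), finger()

def scan(true_finger):
    return true_finger + 0.3 * rng.normal(size=128)   # each scan is a noisy reading

template = np.mean([scan(owner) for _ in range(5)], axis=0)   # enrolment

def similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def accept(probe, threshold=0.8):
    return similarity(probe, template) > threshold

print("owner's finger accepted:   ", accept(scan(owner)))      # expected: True
print("intruder's finger accepted:", accept(scan(intruder)))   # expected: False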
jamest wrote:
Look at yourself in the mirror. You don't see a horse or a star or a cake, do you? No. What you see is a human male - that is your material reality. But imagine that you were a NN and that your material state was a response to a cake. How the fugg would you see 'cake' in yourself?

WTF do you mean "see cake in yourself"? There is no "cake in yourself"; the cake is on the table. The appropriate response is to cut a slice and eat it. Recognise table; recognise cake; recognise hunger; recognise knife; recognise relative locations of hand, knife and cake; recognise hand moving toward cake; recognise hand grasping cake; recognise cake in mouth - chew! Recognise taste of cake - say "mmmm nice cake".
"Imagine that you were a NN" - more Cartesian thinking. Stop this "little man inside" stuff; it won't get you anywhere.
jamest wrote:
The problem of intentionality (aboutness) was discussed earlier in the conversation. Brain states cannot have built-in correspondence to external objects. That's fubar. External meaning has to be assigned to them, or they can have no correspondence with anything else whatsoever.

You have no clue about pattern recognition, do you, James?

jamest wrote:
It's an irrational concept.


jamest wrote:
When I was a child I was aware that I was interacting with other entities external to myself.

It is highly unlikely that when you were very young you learned your mother's face by "a priori knowledge of what entities external to yourself" are. Babies have to learn that stuff. They learn that mother's face is not like their own hand. They learn that one is always there and more directly controlled, and the other goes away, is harder to control and unpredictable. There is plenty of input there to recognise the patterns "part of me" and "not part of me". Babies certainly seem to lack a vast amount of a priori knowledge.
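A toy version of that learning signal (made-up numbers, obviously not a model of infancy): one sensory channel follows the 'baby's' own motor commands and another does not, and merely tracking that contingency is enough to sort them into "part of me" and "not part of me".

# Toy contingency learning: classify a sensory channel as 'part of me' if it
# reliably follows self-generated motor commands, 'not part of me' otherwise.
import numpy as np

rng = np.random.default_rng(7)

motor  = rng.normal(size=2000)                      # the baby's own motor commands
hand   = motor + 0.2 * rng.normal(size=2000)        # moves when commanded to move
mother = rng.normal(size=2000)                      # does its own unpredictable thing

def classify(channel):
    contingency = abs(np.corrcoef(motor, channel)[0, 1])
    return "part of me" if contingency > 0.5 else "not part of me"

print("hand:  ", classify(hand))
print("mother:", classify(mother))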
jamest wrote:
Your brain, however, is not allowed the luxury of knowing anything, other than its own internal states.
Let's just cut to the chase. It's impossible for any brain model to work without its internal states being recognised as responses to external events. That is, the brain needs to have a priori knowledge of 'the external' in order to interact with it. And THAT is why they are all fucked up.

Why would a brain need to recognise (all) its responses to external events as responses to external events? They are responses to external events (at least you acknowledge that at last). You really can't set aside the Cartesian Theatre even for a moment, can you?
jamest wrote:
What?! You've just been trying to explain 'space' in terms of objects and their relative locations!
Correlations were discussed earlier. I explained quite clearly why correlation doesn't equate to cause. Do I really need to go through it all again? Every time I sit on the toilet, I do a number 2. Does this mean that the toilet caused me to empty my bowels?

You sit on the toilet because your brain responds to signals from your bowels. Your bowels move at that time because your brain recognises that sitting on the toilet is the appropriate occasion for bowel movements. Do you think you can shit whenever you choose to?
Try denying that cause and effect, but have a clean pair of trousers handy.
A simple "no" will do.jamest wrote:I can explain the essence of everything. I don't profess to know how 'it' creates experience.Can you explain how the mind works? Can you account for perceptual illusions? Can you account for how thoughts are formed or where ideas come from?
Which isn't saying much, is it? What you mean to imply is that empirical research is utter folly. Judging by the fruits of such research that opinion is worthless gibbering.jamest wrote:I am not denying causality. I'm just telling you that correlation doesn't prove causality.Denying causality doesn't cut it James!
"you can't prove causality!" So what?jamest wrote:No, it's a logical negation of the claim that correlation = causality. You don't have any proof of the brain causing anything.It is a retreat from the evidence because you don't like where the evidence leads.
You talk about a cause of experience! "You can't prove causality!"
You have given no "logical negation" of anything, James.
Re: The problem of the Self
GrahamH wrote:
Where is the problem? Location patterns and object patterns are learned by the brain. Relational patterns between object patterns and location patterns are learned by the brain.
Please spell out your objection.

The brain only has access to its own internal states. My objection is that you are trying to explain the concept of space from patterns discerned amidst objects and the relative locations of those objects; thus you are trying to explain 'space' in terms of its relationship with external entities. Clearly, you are transcending the reality of the brain to do this. That is, you are using external relationships between objects to explain how the brain discerns 'space'.
But what you have to do is explain how the brain discerns 'space' from the patterns inherent within its own internal states and [possibly] the relative locations, within the brain, of these patterns. That's all the brain has to go on, so that's all your model can do too, in attempting to explain how the brain discerns space.
GrahamH wrote:
Nonsense, James. No assumption is required; it is sufficient to live in the pattern of sense stimuli from the world, routed through circuits evolved to learn those patterns. The sensory feedback system guides the learning. That is why we are born not knowing and knowledge grows in us. Our brains don't start as a blank slate; evolution has shaped them to be able to learn the world (things like spotting face patterns).

The issue is not one of 'learning' or 'processing' or 'computing'. I'm agreed that the brain can recognise 'patterns' or 'order'. The issue is one of what is available to be processed. And in your model, all that is available to be processed are the brain's own internal states. The brain can only relate NNs to one another. Therefore, any 'conclusions' that the brain makes could only refer to itself!
This is a big problem. The brain cannot relate its own internal states to the world unless it knows that its internal states are representative of external phenomena. A simple matter of logic. No matter how complex the brain becomes, it can only ever process its own internal states. And to process them in a way that relates its internal states to external phenomena necessarily requires a priori knowledge of the world's existence. Without such knowledge, the brain is forever limited to processing and learning about itself alone!
This problem can actually be associated with what Kant said about not being able to transcend the phenomenal world, except that in this case I am saying that the brain cannot transcend itself. Yet, when the brain evaluates its own internal states to be representative of a world external to itself, it does transcend itself. And how, other than with a priori knowledge of that world?
As I keep saying to you, it's a logical problem demanding a logical solution.
GrahamH wrote:
Do you understand how neural networks function as object classifiers/recognisers? Do you comprehend that this is not a symbol manipulation task?

You're still not addressing the logical problem. You're just telling me that there are NNs that correlate with external entities. Actually, to be exact, you can only say that there are NNs that correlate with experienced entities - because, of course, we cannot even know if the external world exists. Therefore, to be sure, any confirmed correlations must be about and with the internally conceived world.

GrahamH wrote:
Perhaps your difficulty here is that you are accustomed to thinking about consciousness as something divorced from what it is conscious of. You go so far as to deny the world we are conscious of. Are you trying to make us adopt your quirky ideas of C?

What I'm trying to do is show you why materialistic brain models of human behaviour cannot work unless the brain has a priori knowledge of a realm beyond itself. That is, I'm alerting you to a massively significant problem of logic that renders all such models ineffective.

GrahamH wrote:
Denying the world is not countering our model.

I am not denying the world. I'm denying that the brain could know about it. I'm denying that the brain can effectively interact with the world whilst knowing zilch about that world.