jamest wrote: What is true is that I'm doing my own thinking in assessing this issue.
And your thinking in assessing this issue is, shall we say, narcissistic. Your inability to understand the scientific model doesn't mean, perforce, that the scientific model is wrong. It just means that you aren't a scientist.
The trick for a scientist is not to show that a scientific model is "wrong", but to show that another model is "better", i.e., that it explains more data. This is done statistically, by citing the range of data that the model successfully recaptures.
There's a scientific model of a tree, there's a scientific model of the brain, and there's actually a model of how the brain interacts with the tree. Trying to get at the semantic or metaphysical meaning of "tree" is something of a distraction, because the scientific model of the tree and its relation to the brain has little to do with the way our symbols, such as a picture of a tree or the printed word, act as semaphores.
For a philoslopper, the problem is more difficult, because you don't even believe "better" can be measured when an "explanation" is under consideration. You seem to be suggesting that there is an "argument" that invalidates all of science, i.e., that the metaphysical assumptions science makes are wrong. But there is no basis on which to choose a metaphysical model of reality.
All there is are models, and the scientific model of brain function actually explains quite a lot. You should try it first, before being so dismissive. Then, once you appreciate it, you'll still have the chance to present your own model in an orderly fashion, and we can all see what it "explains". After that you can go into some wibbling about semantics and see whether anything remains to be explained that visual interaction with semaphores needs to clear up. But please wait to do that, because there's even some brain research on the organism's capacity to detect and translate semaphores. There's a lot of brain research, James. Either learn something about it, or admit you have no desire to.
jamest wrote: I've highlighted another problem that I don't think has been exposed before: the necessity of the brain to make assumptions about an 'external reality' prior to assigning external meaning to its own internal NNs.
Try to focus for a moment or two, James. We're discussing a model of how the brain interacts with empirical objects, not how the brain interacts with concepts. The concepts themselves are brain activity, and how the brain interacts with empirical sensory information is part of a model. The model does not traffic in "external reality". Leave that out for the moment. We are just discussing an interaction model, not the ontology of the model's components. That's why science progresses and philosophy obfuscates, as Atkins would say.