These are separate issues. Your definition is a version of the same content-bearing definition that all the rest of us use.
Look at your language: cognitive terms such as model, signal, error, reflects, estimate. These all imply content. A red light may act as a stop signal, or indeed as signalling the port side of a ship; the same red light, but different signals, because conveying different content. The identical spike train may occur at different points A and B in the brain, or indeed at the same place at different times, signalling a prediction with one occurrence and signalling an error with the other. The difference is that the content ascribed is different. We may slide between one sense and the other when we use the word brain, but care is needed.
The rest of us are happy to think of predictions as content-ascribing; neuroscientists and roboticists use similar ideas all the time in their models, often with no problems at all for those pursuits. As a sometime brain-evolver and some sort of engineer, I have been happy to talk about parts of brains trading in contents — though I am careful to clarify that I am using this as part of a homuncular metaphor that is often useful and sometimes seriously harmful.
Ask a neuroscientist or a roboticist whether a spike train or a pattern of activation is a signal conveying content and chances are they will say: yes, in my model it is conveying that specific content, which may, e.g., be a prediction here or an error there. They need not worry about Horace [the homunculus] too much. You are right that ultimately the notion of prediction requires that the predictions have contents. What I do think, however, is that we can see from small-scale simulated systems, such as the Rao and Ballard model of early visual processing, that a certain pattern of exchange of top-down and bottom-up signals can implement such a contentful-prediction-involving process without magic.
We can also see what such a process delivers — in that case, the learning of extra-classical receptive fields that track typical statistics in the domain. We can also see how such learning enables a system to deal with noisy data, fill in gaps, etc. If we then have cause to think (1) that brains could, and perhaps even do, display that pattern, and (2) that the kind of functionality delivered would be adaptively valuable, it becomes worth asking the question of nature: are brains, at least sometimes, issuing cascades of top-down prediction that play that kind of functional role?
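The Rao and Ballard pattern of exchange can be made concrete in a few lines. Here is a deliberately toy, one-unit sketch (the function names, learning rates, and scalar simplification are mine, not the original model's): a top-down prediction meets incoming data, the residual is passed back up as an error signal, and both a fast internal estimate and a slow generative weight are nudged so as to shrink future error.

```python
# Minimal sketch of a Rao & Ballard-style predictive coding loop.
# Toy 1-D version: names, learning rates, and the single-unit
# simplification are illustrative assumptions, not the original model.

def predictive_coding_step(x, r, w, lr_r=0.1, lr_w=0.01):
    """One inference/learning step: the top-down signal is the
    prediction w*r; the bottom-up signal is the residual error."""
    prediction = w * r          # top-down: what the layer expects
    error = x - prediction      # bottom-up: what it got wrong
    r = r + lr_r * w * error    # fast: revise the internal estimate
    w = w + lr_w * r * error    # slow: learn the generative weight
    return r, w, error

# Drive it with inputs that are (noisily) a constant hidden cause.
r, w = 0.5, 0.5
for x in [2.0, 1.9, 2.1, 2.0] * 50:
    r, w, error = predictive_coding_step(x, r, w)

print(abs(error) < 0.2)   # residual error shrinks as predictions improve
print(abs(w * r - 2.0) < 0.3)  # the learned prediction tracks the input
```

Even this cartoon shows the two ingredients the text appeals to: content-like top-down predictions and bottom-up error signals, implemented with nothing but arithmetic, no magic and no homunculus.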
One of the goals of those experiments is to determine what kinds of content are indeed being computed, and where, and when. Do you have an alternative route to suggest, or do you accept that this cartoon is an accurate picture of your approach? But notice that, on my view, a real flow of content-bearing signals does not require that the system, or any part of it, understand those signals as such — so there is no homuncular fallacy. And so long as you are merely expressing Faith in the Existence of the Unseen Criterion, it sounds more like religion than philosophy to me.
One caveat: here, my exchanges with Inman give me pause. For Inman — perhaps unlike Jelle and Julian? — the worry about Horace [the homunculus] may not be so easily set aside.
Certainly, I never meant to offer a philosophical theory of content. What we do need, and have, are useful systemic understandings that treat some aspects of downwards influence as predicting the states of certain registers, and some aspects of upwards influence as signalling error with respect to those predictions. Insofar as those models prove useful, and can be mapped onto neural activity, I am happy, and I think warranted, to speak of predictions and prediction-error signals in the brain.
That would be homuncular indeed. I feel I am pointing at the elephant in the room, and you are so busy looking at my finger that you are blind to the elephant. In Dennettian terms, they should always be able to discharge the homunculus. They usually, but not always, do this sensibly, aware of the limitations of such an approach — ultimately judged by their different tests-for-success. The cartoon, I hope, makes it clear how stupid that would be. This implies that — for the purposes of task A — it is impermissible to posit as part of the explanation some homunculus that needs discharging; that would be begging the question.
Andy insists on a theory with a notion of prediction as a central plank, while Inman claims that either part of, or the whole of, the brain is thereby being treated as a cognitive homunculus doing the predicting.
I really get the impression, when you suggest uncertainty as to what issues divide us, that you do not appreciate the depth of the disagreement! It is the difference between a pre-Copernican and a post-Copernican conception of cognition. I think this requirement is visibly met by PP. There remains, I agree, an empirical question about whether the brain works in anything like the way any of those systems do.
Work by Bastos et al. and others is addressing that issue. This means that PP, whether right or wrong, is also allowing us to ask some pointed questions of the brain. Yes, you are right that at the centre of our differences are different interpretations of what it might mean for a mechanism M (e.g. a brain) to have a predictive component. I have challenged you to provide such an interpretation, in the form of an operational definition or operational test that decides which of mechanisms M1, M2, M3… are or are not predictive in your sense, and you have not provided one.
You seem to equate my view with somehow believing that brains work by magic — not so! For instance, an example of prediction from early cybernetics would be anti-aircraft fire: roughly, predict where the plane will be in 5 seconds' time and aim the gun to project a shot to the right place at the right time.
I reckon that, with some assistance, I could design something like that; it would be a mechanism fully explicable with no magic. But it would contain predictive parts, in the sense that I had designed it that way, and the labels on my sketch plans would be clear evidence.
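The kind of designed predictor described here could be as simple as linear extrapolation. A toy sketch, where the function name, the constant-velocity assumption, and the numbers are my own illustrative choices rather than anything from actual fire-control engineering:

```python
# A toy version of the cybernetic anti-aircraft predictor: linearly
# extrapolate the target's track 5 seconds ahead and aim there.
# Constant-velocity assumption and all names are illustrative only.

def predict_position(p_now, p_prev, dt, horizon=5.0):
    """Constant-velocity extrapolation: estimate velocity from the
    last two observations, then project it `horizon` seconds ahead."""
    vx = (p_now[0] - p_prev[0]) / dt
    vy = (p_now[1] - p_prev[1]) / dt
    return (p_now[0] + vx * horizon, p_now[1] + vy * horizon)

# Plane observed at t=0 and t=1, moving 100 m/s east and 10 m/s up.
aim_point = predict_position((100.0, 10.0), (0.0, 0.0), dt=1.0)
print(aim_point)  # (600.0, 60.0): where to aim for a 5 s shell flight
```

Nothing here is mysterious: the "predictive part" is just two subtractions and two multiplications, and only the designer's labels mark it out as a prediction.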
Now actually, as a onetime evolutionary roboticist, I might have tried to evolve the mechanism rather than design it, in which case there would be no such labels; but still no appeal to magic or ectoplasm, just physics. Your claim — as far as I can see — is that you can take three mechanisms (my designed anti-aircraft gun, my evolved anti-aircraft gun, and the lizard brain, say), all without labels, and categorise each as either containing or not containing a predictive component — without appeal to any homunculi.
But my primary issue here is philosophical, not pragmatic. A test that says anything and everything contains predictions is of course vacuous — I suspect your difficulties may lie at that end of the scale. Prediction would just be one among many roles that mental imagery takes on.
No need for sub-personal producers and consumers, and definitely no need for symbol-shuffling. I think the neural re-use scenarios are a very good fit with the PP perspective. Given some new task, existing prediction machinery adapted to another domain might prove the best early means of reducing organism-salient prediction error. One thing I have been wanting to ask you about is the relationship between predictive coding and Bayesian accounts of how the brain works. In the literature, one can get the impression that the two ideas amount to the same thing.
Prediction as hallucination?
I am not sure whether you will touch on this issue in future posts. If you do so here, or in the book, I look forward to reading your take on it. If the two approaches can come apart, then one can also put pressure on the idea of a single, overarching brain mechanism. It could be that the brain is, in some sense, in the general business of reducing error, but it might do so by using different tricks in different domains.

Wonderful to hear from you. Bayesian accounts describe an optimal way to crunch together prior knowledge and new evidence. It is well known that brains cannot, in most realistic cases, implement full-blooded Bayesian inference.
Still, there are various approximations available, and perhaps the brain implements one or more of them. The upwards and downwards message-passing regime posited by PP would be an apt enough vehicle for some such approximation. However, all that leaves untouched what I see as the major challenge. This is where unsupervised multi-level prediction-driven learning does special work. That puts an importantly different spin on the story.
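The Gaussian case makes that prior/evidence "crunch" concrete, and shows why a PP-style message-passing regime is an apt vehicle for it: the Bayes-optimal posterior mean is exactly "prediction plus precision-weighted prediction error". A small sketch (the numbers and function names are mine, chosen for illustration):

```python
# Exact Bayesian fusion of two Gaussian estimates, written two ways.
# Toy example: all names and numbers are illustrative assumptions.

def bayes_posterior_mean(prior_mean, prior_prec, obs, obs_prec):
    """Textbook form: precision-weighted average of prior and evidence."""
    return (prior_prec * prior_mean + obs_prec * obs) / (prior_prec + obs_prec)

def error_correction_update(prior_mean, prior_prec, obs, obs_prec):
    """Same answer, PP-style: shift the prediction by a
    precision-weighted prediction error."""
    gain = obs_prec / (prior_prec + obs_prec)
    return prior_mean + gain * (obs - prior_mean)

a = bayes_posterior_mean(10.0, 1.0, 16.0, 2.0)
b = error_correction_update(10.0, 1.0, 16.0, 2.0)
print(a, b)  # both ≈ 14.0: the two formulations coincide
```

In this simple conjugate case the error-correction scheme is not an approximation at all but an exact re-expression; the approximations become necessary once the generative models are nonlinear and multi-level.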
The upshot is that the value of the mapping from familiar Bayesian stories onto these dynamical, self-organizing, action-centric accounts is unclear to me, and more on some days than others!

Hi Andy and Inman, thanks for these interesting posts. Why not also take into account the fact that predictions and actions do not exist by themselves, that they are not for free? Any agent (animal, human, robot) predicts and acts for some given reasons related to its nature.
Animals predict and act to stay alive. Humans do it to be happy. And artificial agents do it in conformance with what they have been designed and programmed for. And this can be worded in terms of internal constraint satisfaction. Humans: look for happiness, limit anxiety, valorize ego, … Artificial agents: as designed and programmed for. The constraints are intrinsic to the agent for living ones, and derived from designers for artificial agents. Agents generate meanings when receiving information that has a connection with the constraints they are submitted to.
The generated meaning is used by the agent to implement an action satisfying the constraint. Meaning generation looks to me to have a place when addressing prediction, action and embodiment. Would you agree?
All the best, Christophe.

This sounds right to me.