It's #ReadingMonday, and today I found an interesting paper on #predictiveprocessing models of the #brain
- "Scientists Invent New Hypotheses, Do Brains?"
- by Nir Fresco and Lotem Elber-Dorozko - https://onlinelibrary.wiley.com/doi/10.1111/cogs.13400
The paper reveals some of the processes in a modeler's mind that guide how we build models, in particular predictive processing (PP) models.
Bayes' theorem is one thing, but the most interesting part, when comparing PP models, is how new hypotheses are challenged as they enter the generative model. In other words, what matters is how well the priors you include in your model fit the experiments.
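To make that concrete, here is a minimal sketch (my own toy example, not from the paper) of scoring two candidate priors against the same observation via Bayes' rule; the marginal likelihood in the denominator is one standard measure of how well a prior fits the data. All numbers are invented for illustration.

```python
# Toy hypothesis space: a stimulus is either "present" or "absent".
# Likelihood of observing a given neural response under each hypothesis
# (invented numbers):
likelihood = {"present": 0.8, "absent": 0.2}

def posterior(prior_present):
    """Posterior P(present | response) and marginal likelihood under a prior."""
    evidence = (likelihood["present"] * prior_present
                + likelihood["absent"] * (1 - prior_present))
    post = likelihood["present"] * prior_present / evidence
    return post, evidence

# Compare an informative top-down prior (0.7) with a flat prior (0.5):
for prior in (0.7, 0.5):
    post, evidence = posterior(prior)
    print(f"prior={prior}: posterior={post:.3f}, evidence={evidence:.3f}")
```

The prior with the higher evidence is the one that "fits the experiment" better in this toy setup.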
Simplifying, they identify two classes of priors, roughly bottom-up and top-down, each with its own pros and cons: "Cognitive-level models do not specify how they can be implemented in the brain and how the learning domains in these models can be learned. In contrast, neurobiological architecture-inspired models, although using neuronal-like architecture with learning starting from random weights, cannot account for important aspects of human cognition".
One underlying "meta-prior" is the existence of a hierarchy, which implies both a feed-forward and a feed-back pathway; where the two meet, "bottom-up or forward prediction errors" travel one way and "top-down or backward predictions" travel the other. They conclude by suggesting that incorporating priors of both types may be a way to move the field forward.
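That exchange of top-down predictions and bottom-up errors can be sketched in a few lines. This is my own minimal illustration of the generic predictive-coding update loop, not the paper's model: a higher level predicts the input, the mismatch flows back up as a prediction error, and the belief is nudged to reduce it.

```python
def predictive_coding_step(belief, observation, lr=0.1):
    """One update: send a prediction down, receive an error up, adjust."""
    prediction = belief                   # top-down / backward prediction
    error = observation - prediction      # bottom-up / forward prediction error
    return belief + lr * error            # revise belief to shrink the error

belief = 0.0
for _ in range(50):
    belief = predictive_coding_step(belief, observation=1.0)
print(round(belief, 3))  # belief has converged near the observed value
```

Repeating the step drives the prediction error toward zero, which is the core dynamic a hierarchical PP model stacks across levels.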