Cognitive Regime Shift II - When/why/how the Brain Breaks
From Complex Time
Notes by user Artemy Kolchinsky (SFI) for Cognitive Regime Shift II - When/why/how the Brain Breaks
Two scattered thoughts after the meeting:
- I very much enjoyed Jacopo's brief summary of noise-driven critical transitions and critical slowing down (that is, noise-driven escape across a barrier, from one metastable state to another). Jacopo also described a view of aging as the gradual lowering of the barrier, which results in a gradual increase in the probability of crossing it. To me, this is the only real contender for a universal theory of aging and critical transitions in complex systems that is mathematical and predictive. Unfortunately, it is not at all clear that this theory works well for brain aging or breakdown. In particular, it is not clear that its predictions (about increased variance and/or increased autocorrelation timescales) are what actually occur in brain aging and decline. Also, it describes aging as an increase in the instantaneous probability of breakdown (total loss of function). This is quite different from how many people see aging: as a gradual decline of current function. It seems important to make this distinction.
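As a toy illustration (my own sketch, not from Jacopo's talk), the early-warning predictions can be seen in the linearization of dynamics around a metastable state, i.e. an Ornstein-Uhlenbeck process. As the well flattens (a proxy for the barrier lowering), the restoring-force parameter shrinks, and both the variance and the lag-1 autocorrelation of fluctuations grow:

```python
import numpy as np

def simulate_ou(lam, n_steps=100_000, dt=0.01, sigma=0.2, seed=0):
    """Euler-Maruyama simulation of dx = -lam*x dt + sigma dW.

    This is the linearization of dynamics around a metastable state:
    lam is the curvature of the potential well (strength of the
    restoring force). A flattening well corresponds to lam -> 0.
    """
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = 0.0
    sqrt_dt = np.sqrt(dt)
    for t in range(1, n_steps):
        x[t] = x[t - 1] - lam * x[t - 1] * dt \
               + sigma * sqrt_dt * rng.standard_normal()
    return x

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a time series."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# As lam decreases (well flattens), stationary variance sigma^2/(2*lam)
# grows, and the lag-1 autocorrelation exp(-lam*dt) approaches 1 --
# the "critical slowing down" early-warning signals.
for lam in (2.0, 1.0, 0.25):
    x = simulate_ou(lam)
    print(f"lam={lam}: var={x.var():.4f}, "
          f"lag-1 autocorr={lag1_autocorr(x):.4f}")
```

The open empirical question in the bullet above is precisely whether these two signatures (rising variance, rising autocorrelation timescale) actually show up in aging or declining brains.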
- I very much enjoyed Niko's talk, showing that the representations uncovered by deep nets are correlated with the representations used in the human visual system. However, I am sympathetic to Ross's critique that these correlations do not really provide us with a theory of how, e.g., the human visual system functions. Rather, I see the implication of this work (as well as some other deep learning work) for our understanding of cognition as the following lesson: simple distributed architectures plus learning by gradient descent (or some other very simple learning rule) can be shockingly effective. In this sense, these theories are similar to previous "emergentist" theories, such as Darwin's or Adam Smith's, in which a simple iterated algorithm produces amazing outcomes... but while the algorithm is easy to understand, the outcomes can be incredibly intricate and complex. This suggests that, just as evolutionary biologists must study environments and niches to understand the adaptations uncovered by natural selection, we may have to study the structure of natural environments to understand the representations and mechanisms uncovered by simple learning rules.