COMPLEX TIME: Adaptation, Aging, & Arrow of Time

Cognitive Regime Shift II - When/Why/How the Brain Breaks

Notes by Nikolaus Kriegeskorte (Columbia University) for Cognitive Regime Shift II - When/Why/How the Brain Breaks

Post-meeting Reflection

Impact of the meeting on my research

The main thing I took away from the meeting was that robustness to damage is a property of neural networks that is important from a theoretical as well as an applied perspective. I'm now motivated to revisit the old method of lesioning neural network models and perturbing them with noise, so as to better understand the degree to which they are sensitive to local damage and to displacement of their dynamic trajectories.
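
As a concrete starting point, here is a minimal sketch of such a lesion-and-noise analysis, assuming a toy PyTorch model and random data; the architecture, lesion fractions, and noise levels are illustrative choices, not anything specified at the meeting, and in practice one would apply the same measurements to a trained, task-performing model.

```python
# Minimal sketch of lesioning and noise-perturbing a neural network model.
# The model, data, lesion fractions, and noise levels are illustrative assumptions.
import copy

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in: a small MLP on random data. In practice, substitute a trained model.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x = torch.randn(256, 20)
y = torch.randint(0, 2, (256,))

def accuracy(m):
    """Classification accuracy of model m on the toy data."""
    with torch.no_grad():
        return (m(x).argmax(dim=1) == y).float().mean().item()

def lesion(m, fraction):
    """Silence a random fraction of hidden units by zeroing their incoming weights."""
    lesioned = copy.deepcopy(m)
    n_units = lesioned[0].out_features
    idx = torch.randperm(n_units)[: int(fraction * n_units)]
    with torch.no_grad():
        lesioned[0].weight[idx] = 0.0
        lesioned[0].bias[idx] = 0.0
    return lesioned

def perturb(m, sigma):
    """Displace the network in weight space by adding Gaussian noise to every parameter."""
    noisy = copy.deepcopy(m)
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(sigma * torch.randn_like(p))
    return noisy

print("intact accuracy:", accuracy(model))
for f in (0.1, 0.3, 0.5):
    print(f"lesioned {f:.0%} of hidden units:", accuracy(lesion(model, f)))
for s in (0.05, 0.2):
    print(f"weight noise sigma={s}:", accuracy(perturb(model, s)))
```

Sweeping the lesion fraction and the noise level, and plotting the resulting performance curves, would give a simple operational measure of sensitivity to local damage and to displacement in weight or state space.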

I would also like to explore how robustness to damage and noise relates to robustness to adversarial attacks and to robust generalization in novel situations (i.e. changes to the stimuli or, more generally, to the behavior of the environment). Before this meeting, I would have conceptualized these different forms of robustness as largely unrelated; I would have thought that the commonality suggested by the fact that they can all be considered forms of robustness is largely misleading.

Now I think there may be deep links between robustness to damage, internal noise, changes of the body, and changes of the stimuli and environmental behavior.

Some half-formed hypotheses:

  • Neural noise may help a neural network learn solutions that are robust in a variety of ways.
  • Similarly, changes of environmental behavior (including stimulus statistics) during learning may help a neural network learn solutions that are robust in multiple ways.
  • Predictive completion of partial neural representations (in symmetric, i.e. energy-based, or asymmetric networks) may provide robustness through redundant representation, as well as enabling unsupervised learning through self-supervision (see the sketch below).
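
To make the last of these hypotheses concrete, here is a minimal sketch of predictive completion in a symmetric (Hopfield-style, energy-based) network, in which stored patterns are recovered from corrupted partial cues; the number of units, number of stored patterns, and corruption level are arbitrary illustrative assumptions.

```python
# Minimal sketch of predictive completion in a symmetric (energy-based) network.
# Network size, number of stored patterns, and corruption level are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_patterns = 200, 5

# Store random +/-1 patterns with the Hebbian outer-product rule
# (symmetric weights, so the network has an energy function).
patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n_units))
W = (patterns.T @ patterns) / n_units
np.fill_diagonal(W, 0.0)  # no self-connections

def complete(cue, n_steps=20):
    """Iteratively update the state to descend the energy and fill in the cue."""
    s = cue.copy()
    for _ in range(n_steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0  # break ties
    return s

# Corrupt one stored pattern by flipping 30% of its units, then let the network complete it.
target = patterns[0]
cue = target.copy()
flip = rng.choice(n_units, size=int(0.3 * n_units), replace=False)
cue[flip] *= -1.0

recovered = complete(cue)
print("overlap of corrupted cue with target:  ", (cue @ target) / n_units)
print("overlap of completed state with target:", (recovered @ target) / n_units)
```

The same completion dynamics can be read as a form of self-supervision: the network predicts the missing or corrupted part of its own representation from the remaining part.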

I'm quite keen to explore these ideas further.

Important open questions

• How can states close to criticality serve computation in neural networks?

• Might robustness to damage result from the same mechanism that enables robust generalization to new domains? (And what is that mechanism?)

Notes on the meeting

The discussion, though inspiring, was a little more wide-ranging than is optimal for making concrete progress.

It might be good in future meetings to focus more narrowly, and to define the topics of sessions and individual presentations more clearly and specifically.
