Santa Fe Institute Collaboration Platform

COMPLEX TIME: Adaptation, Aging, & Arrow of Time

Get Involved!
Contact: Caitlin Lorraine McShea, Program Manager,

Cognitive Regime Shift II - When/why/how the Brain Breaks

Revision as of 20:47, May 21, 2020 by AmyPChen (talk | contribs)


Category: Application Area
Application Area: Aging Brain

Date/Time: November 12, 2019 - November 13, 2019

Location: Santa Fe Institute



  • John Krakauer (Johns Hopkins Univ./SFI)

  • Steven Petersen (Washington Univ.-St. Louis)

    Tuesday, November 12, 2019
    8:15 am - 8:30 am Day 1 Shuttle Departing Hotel Santa Fe (at lobby) to SFI
    8:30 am - 9:00 am Day 1 Continental Breakfast
    9:00 am - 9:30 am Introductory Remarks - David Krakauer (SFI), Steven Petersen (Washington Univ.-St. Louis), John Krakauer (Johns Hopkins Univ./SFI)
    9:30 am - 10:30 am Collective Computation and Critical Transitions - David Krakauer (SFI)
    10:30 am - 11:30 am Robustness of Brain Function - Nihat Ay (Max Planck Institute/SFI)
    11:30 am - 12:30 pm Task-performing neural network models enable us to test theories of brain computation with brain and behavioral data - Nikolaus Kriegeskorte (Columbia Univ.)
    12:30 pm - 1:30 pm Day 1 Lunch
    1:30 pm - 4:30 pm Round Table Discussion 1: The nature of compensation and cognitive reserves

    Each round table discussion will start with self-introductions of participants listed below. The self-introductions should include how the questions participants proposed prior to the meeting (see p.3-5) map onto the round table topic.

    Nihat Ay (Max Planck/SFI)
    Roberto Cabeza (Duke Univ.)
    Randy McIntosh (Univ. Toronto)
    John Krakauer (Johns Hopkins/SFI)
    4:30 pm - 5:00 pm Day 1 wiki platform work time
    5:15 pm Day 1 Shuttle Departing SFI to Hotel Santa Fe
    7:30 pm (Optional) SFI Community Lecture at the Lensic Performing Arts Center by Melanie Mitchell: Artificial Intelligence: A Guide for Thinking Humans

    Note that Melanie will be signing her new book with the same title at 6:15 - 7:15 PM in the Lensic lobby; the lecture can also be streamed live via SFI's YouTube page and the SFI Twitter page.

    Wednesday, November 13, 2019
    8:30 am - 9:00 am Day 2 Continental Breakfast
    9:00 am - 9:30 am Recap from Day 1
    9:30 am - 12:30 pm Round Table Discussion 2: The multiple scales of damage – from cells to networks

    Each round table discussion will start with self-introductions of participants listed below. The self-introductions should include how the questions participants proposed prior to the meeting (see p.3-5) map onto the round table topic.

    Sidney Redner (SFI)
    Steve Petersen (Washington Univ.-St. Louis)
    Jacopo Grilli (ICTP)
    Richard Frackowiak (Ecole Polytech)
    Dietmar Plenz (NIH)
    Jack Gallant (UC Berkeley)
    Artemy Kolchinsky (SFI)
    12:30 pm - 1:30 pm Day 2 Lunch
    1:30 pm - 4:30 pm Round Table Discussion 3: Models for transforming circuits (neural) into tasks (psychology)

    Each round table discussion will start with self-introductions of participants listed below. The self-introductions should include how the questions participants proposed prior to the meeting (see p.3-5) map onto the round table topic.

    Russ Poldrack (Stanford Univ.)
    Viktor Jirsa (Aix-Marseille Univ.)
    Caterina Gratton (Northwestern Univ.)
    Paul Garcia (Columbia Univ.)
    Nikolaus Kriegeskorte (Columbia Univ.)
    David Krakauer (SFI)
    Ehren Newman (Indiana Univ.)
    Susan Fitzpatrick (JSMF)
    4:30 pm - 5:00 pm Day 2 wiki platform work time
    5:15 pm Cocktail
    6:00 pm - 7:30 pm Group dinner
    7:30 pm Day 2 Shuttle Departing SFI to Hotel Santa Fe


    Meeting Synopsis

    In this second working group on the brain we shall build on some of the foundational discussions raised in the first meeting. These included significant debate around the merits of correlation vs. causation, indicators or indices of the loss of function, and the relationship among levels required to explain failure - from the genetic and cellular to the cognitive and behavioral (including states such as sleep and anesthesia). Mechanisms of loss of function discussed will range from cell to network drop-out. We shall return to some of the key questions that motivated the first meeting with a better sense of the limitations of data sets and tools of analysis. This includes the synthesis and integration of neurology with several areas of complexity science, including adaptation and robustness in system aging, early network-based indicators for risk factors, the application of criticality and related tipping-point concepts to regime shifts, the measurement of long-range order and disorder across the brain, and methods for analyzing collective dynamics in the aging and diseased brain.

    Additional Meeting Information

    We will model the format of this WG loosely after the successful Dahlem Konferenzen. In preparation, we ask everyone to please (1) nominate one or two reference(s) bearing on the WG synopsis (with a note on why you chose each), and (2) come up with one or two question(s) that you would like to discuss with the group at the meeting. 


    • David Krakauer: What is the connection of physiological robustness in the nervous system (why brains do not break) to the information-processing, computational, or functional properties of the brain? In other words, are diseases of the brain deficits of information processing, or general injuries shared across any densely connected tissue or organ?
    • Russ Poldrack: There are currently two very different approaches to understanding brain networks, which have very different implications for the impact of damage to the network. One approach, which arises from computational neuroscience, has focused on the nature of the computations that are performed by specific brain circuits or regions. This approach is well exemplified by the recent work from Yamins, DiCarlo, and others that has used task-driven neural network models to predict neuronal signals. A second approach, which falls under the blanket term of “network neuroscience”, has applied generic methods for understanding networks and complex systems. Whereas the former approach focuses on the differences in the kinds of computations that are performed by different networks, the latter largely treats the function of the individual network elements as interchangeable - for example, graph theoretic methods that characterize function in terms of concepts such as path length generally do not care which specific nodes fall on the potential paths. Understanding the effects of damage on neural computation will almost certainly require an understanding of how to integrate these two perspectives, and the people at this meeting are well placed to think about this issue.
    • Richard Frackowiak: (1) For clinicians the value of a generalised model is measured by its utility in individuals. If a neuroscientist claims a lucid theory of the function of some aspect of the brain is explanatory, because it fits a model with half a dozen dimensions derived from fifty people, what do we tell patients if the model does not fit their data and fails in prognostic prediction? That they are noise, or perhaps suffer from a defect of some other function? These would be acceptable answers only if more complex models showed inferior predictive power; then the unmodelled variance would indeed plausibly represent noise. Once a model with better and more generalisable predictive power is found, what was previously considered noise is now explained. Should the resultant complexity of the explanation (not intuitively understandable) be a barrier to its utility? Clinically, generalisable individual predictive performance always trumps lucidity. Shouldn’t the clinician’s primary concern be the scientist's too? (2) To understand where the brain breaks and how to treat it when it breaks we will need high-resolution views of all aspects of the brain - maps and models relating everything - but we keep skirting this necessity. Have advances in data management and analytical informatics resulted in a radical change in the scientific method, through an ability to generate hypotheses from data rather than by the intuition of illuminated individuals? Is this a true statement at the current state of play? Is a “theory of the brain” a realisable project and, if so, how would it help the understanding of brain breakdown in psychiatric and neurodegenerative disorders? Though an initial aspiration of the EU’s Human Brain Project, that focus has dissipated but remains a challenge for a few laboratories. Is reigniting this ambition massively a realisable priority for those interested in understanding the complex organisation of the human brain and how it responds to injury and degeneration? A further major challenge is for scientists to understand the issues faced by clinicians more deeply.
    • Roberto Cabeza: How does the concept of robustness relate to the concepts of reserve, maintenance, and compensation in the domains of aging, dementia, and brain damage?
    • Viktor Jirsa: What is information processing in an oscillatory network and how does it link to human behavior? Said differently, in brain disease, why do certain parameter changes of the brain network sometimes affect human behavior and sometimes leave it untouched?
    • Randy McIntosh: Can we find a way to make a principled distinction between clinical deficits that come from 1) uncovering a hidden capacity of the system, 2) a maladaptive response to injury/disease, or 3) a primary response to loss (e.g., focal lesion)?
    • Susan Fitzpatrick: There has been a tendency to over-constrain the way we study neurological disorders, influenced in part by the molecular revolution. The risk of such approaches is that the identified and over-targeted local perturbations that become the focus of searches for treatment might not matter (and certainly not “fix” anything), because adaptations keep circuits and networks functional - until, of course, they have moved so far from a healthy state that the cliff looms. Targeting networks as the level of intervention using very crude approaches could actually have ameliorative effects (think ketogenic diet for epilepsy) but might lead to a different dilemma – under-constraining our knowledge and impeding progress. How do we get the size of the space for intervening in complex adaptive systems “right”?
    • Jacopo Grilli: Do brains break in the same way? From the theory of large deviations, we know that very rare events are likely to occur consistently in the same way. If aging and neurological disorders are the results of a regime shift, how many regimes are there? Two, or many? How does this depend on the level of coarse-graining? How replicable are the transitions between these regimes?
    • John Krakauer: Network approaches are largely anti-modular and atheoretical. There seems to be a tension between conceiving the brain as computationally/algorithmically modular but implementationally distributed at least when it comes to cognition in cortex. The mapping between these two tends to consist of correlations between network metrics and task/behavioral variables. It is not clear how informative this is. Is it?
    • Jack Gallant: (1) All models of human brain function are fundamentally limited by the sensitivity of brain measurement devices, the number of stimulus and task conditions sampled in a study, and the number of and types of individuals sampled. Given these constraints, how can we optimize experimental design and modeling so as to produce medically relevant and actionable information for individuals? (2) Currently most models of the human brain have only been validated in terms of statistical significance at the group level. Few current models provide individualized predictions, and fewer still test generalization outside the conditions used to fit the model. How well does a model have to predict and generalize to an individual's daily life before it is useful for medicine and for other applications?
    • Caterina Gratton: Most fMRI studies (in the domain of aging as well as healthy young adults) find only relatively small relationships between brain measures and behavior. What theories or methods can we develop to improve this link?
    • Paul Garcia: Temporal judgment can be altered during sleep, anesthesia, meditation, and mind-wandering. What is the relationship between time perception, attention, and consciousness? Since working memory is often affected in delirium and dementia, is a broken brain unable to recognize mind-wandering? As we age, do we become more self-reflective or less? What are the roles of volition, sentience, and agency in experiencing time? Is temporal judgment a uniquely human phenomenon?
    • Dietmar Plenz: Does normal brain function during wakefulness equate with a single dynamical state, e.g. critical dynamics, from which diseases explore mutually orthogonal, low-dimensional trajectories away from this state?
    • Steve Petersen: Will resting state correlations be useful for understanding complex systems effects in neurodegenerative disease?
    • Artemy Kolchinsky: The brain exhibits both redundancy (some functions can be interchangeably carried out by different components) and synergy (some functions require multiple components to operate in a coordinated manner). It is unclear how to assign functions to individual components in the presence of redundancy and synergy. How (and why) does the level of redundancy and synergy in the brain differ in comparison to other biological and technological systems? Does it change as we consider the brain at different scales? Does the level of redundancy and synergy characterize how a system will ultimately fail?   
    • Ehren Newman: Taking seriously the idea that complex systems exist in their own right leads to the idea that functional failure can result from degeneration at the systems level without clear connection to individual constituent processes. How does a hypothesis that exists at this level survive in a scientific community that is driven first and foremost by reductionism and that demands silver-bullet solutions to neurodegenerative disorders? Practically, what empirical data would prove the necessity of a systems-level perspective over a reductionistic one? To ask this question another way, given the multiple levels at which a problem can be studied (e.g., in neuroscience: organismal > systems > cellular > molecular > genetic), is there a general approach to empirically establish the level at which a phenomenon of interest (e.g., Alzheimer’s disease) is most clearly resolved? If functional failure were proven to result from systems-level degeneration without clear links to individual constituent processes, thus making individual molecular targets tangentially relevant, what treatment approaches hold the greatest promise?
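Poldrack's contrast above - graph-theoretic measures that treat nodes as interchangeable versus computational accounts that do not - can be made concrete with a small, hypothetical sketch (pure Python; the ring graph and its labels are invented for illustration, not data from the meeting):

```python
from collections import deque

def shortest_path_lengths(adj, source):
    """BFS distances from `source` in an unweighted graph {node: set(neighbors)}."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def characteristic_path_length(adj):
    """Mean shortest-path length over all ordered node pairs (assumes a connected graph)."""
    total, pairs = 0, 0
    for s in adj:
        for v, d in shortest_path_lengths(adj, s).items():
            if v != s:
                total += d
                pairs += 1
    return total / pairs

# A toy 5-node "circuit": a ring. Node labels are arbitrary.
ring = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
# The same ring with labels 0 and 2 swapped: the graph metric is unchanged,
# illustrating that such measures ignore what each node computes.
relabeled = {2: {1, 4}, 1: {2, 0}, 0: {1, 3}, 3: {0, 4}, 4: {3, 2}}
assert characteristic_path_length(ring) == characteristic_path_length(relabeled)
print(characteristic_path_length(ring))  # 1.5
```

Swapping which node carries which label leaves the path-length statistic untouched, which is exactly why such metrics cannot, by themselves, say what any individual node contributes computationally.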
    Abstracts by Presenters

    Nihat Ay (Max Planck Institute/SFI) - Robustness of Brain Function

    The presentation will review core concepts of a theory of network robustness, initially proposed together with David Krakauer. This theory is concerned with the robustness of function, for instance brain function, with respect to structural perturbations. It suggests design principles and adaptation mechanisms for the maintenance of function. The relevance of the theory in relation to brain architectures will be outlined. In particular, the trade-off between parsimony and robustness in motor control will be discussed, thereby drawing connections to the field of embodied intelligence.

    Nikolaus Kriegeskorte (Columbia Univ.) - Task-performing neural network models enable us to test theories of brain computation with brain and behavioral data

    The goal of computational neuroscience is to find mechanistic explanations of how the nervous system processes information to support cognitive function and behavior. Deep neural networks (DNNs), using feedforward or recurrent architectures, have come to dominate several domains of artificial intelligence (AI). As the term “neural network” suggests, these models are inspired by biological brains. However, their units are rate-coded linear-nonlinear elements, abstracting from the intricacies of biological neurons, including their spatial structure, ion channels, and complex dendritic and axonal signalling dynamics. The abstractions enable DNNs to be efficiently implemented in computers, so as to perform complex feats of intelligence, ranging from perceptual tasks (e.g. visual object and auditory speech recognition) to cognitive tasks (e.g. language translation), and on to motor control tasks (e.g. playing computer games or controlling a robot arm). In addition to their ability to model complex intelligent behaviors, DNNs have been shown to predict neural responses to novel sensory stimuli that cannot be predicted with any other currently available type of model. DNNs can have millions of parameters (connection strengths), which are required to capture the domain knowledge needed for task performance. These parameters are often set by task training using stochastic gradient descent. The computational properties of the units are the result of four directly manipulable elements: (1) functional objective, (2) network architecture, (3) learning algorithm, and (4) input statistics. The advances with neural nets in engineering provide the technological basis for building task-performing models of varying degrees of biological realism that promise substantial insights for computational neuroscience.
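The rate-coded linear-nonlinear abstraction described above can be sketched in a few lines (a minimal illustration with random placeholder weights, not a trained or biologically calibrated model):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # Rate-coded nonlinearity: firing rates cannot go negative.
    return np.maximum(z, 0.0)

# A single linear-nonlinear unit: output = f(w . x + b).
# Spikes, dendrites, and ion channels are all abstracted away.
def unit(x, w, b=0.0):
    return relu(w @ x + b)

# A feedforward DNN is just stacked layers of such units.
def feedforward(x, layers):
    for W, b in layers:
        x = relu(W @ x + b)
    return x

x = rng.normal(size=4)  # a toy input "stimulus"
layers = [(rng.normal(size=(8, 4)), np.zeros(8)),   # 8 hidden units
          (rng.normal(size=(2, 8)), np.zeros(2))]   # 2 output units
y = feedforward(x, layers)
assert unit(x, rng.normal(size=4)) >= 0.0
assert y.shape == (2,) and (y >= 0.0).all()  # outputs are non-negative rates
```

In a real task-performing model the weight matrices would be set by training (e.g. stochastic gradient descent on a functional objective) rather than drawn at random.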

    Post-meeting Summary by Organizer

    Coming soon.

    Additional Post-meeting Summary by Organizer

    Post-meeting Reflection by Presenter

    Steven Petersen (Washington Univ.-St. Louis)

    I found Niko's presentation enlightening and on point. I will certainly go to his primer article. I think the imbalance between abstraction and empirical work was strong; there was an uncomfortable level of abstraction for me. I am not sure my perspective has changed too much because of this. I would have liked to spend more time on the questions Russ raised at the end regarding ways to "get things together".


    David Krakauer (SFI)

    Much of the emphasis was placed on describing the necessary basic principles, models or data, for describing brain functions.

    These included:

    1. Resting state correlations from imaging data
    2. Behavioral psychological experiments
    3. Local field potentials
    4. Deep neural networks
    5. Information theoretic formalisms.

    Much emphasis was placed on either justifying or discovering appropriate levels for prediction and explanation. On this topic:

    1. Is there a preferred level based on fundamental principles?
    2. How to reconcile computational models (with strong time separation) with dynamical systems models (with a spectrum of time scales)
    3. How to present and justify theoretical frameworks with many free parameters - theory for complex systems (in contrast to mere complication as in physics).
    4. How to triangulate among levels of description

    My own question dealt with the general problem: does the fact of the brain as a computational organ imply distinct regularities in the way in which it breaks?

    One approach to this would be to ask about:

    1. Robustness and adaptability
    2. Critical transitions: order disorder regimes
    3. Cascading failure and percolation.

    This triplet provides a possible informal coordinate system in which to situate any system, including the brain. The rather unique scale, connectivity, and general function of the brain might suggest that it sits near a critical point, balanced between robust and adaptive regimes.
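The cascading-failure/percolation member of this triplet can be made concrete with a toy simulation (an illustrative sketch, not a brain model: a random graph, random node drop-out, and the size of the largest connected component as a crude proxy for function):

```python
import random
from collections import deque

def random_graph(n, p, seed=0):
    """Erdos-Renyi G(n, p) as an adjacency dict {node: set(neighbors)}."""
    rnd = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rnd.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def largest_component(adj):
    """Size of the largest connected component, via BFS."""
    seen, best = set(), 0
    for s in adj:
        if s in seen:
            continue
        queue, comp = deque([s]), {s}
        seen.add(s)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in comp:
                    comp.add(v)
                    seen.add(v)
                    queue.append(v)
        best = max(best, len(comp))
    return best

def remove_nodes(adj, nodes):
    """Copy of the graph with `nodes` (and their edges) deleted."""
    nodes = set(nodes)
    return {u: adj[u] - nodes for u in adj if u not in nodes}

g = random_graph(200, 0.03, seed=1)
intact = largest_component(g)
# Progressive random "cell drop-out": the giant component can only shrink,
# and near the percolation threshold connectivity collapses rather than
# degrading gracefully.
order = random.Random(2).sample(range(200), 200)
sizes = [largest_component(remove_nodes(g, order[:k])) for k in (0, 50, 100, 150)]
assert sizes[0] == intact and sizes == sorted(sizes, reverse=True)
```

Because each successive graph is a subgraph of the previous one, the largest-component sizes are guaranteed to be non-increasing; the interesting (percolation) question is where along the drop-out sequence the collapse becomes abrupt.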


    John Krakauer (Johns Hopkins Univ./SFI)

    Very much enjoyed the talks that were more theoretical/philosophical - David Krakauer, Nihat Ay, Niko Kriegeskorte, Artemy Kolchinsky.

    I became aware that I had a vaguer notion of brain breakage and failure than I previously realized. The embodiment perspective is at odds with DNNs. We still have not reconciled network, dynamical-systems, and representational views. Distributed cognition seems a bit empty.

    I think there need to be more meetings like this - very useful for neuroscientists.

    Impact on my own work likely through new collaborations.


    Nikolaus Kriegeskorte (Columbia Univ.)

    Impact of the meeting on my research

    The main thing I took away from the meeting was that robustness to damage is a property of neural networks that is important from a theoretical as well as an applied perspective. I'm motivated now to think about revisiting the old method of lesioning neural network models and perturbing them with noise, so as to better understand the degree to which they are sensitive to local damage and to displacement of their dynamic trajectories.
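A minimal sketch of the kind of lesioning experiment described here (assuming a toy two-layer network with random placeholder weights standing in for a trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: weights are random placeholders, not fitted to data.
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)

def forward(x, lesion_mask=None):
    """Forward pass; `lesion_mask` silences ("lesions") selected hidden units."""
    h = np.maximum(W1 @ x + b1, 0.0)
    if lesion_mask is not None:
        h = h * lesion_mask
    return W2 @ h + b2

def mean_disruption(lesion_frac, seed=1, n_inputs=100):
    """Average output displacement caused by silencing a random fraction
    of hidden units, measured over random test inputs."""
    r = np.random.default_rng(seed)
    mask = (r.random(16) >= lesion_frac).astype(float)
    X = np.random.default_rng(2).normal(size=(n_inputs, 8))
    return float(np.mean([np.linalg.norm(forward(x, mask) - forward(x))
                          for x in X]))

assert mean_disruption(0.0) == 0.0  # no lesion, no disruption
assert mean_disruption(0.5) > 0.0   # silencing units displaces the output
```

The same scaffolding extends naturally to the noise perturbations mentioned above (add noise to `h` instead of masking it) and to tracking how disruption grows with lesion extent.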

    I would also like to explore how robustness to damage and noise relates to robustness to adversarial attacks and to robust generalization performance in novel situations (i.e. changes to stimuli or more generally the behavior of the environment). Before this meeting, I would have conceptualized these different forms of robustness as largely unrelated. I would have thought that the commonality suggested by the fact that they can all be considered forms of robustness is largely misleading.

    Now I think there may be deep links between robustness to damage, internal noise, changes of the body, and changes of the stimuli and environmental behavior.

    Some half-formed hypotheses:

    • Neural noise may help a neural network learn solutions that are robust in a variety of ways.
    • Similarly, changes of environmental behavior (including stimulus statistics) during learning may help a neural network learn solutions that are robust in multiple ways.
    • Predictive completion of partial neural representations (in symmetric, i.e. energy-based, or asymmetric networks) may provide robustness through redundant representation as well as enabling unsupervised learning through self-supervision.

    I'm quite keen to explore these ideas further.

    Important open questions

    • How can states close to criticality serve computation in neural networks?

    • Might robustness to damage result from the same mechanism that enables robust generalization to new domains? (And what is that mechanism?)

    Notes on the meeting

    The discussion, though inspiring, was a little more wide-ranging than is optimal for making concrete progress.

    It might be good in future meetings to focus, and to more clearly and specifically define the topics of sessions and particular presentations.

    Post-meeting Reflection by Non-presenting Attendees

    Jacopo Grilli (ICTP)

    I am stuck with a picture of aging and collapse, motivated by catastrophic shifts in ecology, which simply takes the form of a saddle-node bifurcation. The functional and dysfunctional states are separated by some energy barrier. Aging (somewhat by definition) corresponds to a decreasing energy-barrier height (and therefore an increasing probability of transition). This (at this level tautological) view comes with two interesting consequences:

    - (critical) slowing down: the typical timescale at which fluctuations relax increases over time

    - in a multidimensional system there is an effective one-dimensional trajectory describing collapse
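The slowing-down consequence can be illustrated numerically on the saddle-node normal form dx/dt = r - x^2 (an illustrative sketch with arbitrary parameters, not data): as the control parameter r approaches the bifurcation at r = 0, small perturbations relax more and more slowly.

```python
import math

def recovery_time(r, perturb=0.1, dt=1e-3, tol=0.01):
    """Time for x to relax back to within tol*perturb of the stable fixed
    point x* = sqrt(r) after a perturbation, under dx/dt = r - x**2."""
    x_star = math.sqrt(r)
    x, t = x_star + perturb, 0.0
    while abs(x - x_star) > tol * perturb:
        x += (r - x * x) * dt  # forward Euler step
        t += dt
    return t

# Shrinking r (a lower barrier, closer to the bifurcation at r = 0)
# means slower relaxation: critical slowing down.
times = [recovery_time(r) for r in (1.0, 0.25, 0.05)]
assert times[0] < times[1] < times[2]
# Linearization gives a relaxation rate of 2*sqrt(r), so the recovery
# time grows like 1/sqrt(r) and diverges at the bifurcation.
```

The growing relaxation time of fluctuations is exactly the early-warning signal referred to above.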

    The latter point suggests high reproducibility in collapse trajectories. At what scale this framework is useful is unclear to me. At the coarsest scale, when only two states exist (functional and not functional), the only thing that matters is the transition probability (the when; there is no how or why). At that scale bridges and brains fail in the same way (as lifetime distributions sort of match). I am very confused about the confusion around the scale(s) at which we want to study aging and the breaking of brains.

    I found extremely interesting the discussion of machine learning / neural networks as toy models of representation and/or learning in brains.


    Caterina Gratton (Northwestern Univ.)

    I started my section with the premise that when we discuss the brain "breaking", we often operationalize this in terms of changes in complex behaviors, and that these behaviors may be subserved by large-scale systems of the brain. Much of the work to date has focused on the typical average structure of these systems. I showed some recent work we've done aimed at moving these analyses to the individual level, and discussed some observations we've made based on this.

    (1) I showed that functional network measurements (at rest) can be quite reliable even in single individuals, given enough data

    (2) I showed some data demonstrating that functional network measurements are dominated by stable factors including group commonalities and individual features. Task-state and day-to-day variability is also present, but much smaller in scale.

    (3) I discussed our characterizations of punctate locations of individual differences in functional networks, showing that these locations are present across repeated recordings, relate to altered function, and that individuals cluster based on the forms of variants they exhibit. While these individual differences explain some (gross) behavioral differences, the variance they explain is very small. I left off with a question to the group of why this might be: why do we see relatively stable behavior in the face of some large individual differences in brain organization?

    Discussion centered on how we might think about these effects in the context of distributed organization (or not), to what extent these effects can be overcome by functional alignment that does not assume spatial correspondence, and whether manifolds might be a way of modeling variation in brain function that can lead to a similar functional outcome. We also discussed whether behavior has been measured well enough yet, or if we've been too non-specific in our functional assessments.

    General meeting reflection: There were some interesting discussions of multiple different scales and ways of thinking about the brain. I would have liked to have seen a little more cross-talk integration, and/or thoughts about practical directions on which to move forward. How can we better unite models with data? What are the right types of data to collect and theories to test?


    Dietmar Plenz (NIH)


    Hard to prioritize as every talk expanded my perspective and triggered new associations.

    I enjoyed two talks in particular – David’s introduction to ‘breaking’, which provided a nice meta-overview of brain dysfunction outside the usual context of development and aging. Refreshing and lots of food for thought. The triad of ‘breaking/perturbation, critical transition, and cascading failure’ nicely transitioned into three more concrete directions, which I would have loved to explore more in the workshop:

    ‘breaking = scale of anatomy’

    ‘critical transition = brain state’

    ‘cascading failure = developmental disturbances’?

    Also very much enjoyed Nikolaus’s overview talk and insight into convolutional deep networks.  Very clear, transparent and a great platform from which discussions emerged.

    My favorite open question:

    What is the computational mechanism/dynamics at the network level? Move away from correlation analyses.

    Change in perspective: I would like to move away from the discussion of imaging results and move more towards the nature of computation.

    Impact on my own work: Converging ideas on collective decision making and coherence potentials.


    Susan Fitzpatrick (JSMF)

    I very much like the idea of taking one disorder -- say, Parkinson's -- and seeing if we can 1) describe what is meant by Parkinson's at multiple levels of analysis, 2) accumulate observations (genetic, circuit-level, behavioral, and environmental-social) that contribute to individual variability - again at multiple levels - especially looking at those patients with more rapid or slower disease progression, 3) account for differences between cartoon models of the disease and the actual disease, and 4) develop a dynamic understanding of disease progression and what deviations occur and why.

    I am not sure why David K disparages the use of the term brain state -- as I could see that there are constellations of factors at multiple levels that lead to a "healthy" functioning state adaptable to the context. And this space could be quite large. One could then imagine vulnerabilities or insults that could push the brain in ways that result in a "state" change such that the brain is now dysfunctional in a life context or loses adaptability. And one can further imagine a brain getting itself trapped in a part of brain space where it is hard to see that any perturbation (treatment) or slow recovery process would allow for recovery. It is probably not a coincidence that the numbers of individuals with severe brain disorders are about what one would expect from being in the very tails of a distribution. The individuals who are 2-3 standard deviations out are those for whom treatments could work -- but what moves someone there, and what keeps them there?

    We need a framework that gives us some deep multi-level understanding so that we can better assess whether tweaking X really does impact Z, or whether the tweak in X actually results in an adaptive response in Y that then impacts Z (or maybe stabilizes Z so it does not change, or becomes resistant to the X perturbations).

    In aging I believe we need a better understanding of what happens to adaptive dynamic systems over time ("as they age," or over the life span) so we know whether the changes we see are nothing more than what we should expect, and whether they only seem maladaptive because the environment changes or because what we now expect our systems to do at different ages has changed. How do we keep our brains adaptive and responsive -- continuing to explore rather than exploit? This is a different challenge than diseases.

    Today I was very struck by our inability to work across levels or to even identify what level is meaningful for what we care about -- and what I care about is using neuroscience and complex systems to advance our understanding of and care for individuals with brain disorders - particularly disorders with no identifiable anatomical lesions. 25 years ago I initiated a program supporting neurorehab research on the premise that information learned about brain-function relationships should be useful in delineating what is and is not possible for recovery.

    We have to understand the dual nature of individual differences: 1) the many-to-one mapping -- there may be lots of ways for us to use our brains to live adaptably in the world -- and yet 2) there seem to be a small number of stereotypical ways that brains break.

    Mental health probably offers us the biggest challenges. If we could make a difference there -- even re-framing the way we currently think about these disorders - I think this would be a HUGE contribution.

    Could it be that mesoscale dysfunctions -- depression, schizophrenia -- could benefit from mesoscale interventions? Perhaps all the lower-level changes we come to catalog will then come along for the ride.

    For aging -- in pathology -- neurodegeneration -- might treatments require both a perturbation and a stabilization?

    Roberto Cabeza (Duke University)

    My brief presentation (I didn't give a talk) focused on the concept of “compensation” in the cognitive neuroscience of aging and dementia, and the difficulties of interpreting patterns of change in brain activity or connectivity as compensatory. I emphasized the need to link these changes both to a deficit and to enhanced behavior, and the importance of establishing the latter link at the intra-individual rather than the inter-individual level.

    The meeting was extremely interesting, particularly because it allowed exchanges between researchers with very different perspectives, who don't typically interact in standard scientific meetings. I found particularly exciting the idea of generating a theory of how the brain breaks that is not limited to one particular level of neuroscience analysis (e.g., molecular, cellular, systems) or one particular disorder or pathology.

    The meeting reminded me of a conference I helped organize in Montreal in 2017, in which the goal was to clarify terminology (such as the term "compensation") rather than just to present new data. As in this meeting, we also worked with a small group of researchers, without an audience, focusing on thinking rather than on just presenting and seeing new data.

    Jack Gallant (UC Berkeley)

    I have three brief comments:

    (1) Regarding brain explanations. I agree with others that we should seek to explain behavior, at a fine-grained level, in terms of measurable brain functions. However, it is important to acknowledge that broad analogies involving architectures or cost functions will NOT do this. Those kinds of findings are interesting and possibly necessary, but not sufficient for explaining behavior except in the broadest strokes. What is needed are rich mathematical models that link brain measurements to behavior. When such models are available they can be translated into whatever expressive system is most useful for the purpose.

    (2) Regarding brain measurement. Neuroscience is currently strongly measurement-limited. We have a wide variety of tools, but each tool is limited in spatial resolution, temporal resolution, or coverage, and most tools cannot be used in humans. Given this, the best that we can do is to use our measurements as efficiently as we can given our modeling/prediction goals. All brain measurements are merely different views of the same system, so they will all be correlated with one another to some extent, and in the end they should all converge on the same explanation.

    (3) Regarding brain dynamics. The brain is a spatially distributed nonlinear dynamical system. To understand such a system requires that we recover the whole trajectory of the system through space-time. However, as noted above, we are measurement-limited. We can recover the spatial marginal alone (e.g., in fMRI), or the temporal marginal alone (e.g., in EEG), but we can't recover both simultaneously (except in very reduced systems or in very special local cases). The fact that we cannot recover the space-time trajectory of the system inevitably limits the provisional explanations and models that we can construct; it limits how well one can answer different kinds of questions; and it limits the usefulness of dynamical tools for analyzing and modeling our data today.
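    The point about marginals can be made concrete with a toy illustration (the matrix below is random, not real data, and the fMRI/EEG analogy is only loose): time-reversing a space-time activity matrix gives a genuinely different trajectory whose spatial marginal is identical, so a measurement that recovers only the spatial marginal cannot tell the two apart.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "activity matrix": rows = brain locations, columns = time points.
X = rng.standard_normal((4, 6))

# Time-reversing the recording is a genuinely different space-time trajectory...
Y = X[:, ::-1]

# ...yet the spatial marginal (time-averaged activity per location, roughly
# what a slow but well-localized measure recovers) is unchanged:
assert np.allclose(X.mean(axis=1), Y.mean(axis=1))

# and the temporal marginal (space-averaged activity per time point, roughly
# what a fast but poorly-localized measure recovers) contains the same values,
# merely reordered:
assert np.allclose(np.sort(X.mean(axis=0)), np.sort(Y.mean(axis=0)))
```

Neither marginal alone, then, pins down the full trajectory; that underdetermination is what limits dynamical modeling from any single measurement modality.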

    Russ Poldrack (Stanford University)

    I thought that the discussions around the nature of distributed versus local function (arising from Caterina's talk) were really interesting, and pointed to the way that our field uses these concepts very loosely. This dovetails, I think, with the issue that I raised about the disconnect between computational neuroscience and network neuroscience approaches.

    I can't say that my perspective changed, but the way that I think about how to express some of the ideas was definitely changed. In particular it was really useful to talk through the ways that different ontologies might be useful for different purposes.

    Paul Garcia (Columbia University)

    Brains fail - like bridges, like suspenders, and like relationships. The brain constantly operates at a point of criticality - at least for our optimal cognitive state. Adaptive systems with connectivity often have cascading failures. Can the brain heal itself? Is this robustness? Can that self-organization go wrong, or be improperly applied? It's not a bug - it's a feature. I am reminded of the "Swiss-cheese model" familiar in root cause analysis: multiple failures must be serially associated.

    Why do we care about a diagnosis? What is meant by a "proper" diagnosis? Does "diagnosis" imply stationarity? Must we have a tight mechanism to have a diagnosis? Or perhaps simply a cluster of symptoms? Or a basis on which to guide therapy? I approach my patients based on what is the next thing I am going to do. Sledgehammer solutions may be best. Borrowing from vaccines, can we look at treatments as "learning"? Pain is an example of a top-down approach to disease, much like traditional Chinese medicine.

    Highlight 1

    Richard's aside describing the experimental determination of the boson, and how it related to inconsistencies in Alzheimer's Disease, was simultaneously the most confusing and the most entertaining part of the conference.

    Randy McIntosh (University of Toronto)

    The take-home message for me was that the principles of the brain's design make it both unique and similar to other complex systems. Some of the terminology/features in complex systems can, and should, be applied to the brain, but how these features are realized may be unique to the brain. There are many methods used in empirical neuroscience that can provide a springboard to this, such as graph theory metrics, coherence measures, etc., but these should be conceived within the complex systems framework, in terms of how they support features like robustness, criticality, and cascading failures.

    The challenge will be to establish the common dialogue to build this bridge and the technological foundation to support it (e.g., modeling platforms for deep learning, dynamical systems that can take the empirical data as direct constraints).

    Viktor Jirsa (Aix-Marseille Université)

    Excellent presentations highlighted the descriptive power of convolutional deep networks, also illustrating their partial explanatory power and where they fail. This pointed out some interesting ways forward that have to go beyond their current architecture, in particular taking dynamics into account. The interplay between structural and functional connectivity was highlighted. Limitations of stationary metrics (functional connectivity) were evident, but presentations nicely showed how far they can be pushed successfully in applications. Modeling approaches providing explanations were often too simplistic, not in terms of realism, but in terms of simplifications of concepts (brain states, behavior, as static entities). In the discussions it was evident that there is a need for a formalisation of the internal state dynamics of the brain before perturbations can be applied to it (breaking the brain). A formal framework for provision of and recovery from such perturbations is needed; several good attempts were provided and need to be pursued in the future, supported by data. The need for individual predictive capacity of these frameworks was rightly highlighted.

    Richard Frackowiak (Ecole Polytechnique Federale de Lausanne)

    The presentation highlight was the talk on AI techniques: didactic, informative, and comprehensible - thanks, Nikolaus Kriegeskorte.

    There was tension between model-led and data-led approaches.

    I had a relatively stable view of the methods by which functional and structural imaging map to anatomy and local function in human brains. Those views were not shared, which means a rethink is required. I remain unconvinced about what the resting state can tell us about mapping function and structure.

    Ehren Newman (Indiana Univ.)

    Through many of the presentations, I found that there was a tension in identifying the right level or construct by which to think of brain function. In turn, the question was then: what does that mean about how the brain breaks?

    The hypothesis that emerged for me was that the healthy/young brain is flexible, with a plurality of ways it can generate an otherwise apparently singular behavior. As such, the loss of a single way is absorbable by the system without apparent change to behavior. To some extent, this seems to be driven by the prior that dysfunction or breakage must be defined as a change in behavior. Given this framework, I would argue that 'the way the brain breaks' is that it simply stops being plastic. It ceases to evolve and adapt with the environment, and eventually this creates 'breaks', or apparently inappropriate behavior.

    An implication of this is that understanding how the brain breaks must take into consideration changes in the environment, not simply the brain. That is, an environment with fixed statistical structure would be unlikely to ever reveal that a brain is broken.

    Artemy Kolchinsky (SFI)

    Two scattered thoughts after the meeting:

    • I very much enjoyed Jacopo's brief summary of noise-driven critical transitions and critical slowing down (that is, noise-driven escape across a barrier, from one metastable state to another). Jacopo also described a view of aging as the gradual lowering of the barrier, which results in a gradual increase in the probability of crossing it. To me, it is the only real contender for a universal theory of aging and critical transitions in complex systems that is mathematical and predictive. Unfortunately, it is not at all clear that this theory works well for brain aging or breakdown. In particular, it is not clear that its predictions (about increased variance and/or increased autocorrelation timescales) are what actually occurs in brain aging/decline. Also, it describes aging as an increase in the instantaneous probability of breakdown (total loss of function). This is quite different from how many people see aging, as a gradual decline of current function. It seems important to make this distinction.
    • I very much enjoyed Niko's talk, showing that the representations uncovered by deep nets are correlated with the representations used in the human visual system. However, I am sympathetic to the critique made by Russ that these correlations do not really provide us with a theory of how, e.g., the human visual system functions. Rather, I see the implication of this work (as well as some other deep learning work) for our understanding of cognition to be the following lesson: simple distributed architectures + learning by gradient descent (or some other very simple learning rule) can be shockingly effective. In this sense, these theories are similar to previous "emergentist" theories, such as Darwin's or Adam Smith's, in which a simple iterated algorithm produces amazing outcomes... but while the algorithm is easy to understand, the outcomes can be incredibly intricate and complex. This suggests that, similarly to how evolutionary biologists must study environments and niches to understand adaptations uncovered by natural selection, we might have to study the structure of natural environments to understand the representations and mechanisms uncovered by simple learning rules.
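    The barrier-lowering picture of aging can be sketched numerically. Below is a minimal toy simulation (not from the meeting; all parameter values are illustrative assumptions) of an overdamped particle in a double-well potential U(x) = b(x² − 1)², where lowering the barrier coefficient b plays the role of aging. Both the variance and the lag-1 autocorrelation of fluctuations around the metastable state grow as b shrinks; these are exactly the early-warning signals whose presence in real brain data is in question.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(barrier, n_steps=20_000, dt=0.01, noise=0.3):
    """Euler-Maruyama simulation of dx = -U'(x) dt + noise dW for the
    double-well potential U(x) = barrier * (x**2 - 1)**2."""
    x = np.empty(n_steps)
    x[0] = -1.0  # start in the left ("healthy") metastable well
    for t in range(1, n_steps):
        force = -4.0 * barrier * x[t - 1] * (x[t - 1] ** 2 - 1.0)
        x[t] = x[t - 1] + force * dt + noise * np.sqrt(dt) * rng.standard_normal()
    return x

def early_warning_stats(x):
    """Variance and lag-1 autocorrelation: the classic early-warning signals."""
    return np.var(x), np.corrcoef(x[:-1], x[1:])[0, 1]

var_hi, ac_hi = early_warning_stats(simulate(barrier=2.0))  # "young": high barrier
var_lo, ac_lo = early_warning_stats(simulate(barrier=0.5))  # "aged": low barrier
# Lowering the barrier flattens the well, so fluctuations get larger and
# relax more slowly: variance and lag-1 autocorrelation both increase.
```

The same flattening that produces these signals also raises the Kramers escape rate across the barrier, which is the "increase in the instantaneous probability of breakdown" described above.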
    Reference Materials by Presenting Attendees

    Steven Petersen (Washington Univ.-St. Louis)

    • Gratton et al. 2019 Cereb Cortex.
    Caterina Gratton, Jonathan M. Koller, William Shannon, Deanna J. Greene, Baijayanta Maiti, Abraham Z. Snyder, Steven E. Petersen, Joel S. Perlmutter, Meghan C. Campbell (2019). Emergent Functional Network Effects in Parkinson Disease. Cerebral Cortex.

    David Krakauer (SFI)

    Flack et al. 2012 summarizes our understanding of mechanisms that generate robustness (invariance of function to non-trivial perturbations) in biological and social systems. It provides a classification of these mechanisms in pursuit of more general principles that confer robustness at different time and space scales. 

    Jessica Flack, Peter Hammerstein, David Krakauer (2012). Robustness in biological and social systems. In Evolution and the Mechanisms of Decision Making.

    John Krakauer (Johns Hopkins Univ./SFI)

    Both Newport et al. 2017 and Makin et al. question the idea of pluripotent cortical plasticity early or late in life, i.e., they throw doubt on the idea that areas can take on qualitatively new functions after injury.

    Tamar R. Makin, Jorn Diedrichsen, John W. Krakauer. Reorganization in Adult Primate Sensorimotor Cortex: Does It Really Happen? Cognitive Neuroscience.
    Elissa L. Newport, Barbara Landau, Anna Seydell-Greenwald, Peter E. Turkeltaub, Catherine E. Chambers, Alexander W. Dromerick, Jessica Carpenter, Madison M. Berl, William D. Gaillard (2017). Revisiting Lenneberg’s Hypotheses About Early Developmental Plasticity: Language Organization After Left-Hemisphere Perinatal Stroke. Biolinguistics.

    Reference Materials by Non-presenting Attendees

    Jacopo Grilli (ICTP)

    Podolskiy et al. find, in the context of regulatory networks and expression profiles, a connection between critical dynamics (the gene regulatory network is at the edge of stability) and aging. This link between criticality (often associated with "functionality" and flexibility) and aging is particularly intriguing when translated into the context of neural networks and brain diseases.

    Dmitriy Podolskiy, Ivan Molodtsov, Alexander Zenin, Valeria Kogan, Leonid I. Menshikov, Vadim N. Gladyshev, Robert J. Shmookler Reis, Peter O. Fedichev (2016). Critical dynamics of gene networks is a mechanism behind ageing and Gompertz law. arXiv (q-bio.MN).

    Caterina Gratton (Northwestern Univ.)

    • Warren et al. 2014 discusses a case where network models of the brain may help to provide information about behavioral disruptions after brain damage.
    • Gratton et al. 2018 reviews aspects of the forms of variation available in functional MRI measurements, which may constrain which types of questions different fMRI measures are best suited to addressing.
    David E. Warren, Jonathan D. Power, Joel Bruss, Natalie L. Denburg, Eric J. Waldron, Haoxin Sun, Steven E. Petersen, Daniel Tranel (2014). Network measures predict neuropsychological outcome after brain injury. Proceedings of the National Academy of Sciences of the United States of America.
    Caterina Gratton, Timothy O. Laumann, Ashley N. Nielsen, Deanna J. Greene, Evan M. Gordon, Adrian W. Gilmore, Steven M. Nelson, Rebecca S. Coalson, Abraham Z. Snyder, Bradley L. Schlaggar, Nico U.F. Dosenbach, Steven E. Petersen (2018). Functional Brain Networks Are Dominated by Stable Group and Individual Factors, Not Cognitive or Daily Variation. Neuron.

    Dietmar Plenz (NIH)

    • Meisel et al. 2017 demonstrates that sleep deprivation, associated with rapid cognitive decline, correlates with a deviation from critical dynamics, quantified as a change in long-range temporal correlations, or critical slowing down.
    • Seshadri et al. 2018: using an animal model for schizophrenia, it is shown that a hallmark of the disease – loss of working memory – correlates with deviation from avalanche dynamics. Memory performance and critical dynamics can be acutely rescued with the NMDA receptor agonist D-serine.
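    As a caricature of the avalanche dynamics these papers study, a critical branching process (branching ratio 1) produces heavy-tailed cascade sizes, whereas a subcritical one yields only small cascades; a "deviation from avalanche dynamics" is a departure from that heavy tail. The sketch below is a standard textbook model, not the papers' actual analysis, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(branching_ratio, max_size=10_000):
    """Total number of activations triggered by one seed unit in a
    branching-process caricature of neuronal cascades."""
    active, size = 1, 1
    while active and size < max_size:
        # Each active unit excites a Poisson-distributed number of successors.
        active = rng.poisson(branching_ratio, size=active).sum()
        size += active
    return size

critical = [avalanche_size(1.0) for _ in range(2000)]     # branching ratio 1
subcritical = [avalanche_size(0.5) for _ in range(2000)]  # damped cascades
# At criticality the size distribution is heavy-tailed (roughly ~ s**-1.5),
# so occasional very large cascades occur; subcritical dynamics stay small.
```

In this toy picture, the D-serine rescue reported by Seshadri et al. corresponds to pushing the effective branching ratio back toward 1.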
    Christian Meisel, Kimberlyn Bailey, Peter Achermann, Dietmar Plenz (2017). Decline of long-range temporal correlations in the human brain during sustained wakefulness. Scientific Reports.
    Saurav Seshadri, Andreas Klaus, Daniel E. Winkowski, Patrick O. Kanold, Dietmar Plenz (2018). Altered avalanche dynamics in a developmental NMDAR hypofunction model of cognitive impairment. Translational Psychiatry.
    D. Plenz (2012). Neuronal avalanches and coherence potentials. European Physical Journal: Special Topics.
    Tara C. Thiagarajan, Mikhail A. Lebedev, Miguel A. Nicolelis, Dietmar Plenz (2010). Coherence potentials: Loss-less, all-or-none network events in the cortex. PLoS Biology.

    Susan Fitzpatrick (JSMF)

    Borsboom et al. 2019 challenges the idea that reductionist approaches are appropriate for studying complex human neurological disorders and suggests that network approaches might offer alternative conceptualizations explaining dysfunction.  Do network approaches offer novel ways to both explain and intervene on “broken” brains?

    Denny Borsboom, Angélique O.J. Cramer, Annemarie Kalis (2019). Brain disorders? Not really: Why network structures block reductionism in psychopathology research. Behavioral and Brain Sciences.
    Eric Schulz, Charley M. Wu, Azzurra Ruggeri, Björn Meder (2018). Searching for rewards like a child means less generalization and more directed exploration. bioRxiv.
    Chelsie N. Berg, Neha Sinha, Mark A. Gluck (2019). The Effects of APOE and ABCA7 on Cognitive Function and Alzheimer’s Disease Risk in African Americans: A Focused Mini Review. Frontiers in Human Neuroscience.

    Roberto Cabeza (Duke University)

    Cabeza et al. (2018) is a consensus opinion paper on three popular terms in the cognitive neuroscience of aging and dementia, which are all related to the concept of robustness: reserve, maintenance, and compensation. "Reserve" is defined as the cumulative improvement, due to genetic and/or environmental factors, of neural resources that mitigates the effects of neural decline caused by aging or age-related diseases. "Maintenance" refers to the preservation of neural resources, which entails ongoing repair and replenishment of the brain in response to damage incurred at cellular and molecular levels due to ‘wear and tear.’ Finally, "compensation" refers to the cognition-enhancing recruitment of neural resources in response to relatively high cognitive demand.

    Cabeza, Stanley, and Moscovitch (2018) argue that, compared to large-scale networks, cognitive theories are easier to relate to mini-networks called process-specific alliances (PSAs). A PSA is a small team of brain regions that rapidly assemble to mediate a cognitive process in response to task demands but quickly disassemble when the process is no longer needed.

    Roberto Cabeza, Marilyn Albert, Sylvie Belleville, Fergus I. M. Craik, Audrey Duarte, Cheryl L. Grady, Ulman Lindenberger, Lars Nyberg, Denise C. Park, Patricia A. Reuter-Lorenz, Michael D. Rugg, Jason Steffener, M. Natasha Rajah (2018). Maintenance, reserve and compensation: the cognitive neuroscience of healthy ageing. Nature Reviews Neuroscience.
    Roberto Cabeza, Matthew L. Stanley, Morris Moscovitch (2018). Process-Specific Alliances (PSAs) in Cognitive Neuroscience. Trends in Cognitive Sciences.

    Jack Gallant (UC Berkeley)

    • Poeppel D. 2012 nicely lays out one of the central challenges of using brain data to understand mind and behavior: the elements of psychological models are incommensurate with brain measurements. Failure to recognize this problem has hobbled cognitive neuroscience and its applications to medicine.
    • Huth et al. 2016 (from the Gallant group) shows how high-dimensional functional mapping can be performed in single individuals, and how we can predict individualized functional maps using a statistical model that reflects the variance and covariance of brain anatomy and brain function across individuals.
    David Poeppel (2012). The maps problem and the mapping problem: Two challenges for a cognitive neuroscience of speech and language. Cognitive Neuropsychology.
    Alexander G. Huth, Wendy A. De Heer, Thomas L. Griffiths, Frédéric E. Theunissen, Jack L. Gallant (2016). Natural speech reveals the semantic maps that tile human cerebral cortex. Nature.
    Choosing Prediction Over Explanation in Psychology: Lessons From Machine Learning.

    Russ Poldrack (Stanford University)

    • D. Yamins: This paper lays out the approach of using task-driven modeling to predict neuronal signals, and more generally describes a novel and very different way of thinking about how to characterize brain function using computational models.
    • Avena-Koenigsberger et al (2017): This paper is the first that I know of to discuss seriously the relationship between network communication and brain computation.  
    Andrea Avena-Koenigsberger, Bratislav Misic, Olaf Sporns (2018). Communication dynamics in complex brain networks. Nature Reviews Neuroscience.
    Daniel Yamins (2019). An Optimization-Based Approach to Understanding Sensory Systems. Cognitive Neuroscience.

    Paul Garcia (Columbia University)

    • Wittmann M. 2015 is a good review of modulators of time perception.
    • Morandi et al. 2017 outlines a common clinical scenario (acute brain failure) complicating medical care in aging patients.
    • The Hasenkamp and Barsalou 2012 article puts a systems neuroscience framework over volitional control of focused attention.
    Marc Wittmann (2015). Modulations of the experience of self and time. Consciousness and Cognition.
    Alessandro Morandi, Daniel Davis, Giuseppe Bellelli, Rakesh C. Arora, Gideon A. Caplan, Barbara Kamholz, Ann Kolanowski, Donna Marie Fick, Stefan Kreisel, Alasdair MacLullich, David Meagher, Karen Neufeld, Pratik P. Pandharipande, Sarah Richardson, Arjen J.C. Slooter, John P. Taylor, Christine Thomas, Zoë Tieges, Andrew Teodorczuk, Philippe Voyer, James L. Rudolph (2017). The Diagnosis of Delirium Superimposed on Dementia: An Emerging Challenge. Journal of the American Medical Directors Association.
    Wendy Hasenkamp, Lawrence W. Barsalou (2012). Effects of Meditation Experience on Functional Connectivity of Distributed Brain Networks. Frontiers in Human Neuroscience.

    Randy McIntosh (University of Toronto)

    • McIntosh & Jirsa 2019 present a dynamical systems framework - Structured Flows on Manifolds - that posits that neural processes are flows depicting system interactions that occur on relatively low-dimension manifolds, which constrain possible functional configurations. Such constraints allow us to characterize the actual and potential configurations of brain networks and provide a new perspective wherein behavior deficits from pathological processes could be either the emergence of an existing repertoire or the adaptation of the system to damage.
    • Corbetta et al. 2018 propose that large-scale network abnormalities following a stroke reduce the variety of neural states visited during task processing and at rest, resulting in a limited repertoire of behavioral states. The emphasis here is on the changes in the dimensionality of brain and behavior dynamics and whether explicitly linking the two would provide a better characterization of the deficits and adaptation following stroke.
    Anthony R. McIntosh, Viktor K. Jirsa (2019). The Hidden Repertoire of Brain Dynamics and Dysfunction. bioRxiv.
    Maurizio Corbetta, Joshua S. Siegel, Gordon L. Shulman (2018). On the low dimensionality of behavioral deficits and alterations of brain network connectivity after focal injury. Cortex.

    Viktor Jirsa (Aix-Marseille Université)

    Pillai & Jirsa 2017 argue that critical to our understanding of brain function is an appropriate representation of behavior, which then is to be placed in relation with brain network activity in space and time. Such representation must be based on dynamics (as opposed to derivatives thereof such as singular data features) and establishes the link between network structure and function.

    Ajay S. Pillai, Viktor K. Jirsa (2017). Symmetry Breaking in Space-Time Hierarchies Shapes Brain Dynamics and Behavior. Neuron.

    Richard Frackowiak (Ecole Polytechnique Federale de Lausanne)

    • Translation in cognitive neuroscience remains beyond the horizon, brought no closer by claimed major advances in our understanding of the brain. Nachev et al., propose that adequate individualisation, needed for accurate diagnosis, requires models of far greater dimensionality than has been usual in the field. This necessity arises from the widely distributed causality of neural systems, a consequence of the fundamentally adaptive nature of their developmental and physiological mechanisms.   
    • A proposal that, in the next quarter century, advances in “cartography” will result in progressively more accurate drafts of a data-led, multi-scale model of normal, abnormal and even adapting, whole human brain structure and function. These draft blueprints will result from analysis of large volumes of neuroscientific and clinical data, by an iterative process of reconstruction, modelling and simulation.
    Richard Frackowiak, Henry Markram (2015). The future of human cerebral cartography: A novel approach. Philosophical Transactions of the Royal Society B: Biological Sciences.
    Parashkev Nachev, Geraint Rees, Richard Frackowiak (2019). Lost in translation. F1000Research.

    Ehren Newman (Indiana Univ.)

    Related to the discussion of flexible distributed processing (which came up with Caterina's presentation), there is a great paper showing how the neural code evolves despite stability of the bird's song.

    Liberti, W. A., Markowitz, J. E., Perkins, L. N., Liberti, D. C., Leman, D. P., Guitchounts, G., et al. (2016). Unstable neurons underlie a stable learned behavior. Nature Neuroscience, 19(12), 1665–1671.

    William A. Liberti, Jeffrey E. Markowitz, L. Nathan Perkins, Derek C. Liberti, Daniel P. Leman, Grigori Guitchounts, Tarciso Velho, Darrell N. Kotton, Carlos Lois, Timothy J. Gardner (2016). Unstable neurons underlie a stable learned behavior. Nature Neuroscience.
    Kenneth A. Norman, Ehren L. Newman, Greg Detre (2007). A Neural Network Model of Retrieval-Induced Forgetting. Psychological Review.
    Ehren L. Newman, Kishan Gupta, Jason R. Climer, Caitlin K. Monaghan, Michael E. Hasselmo (2012). Cholinergic modulation of cognitive processing: Insights drawn from computational models. Frontiers in Behavioral Neuroscience.

    General Meeting Reference Material