Thursday, Mar 8, 2018
8:30-9:00am breakfast
9:00-9:10am opening remarks
9:10-10:00am Viktor Jirsa, Translational Neuroscience: from bifurcations to personalized medicine
10:00-10:20am Rashid Williams-García, From Single Neurons to Perception: Examining the Basis for Sensory Deficits in Autism
10:20-10:50am coffee break
10:50-11:40am Stephanie Jones, Integrated Human, Modeling, & Animal Research Reveals Novel Mechanisms & Meaning of Brain Rhythms
11:40am-12:00pm Dan Wilson, A Possible Causal Link Between Depressed Heart Rate Variability and Sudden Cardiac Death
12:00-1:40pm lunch break
1:40-2:30pm Carina Curto, Emergent dynamics from network connectivity: a minimal model
2:30-3:20pm Mark Goldman, Microcircuits for short-term memory storage, motor control, and neural integration
3:20-3:50pm coffee break
3:50-4:10pm James Hengenius, Olfactory Navigation in Complex Odor Environments
4:10-5:00pm Nicolas Brunel, From synaptic plasticity rules to network dynamics
Friday, Mar 9, 2018
8:40-9:10am breakfast
9:10-10:00am Maxim Bazhenov, Continuous learning, sleep and memory consolidation
10:00-10:20am Hannah Bos, Dissecting stability and gain modulation in interneuron circuits
10:20-10:50am coffee break
10:50-11:40am Juan Restrepo, Multilayer network regulation of critical neuronal dynamics
11:40am-12:00pm Chengcheng Huang, Propagation and modulation of information in visual pathway
12:00-1:40pm lunch break
1:40-2:30pm Stefano Fusi, Encoding multiple abstract variables in prefrontal cortex and in the hippocampus
2:30-3:20pm Misha Tsodyks, Information recall from long-term memory: theory vs experiment
3:20-3:50pm coffee break
3:50-4:10pm Yury Sokolov, Ignition of network bursts in the pre-Bötzinger complex: a simplified model
4:10-5:00pm Tatyana Sharpee, Cortical representation of natural stimuli
Saturday, Mar 10, 2018
8:40-9:10am breakfast
9:10-10:00am James MacLaurin, A variational method for analyzing oscillations in stochastic hybrid systems
10:00-10:20am Ryan Phillips, The role of CAN currents in the respiratory Pre-Bötzinger Complex
10:20-10:50am coffee break
10:50-11:40am Andrea Barreiro, Constraining neural networks with spiking statistics
11:40-11:50am closing remarks

Translational Neuroscience: from bifurcations to personalized medicine
Viktor Jirsa, Centre national de la recherche scientifique, France
Over the past decade we have demonstrated that the fusion of subject-specific structural information of the human brain with mathematical dynamic models allows building biologically realistic brain network models, which have a predictive value beyond the explanatory power of each approach independently. The network nodes hold neural population models, which are derived using mean-field techniques from statistical physics expressing ensemble activity via collective variables. This approach has been successfully applied to the modeling of the resting-state dynamics of individual human brains, as well as clinical situations including stroke and epilepsy research. Here I will illustrate the workflow using the example of epilepsy: we reconstruct personalized connectivity matrices of human epileptic patients using diffusion tensor imaging (DTI). Subsets of brain regions generating seizures in patients with refractory partial epilepsy are referred to as the epileptogenic zone (EZ). During a seizure, paroxysmal activity is not restricted to the EZ, but may recruit other brain regions and propagate activity through large brain networks, which comprise brain regions that are not necessarily epileptogenic. The identification of the EZ is crucial for candidates for neurosurgery and requires unambiguous criteria that evaluate the degree of epileptogenicity of brain regions. Stability analyses of propagating waves provide a set of indices quantifying the degree of epileptogenicity and predict conditions under which seizures propagate to nonepileptogenic brain regions, explaining the responses to intracerebral electric stimulation in epileptogenic and nonepileptogenic areas. These results provide guidance in the presurgical evaluation of epileptogenicity based on electrographic signatures in intracerebral electroencephalograms and have been validated in small-scale clinical trials.
The example of epilepsy nicely underscores the predictive value of personalized large-scale brain network models.

From Single Neurons to Perception: Examining the Basis for Sensory Deficits in Autism
Rashid Williams-García, University of Pittsburgh
Sensory deficits, such as hyper- and hyposensitivity, as well as sensation avoidance and seeking behaviors, are frequently associated with autism spectrum disorders (ASDs). Quantitative differences in the properties and responses of individual sensory neurons—and their networks—compared to their neurotypical counterparts potentially drive these deficits. Empirical studies suggest that neural networks from autistic individuals and animal models feature altered neuronal excitability, connectivity, and stimulus response variability. The precise link between these alterations and the behavioral symptoms, though as yet unknown, is key to understanding ASDs. In this talk, I present some recent progress in examining this link using a computational approach. Individual neurons and their local interactions are simulated to examine the relationship between spike train correlations and variability, neuronal excitability, synaptic strength, and spike frequency adaptation (SFA). Our findings indicate that impaired SFA and weakened synaptic strengths increase spike train variability and neuronal excitability, a combination which might have consequences for sensory processing.

Integrated Human, Modeling, & Animal Research Reveals Novel Mechanisms & Meaning of Brain Rhythms
Stephanie Jones, Brown University
Low-frequency brain oscillations, including those in the beta frequency band (15-29 Hz), are among the most dominant signals recorded non-invasively in humans with electro- and magneto-encephalography. Beta rhythms are predictive of healthy and abnormal behaviors, including perception, attention and motor action. Yet how and why beta impacts function is debated. We've combined human magnetoencephalography (MEG), computational neural modeling and invasive animal recordings to investigate the role of beta in sensory perception. Our data show beta emerges as transient high-power 'events'. We find that functionally relevant differences in averaged beta power in primary somatosensory neocortex reflect a difference in the number of high-power prestimulus beta events per trial, i.e. event rate, as opposed to changes in event amplitude or duration. Further, beta events occurring close to the stimulus were more likely to impair perception. These results are consistent across detection and attention tasks in MEG, and in local field potentials from mice performing a detection task. They imply that an increased propensity of beta events predicts the failure to effectively transmit sensory information. Our model results suggest circuit mechanisms by which this failure occurs, providing unprecedented insight into the causal role of transient beta activity in sensory perception.

A Possible Causal Link Between Depressed Heart Rate Variability and Sudden Cardiac Death
Dan Wilson, University of Pittsburgh
Depressed heart rate variability is a well-established risk factor for sudden cardiac death in survivors of acute myocardial infarction and for those with congestive heart failure. However, it remains unknown whether this is a causal relationship or whether heart rate variability simply correlates with the severity of cardiac damage. Here, we suggest a causal link between depressed heart rate variability and the propensity for the development of more deadly arrhythmias. In numerical simulations, we observe an inverse relationship between the variance of stochastic pacing and the occurrence of spatially discordant alternans, an arrhythmia which is widely believed to facilitate the development of cardiac fibrillation. By analyzing the effect of conduction velocity restitution, cellular dynamics, electrotonic coupling, and stochastic pacing on the nodal dynamics of discordant alternans, we provide intuition for this observed behavior and propose strategies to inhibit discordant alternans.
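
The talk concerns spatially discordant alternans; as background, the onset of (concordant) alternans under periodic pacing can be sketched with a standard action-potential-duration (APD) restitution map. This is a single-cell caricature, not the speaker's spatially extended model, and the exponential restitution function and all parameter values below are illustrative assumptions:

```python
import math

def apd_map(di, a_max=300.0, b=200.0, tau=60.0):
    """Exponential APD restitution: the next action potential duration
    as a function of the preceding diastolic interval (times in ms)."""
    return a_max - b * math.exp(-di / tau)

def pace(bcl, n_beats=200, a0=200.0):
    """Iterate APD_{n+1} = f(DI_n) with DI_n = BCL - APD_n, i.e. pacing
    at a fixed basic cycle length (BCL); DI is floored at 1 ms."""
    apds = [a0]
    for _ in range(n_beats):
        di = max(bcl - apds[-1], 1.0)
        apds.append(apd_map(di))
    return apds

def alternans_amplitude(apds, tail=50):
    """Mean beat-to-beat APD difference over the last `tail` beats."""
    diffs = [abs(a2 - a1) for a1, a2 in zip(apds[-tail - 1:-1], apds[-tail:])]
    return sum(diffs) / len(diffs)
```

At slow pacing (e.g. BCL = 400 ms) the map settles to a period-1 rhythm; fast enough pacing (e.g. BCL = 308 ms here), where the restitution slope at the fixed point exceeds one, yields sustained beat-to-beat APD alternation.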

Emergent dynamics from network connectivity: a minimal model
Carina Curto, Pennsylvania State University
Many networks in the brain display internally-generated patterns of activity -- that is, they exhibit emergent dynamics that are shaped by intrinsic properties of the network rather than inherited from an external input.  While a common feature of these networks is an abundance of inhibition, the role of network connectivity in pattern generation remains unclear. In this talk I will introduce Combinatorial Threshold-Linear Networks (CTLNs), which are simple "toy models" of recurrent networks consisting of threshold-linear neurons with effectively inhibitory interactions.  The dynamics of CTLNs are controlled solely by the structure of an underlying directed graph.  By varying the graph, we observe a rich variety of emergent dynamics including: multistability, neuronal sequences, and complex rhythms.  These patterns are reminiscent of population activity in cortex, hippocampus, and central pattern generators for locomotion.  I will present some theorems about CTLNs, and explain how they allow us to predict features of the dynamics by examining properties of the underlying graph.
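
A minimal CTLN sketch, following the standard construction (W_ij = -1 + eps if there is an edge j -> i, W_ij = -1 - delta otherwise, zero diagonal); the 3-cycle graph and parameter values are the textbook example of sequence generation, not tied to this talk's specific theorems:

```python
import numpy as np

def ctln_weights(adj, eps=0.25, delta=0.5):
    """CTLN weight matrix from a binary adjacency matrix (adj[i, j] = 1
    iff there is an edge j -> i): W_ij = -1 + eps for an edge,
    W_ij = -1 - delta for a non-edge, and W_ii = 0."""
    W = np.where(adj == 1, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    return W

def simulate_ctln(W, theta=1.0, x0=None, T=100.0, dt=0.01):
    """Forward-Euler integration of dx/dt = -x + [W x + theta]_+ ."""
    x = np.zeros(W.shape[0]) if x0 is None else np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(int(T / dt)):
        x = x + dt * (-x + np.maximum(W @ x + theta, 0.0))
        traj.append(x.copy())
    return np.array(traj)

# 3-cycle graph 0 -> 1 -> 2 -> 0: the simplest graph whose CTLN
# produces a periodic sequence of neuronal activations
adj = np.array([[0, 0, 1],
                [1, 0, 0],
                [0, 1, 0]])
traj = simulate_ctln(ctln_weights(adj), x0=[0.5, 0.1, 0.0])
```

With these parameters the symmetric fixed point is unstable and the network falls onto a limit cycle in which the three neurons peak in the order prescribed by the graph.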

Microcircuits for short-term memory storage, motor control, and neural integration
Mark Goldman, University of California, Davis
In this talk, I will highlight two theory-experiment collaborations to understand core principles of neural circuit dynamics and plasticity. In the first half of the talk, I will discuss our efforts to dissect the cellular and circuit mechanisms underlying the accumulation and storage of information in short-term memory. Working in a model system for the study of persistent neural activity and neural integration, the oculomotor integrator brain region that is responsible for stably maintaining the position of the eyes, I will describe our collaborative efforts to extract principles of circuit organization from a biophysically realistic model of this system. In the second half of the talk, I will discuss an experimental collaboration to determine the sites of plasticity underlying a simple eye movement behavior, the vestibulo-ocular reflex. This reflex causes the eyes to counter-rotate during head turns, enabling the eyes to maintain their gaze on a target despite head motion. Plasticity mechanisms mediated by the cerebellum can adjust the gain (ratio of eye movement to head movement) of this reflex. However, despite the seeming simplicity of this reflexive behavior, we show that inferring even the sites and signs (LTP vs. LTD) of plasticity can be highly challenging due to the presence of feedback loops in the neural circuitry and through the environment. In both halves of the talk, challenges and approaches for disambiguating different possible models underlying neural and behavioral data will be highlighted.

Olfactory Navigation in Complex Odor Environments
James Hengenius, University of Pittsburgh
Olfaction is an evolutionarily ancient sense, and olfactory cues are used across Animalia to guide navigation. However, on the spatial scales of animal navigation, turbulent air flow would seem to prevent animals from using simple gradient estimation techniques. Comparing numerical simulation of simple navigation algorithms with animal behavioral data, I will show that biologically plausible spatiotemporal concentration comparisons are sufficient to locate odor sources in simulated stochastic odor environments. I will further demonstrate that these algorithms are robust enough to locate odor sources in experimentally measured turbulent flow fields. I will show that the search algorithms exhibit “stochastic resonance,” performing optimally when moderate levels of noise are added to navigation decisions. Finally, I will discuss some applications of these algorithms and future work modeling odor detection in the olfactory bulb.

From synaptic plasticity rules to network dynamics
Nicolas Brunel, Duke University
Models of synaptic plasticity capture experimental data on plasticity with increasing accuracy, but it is still unclear how realistic synaptic plasticity rules shape network dynamics and information storage in such networks. In this talk, I will first review two approaches for inferring learning rules from data in cortical synapses. The first consists in fitting a biophysical model based on calcium influx in the post-synaptic spine to a set of in vitro experiments. The second consists in inferring the learning rule from in vivo data, using experiments that compare the statistics of responses of neurons to sets of novel and familiar stimuli. I will then show how the inferred learning rules shape network dynamics, and in particular how they can lead to attractor dynamics.

Continuous learning, sleep and memory consolidation
Maxim Bazhenov, University of California, San Diego
Memory depends on three general processes: encoding, consolidation and retrieval. Although the vast majority of research has been devoted to understanding encoding and retrieval, novel approaches have recently been developed in both human and animal research to probe mechanisms of consolidation. A story is emerging in which important functions of consolidation occur during sleep, with specific features of sleep appearing critical for successful retrieval across a range of memory domains, tasks, and species. In my talk I will first discuss the neuronal and network-level mechanisms behind major sleep EEG rhythms and experimental data on memory consolidation. I will then present our new results, obtained in computer simulations, that reveal the neural substrates of memory consolidation involving replay of memory-specific sequences of spikes. Our study predicts that spontaneous reactivation of the learned sequences during the sleep spindles and slow waves of NREM sleep represents a key mechanism of memory consolidation, and that the basic structure of sleep stages provides an optimal environment for the consolidation of competing memories.

Dissecting stability and gain modulation in interneuron circuits
Hannah Bos, University of Pittsburgh
Inhibition is involved in two opposing mechanisms: controlling the responsiveness (gain) of excitatory (E) neurons and maintaining network stability. Interneurons subdivide into vasoactive intestinal peptide (VIP), somatostatin (SOM) and parvalbumin (PV) expressing neuron classes. However, it is not clear how gain modulation and stability mechanisms are simultaneously performed in one circuit. Inhibition can suppress the activity of E neurons by direct projections or increase their activity by inhibiting an intermediate interneuron (disinhibition). For the latter, two main pathways have been suggested: SOM neurons inhibit PV neurons which disinhibit E neurons (Xu et al., 2013), and VIP neurons inhibit SOM neurons which disinhibit E neurons (Fu et al., 2014). In this study, we ask how these disinhibitory pathways perform in a recurrently connected circuit with respect to increasing gain while keeping noise correlations low (a reflection of network stability). We investigate potential roles of the interneurons by applying theoretical tools developed for balanced and finite-size network dynamics, as well as simulations of spiking neurons. We find that the SOM→PV→E pathway initially shows a gain maximum for moderate SOM rates; further modulation induces either pathologically large fluctuations or a transition to a silent state. In contrast, disinhibition via VIP and SOM neurons exhibits smooth gain modulation accompanied by an initial rise in noise correlations, which eventually saturates to a lower value. Stability of this pathway is improved by SOM neurons projecting to both E and PV neurons. SOM neurons tend to suppress rather than stabilize the activity of their targets, and recurrent connections between E and SOM neurons suppress noise correlations but reduce gain. Thus, we predict that gain modulation and stability are most effective when carried out by separate interneuron subtypes.

Multilayer network regulation of critical neuronal dynamics
Juan Restrepo, University of Colorado Boulder
It has been hypothesized that the cortex operates in a regime where the overall strength of excitatory and inhibitory synapses is balanced, and that this balance has functional advantages for information processing. In this talk, I will first describe how experiments testing this hypothesis have been guided and reproduced by relatively simple models of stochastic binary neurons. Then I will discuss how the balance between excitatory and inhibitory signals can be maintained in the presence of destabilizing factors like synaptic plasticity. In particular, we propose a mechanism that regulates the activity of the neural network by the transport of metabolic resources through a secondary network of glial cells. For a large range of model parameters, the interaction between the two networks spontaneously results in balanced excitation and inhibition. In this regime the neural network produces power-law distributed avalanches of activity, as observed in experiments. Furthermore, the glial network protects the system against the potentially destabilizing effect of heterogeneities in parameters. A simplified model can be analyzed in terms of a 3-dimensional map, giving insight into the robustness of these results to the choice of model parameters.

Propagation and modulation of information in visual pathway
Chengcheng Huang, University of Pittsburgh
How neuronal variability impacts neural codes is a central question in systems neuroscience, often with complex and model dependent answers. Most population models are parametric, with tacitly assumed structure of neuronal tuning and population variability. While these models provide key insights, they cannot inform how the physiology and circuit wiring of cortical networks impact information flow. In this work, we study information propagation in spatially ordered neuronal networks. We focus on the effects of feedforward and recurrent projection widths relative to columnar width, as well as attentional modulation. We show that narrower feedforward projection width increases the saturation rate of information. In contrast, the recurrent projection width with spatially balanced excitation and inhibition has small effects on information. Further, we show that attention improves information flow by suppressing the internal dynamics of the recurrent network.

Encoding multiple abstract variables in prefrontal cortex and in the hippocampus
Stefano Fusi, Columbia University

Information recall from long-term memory: theory vs experiment
Misha Tsodyks, Weizmann Institute of Science, Israel
Compared to artificial memory systems, human memory is much less efficient: information recall quite often fails. The reasons for this are not clear. I will present a phenomenological theory of recall that is based on a few basic assumptions about how information is represented in memory and how it is retrieved. Surprisingly, the theory accounts for some classical results in free recall experiments, where subjects are required to remember randomly assembled lists of words. The theory can also be extended to hierarchical recall, where information is grouped into several lists. In this case the model predicts better recall, which is also consistent with experimental results.

Ignition of network bursts in the pre-Bötzinger complex: a simplified model
Yury Sokolov, University of Pittsburgh
Network (population) bursts are a signature neuronal activity in the respiratory rhythm-generating brainstem region, the pre-Bötzinger complex (pre-BötC). It has been observed that the pre-BötC undergoes several dynamic transitions to generate a network burst. Over the last decades, different models have been proposed to describe these transitions. While these models capture biophysical features of the activity well, their complexity makes it hard to gain full insight into the influence of the underlying graph on the transitions. To overcome this, we propose a simplified discrete model based on bootstrap percolation. In this talk, I will describe the model and address the question of how the different graph models used in modeling the pre-BötC may affect network bursts. In addition, I will suggest possible reasons why the network may fail to generate a population burst after deletion of a fixed fraction of arbitrary nodes, which is consistent with laser ablation of Dbx1 neurons in experiments.
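
Bootstrap percolation itself is easy to state: starting from a seed set, a node activates once at least k of its neighbors are active, and activation is permanent. A minimal sketch on generic graphs (not the specific pre-BötC network models of the talk):

```python
def bootstrap_percolation(neighbors, seeds, k):
    """k-threshold bootstrap percolation on a graph given as a dict
    node -> list of neighbors: starting from the seed set, a node
    becomes (and stays) active once at least k of its neighbors are
    active. Returns the final active set."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in neighbors:
            if v not in active and sum(u in active for u in neighbors[v]) >= k:
                active.add(v)
                changed = True
    return active

# Full ignition: in the complete graph K5, two seeds recruit everyone (k = 2).
K5 = {i: [j for j in range(5) if j != i] for i in range(5)}
# Failure to ignite: on a 6-cycle, no inactive node ever sees 2 active neighbors.
C6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
```

The two toy graphs illustrate the dichotomy relevant here: whether the seed activity "ignites" the whole network depends on the graph, which is exactly the kind of question the simplified model makes tractable.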

Cortical representation of natural stimuli
Tatyana Sharpee, Salk Institute
In this talk I will describe our recent findings of several organizing principles for how natural visual and auditory stimuli are represented across stages of cortical processing. For visual processing, I will describe how signals in the secondary cortical visual area build on the outputs provided by the first cortical visual area, and how they relate to representations found in subsequent visual areas, such as area V4. I will also discuss differences in how the auditory and visual systems achieve invariance. We find that auditory neurons gain invariance primarily along suppressive dimensions, whereas visual neurons gain invariance by integrating positive responses.

A variational method for analyzing oscillations in stochastic hybrid systems
James MacLaurin, University of Utah
Many systems in biology can be modeled by ordinary differential equations whose right-hand sides switch between different states according to a Markov jump process; such a system is known as a stochastic hybrid system or piecewise deterministic Markov process (PDMP). In the fast-switching limit, the dynamics converge to a deterministic ODE. In this talk we develop a phase reduction method for stochastic hybrid systems that support a stable limit cycle in the deterministic limit. A classic example is the Morris-Lecar model of a neuron, where the switching Markov process is the number of open ion channels and the continuous process is the membrane voltage. We outline a variational principle for the phase reduction, yielding an exact analytic expression for the resulting phase dynamics. We demonstrate that this decomposition is accurate over timescales that are exponential in the switching rate epsilon^{-1}: that is, we show that for a constant C, the probability that the time to leave an O(a) neighborhood of the limit cycle is less than T scales as T exp(-Ca/epsilon).
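
A minimal PDMP example, far simpler than Morris-Lecar: a single two-state switch (a caricature of one ion channel) driving a linearly relaxing continuous variable. In the fast-switching regime the continuous variable concentrates near its deterministic average, as in the limit the abstract describes; all rates and parameters below are illustrative:

```python
import math
import random

def pdmp_time_average(alpha=50.0, beta=50.0, tau=1.0, v_max=1.0,
                      T=500.0, seed=1):
    """Minimal PDMP: a two-state switch s in {0, 1} jumps 0 -> 1 at rate
    alpha and 1 -> 0 at rate beta; between jumps the continuous variable
    relaxes via dv/dt = (s * v_max - v) / tau, integrated exactly.
    Returns the time average of v, which for fast switching concentrates
    near v_max * alpha / (alpha + beta)."""
    rng = random.Random(seed)
    t, v, s, integral = 0.0, 0.0, 0, 0.0
    while t < T:
        rate = alpha if s == 0 else beta
        dt = min(rng.expovariate(rate), T - t)   # time to next jump
        target = s * v_max
        # exact integral and endpoint of the linear relaxation over [t, t + dt]
        integral += target * dt + (v - target) * tau * (1.0 - math.exp(-dt / tau))
        v = target + (v - target) * math.exp(-dt / tau)
        t += dt
        s = 1 - s                                # the Markov switch jumps
    return integral / T
```

Because the ODE between jumps is linear, each inter-jump segment is integrated in closed form rather than by Euler stepping, so the only randomness is in the exponential jump times.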

The role of CAN currents in the respiratory Pre-Bötzinger Complex
Ryan Phillips, University of Pittsburgh
The pre-Bötzinger complex (pre-BötC) is an essential rhythmogenic brainstem nucleus located in the ventrolateral medulla. Rhythmic output from the pre-BötC is relayed through premotor and motor neurons to the diaphragm and intercostal muscles to drive the active inspiratory phase of breathing. The specific biophysical mechanisms responsible for generating rhythmic bursting and network oscillations are not well understood and remain a highly debated topic within the field. Through extensive experimental and theoretical work, two plausible rhythmogenic mechanisms have emerged: one based on a slowly inactivating persistent sodium current (INaP), and the other on a calcium-activated non-selective cation current (ICAN) that is coupled to intracellular calcium transients. A recent study has shown that blocking ICAN affects pattern formation, but not rhythm generation (Koizumi et al., 2018). In order to understand these results, we systematically investigated the role of ICAN and two general sources of intracellular calcium transients in a biophysically based model of the pre-BötC. In our model, we find that blockade of ICAN best matches experimental data when intracellular calcium transients are triggered through synaptic mechanisms. Furthermore, activation of ICAN by synaptically triggered calcium transients functions as a mechanism to amplify the inspiratory drive potential and recruit follower neurons. The results of these simulations suggest that rhythm generation in the pre-BötC arises from a group of INaP-dependent pacemaker neurons, which form a rhythmogenic kernel. Output from these neurons triggers post-synaptic calcium transients, ICAN activation, and subsequent membrane depolarization, which drives bursting in follower neurons.

Constraining neural networks with spiking statistics
Andrea Barreiro, Southern Methodist University
As experimental tools in neuroscience have advanced, measuring whole-brain dynamics with single-neuron resolution is becoming closer to reality. However, a task that remains technically elusive is to measure the interactions within and across brain regions that govern such system-wide dynamics. We propose a method to derive constraints on hard-to-measure neural network attributes --- such as inter-region synaptic strengths --- using easy-to-measure spiking statistics.
The analysis that we perform here has two components: first, we propose a closure formula for multi-population firing rate models (mathematically, a coupled system of stochastic differential equations) which allows fast evaluation of equilibrium statistics. Second, fast evaluation allows us to rapidly survey a high-dimensional parameter space describing admissible networks, to find which part of parameter space is consistent with the experimental data.
As a test case, we studied interactions in the olfactory system. We used two micro-electrode arrays to simultaneously record from the olfactory bulb (OB) and anterior piriform cortex (PC) of anesthetized rats that were exposed to several odors. We were able to make several predictions about the network, notably that inhibition within the afferent region (OB) and inhibition within PC were constrained to a narrow slice of possible values. While the analysis was performed on a simplified network model, the predictions were validated in a more realistic spiking model of the OB-PC pathway.
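
In the linear (Ornstein-Uhlenbeck) special case, fast evaluation of equilibrium statistics reduces to solving a Lyapunov equation instead of simulating the stochastic differential equations, which is what makes large parameter surveys cheap. This is an illustration of that general idea, not the specific closure proposed in the talk, and the example matrices are invented for illustration:

```python
import numpy as np

def equilibrium_covariance(A, B):
    """Stationary covariance S of the linear SDE dx = A x dt + B dW
    (A must be stable), from the Lyapunov equation A S + S A^T + B B^T = 0,
    solved by vectorization: (A kron I + I kron A) vec(S) = -vec(B B^T)."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(A, I) + np.kron(I, A)
    S = np.linalg.solve(M, -(B @ B.T).reshape(-1))
    return S.reshape(n, n)

# Two-population example: an excitatory-inhibitory pair driven by white noise.
A = np.array([[-1.0, -2.0],
              [ 1.0, -1.0]])
S = equilibrium_covariance(A, np.eye(2))
```

For a network with a handful of populations this is a single small linear solve per parameter setting, so a high-dimensional parameter space can be swept and checked against measured covariances far faster than by Monte Carlo simulation.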