The class focuses on quantitative studies of problems in systems neuroscience. Topics will include lateral inhibition, mechanisms of motion tuning, local learning rules and their consequences for network structure and dynamics, oscillatory dynamics and synchronization across brain circuits, and formation and computational properties of topographic neural maps. The course will combine discussions and presentations, in which students and faculty will examine and present papers on systems neuroscience, usually combining experimental and theoretical/modeling components. Example student presentations are included here: [1], [2].
Delta: Wenying Zhu, Noah Guzman, Laura Luebbert, Masami Hazu, Andrew Perez, Sam Schulte
Epsilon: Varun Wadia, George Barnum, Yue Xu, Yujing Yang, Charles Sanfiorenzo, Avinash Nanjundiah, George Lopez
Zeta: Adi Nair, Kevin Mei, Pantelis Vafeidis, Yameng Zhang, Ana Moiseyenko, Whitney Griggs, Zikun Zhu, Tess Marlin, Richard Li
Class Date: April 8th, 2020
From overshoot to voltage clamp
Authors: Andrew F. Huxley
Journal: Trends in Neurosciences
Year: 2002
Abstract: In 1939, A.L. Hodgkin and I found that the nerve action potential shows an ‘overshoot’ – that is, the interior of the fibre becomes electrically positive during an action potential. In 1948, we did our first experiments with a voltage clamp to investigate the current–voltage relations of the nerve membrane. Between those dates, we spent much time speculating about the mechanism by which ions cross the membrane and how the action potential is generated. This article summarizes these speculations, none of which has been previously published.
The Squid and its Giant Nerve Fiber
Excerpts from the film The Squid and its Giant Nerve Fiber, rehosted from Biological Sciences 300/301 at Smith College.
Electric impedance of the squid giant axon during activity
Authors: Kenneth S. Cole and Howard J. Curtis
Journal: Journal of General Physiology
Year: 1939
Abstract: Alternating current impedance measurements have been made over a wide frequency range on the giant axon from the stellar nerve of the squid, Loligo pealii, during the passage of a nerve impulse. The transverse impedance was measured between narrow electrodes on either side of the axon with a Wheatstone bridge having an amplifier and cathode ray oscillograph for detector. When the bridge was balanced, the resting axon gave a narrow line on the oscillograph screen as a sweep circuit moved the spot across. As an impulse passed between impedance electrodes after the axon had been stimulated at one end, the oscillograph line first broadened into a band, indicating a bridge unbalance, and then narrowed down to balance during recovery. From measurements made during the passage of the impulse and appropriate analysis, it was found that the membrane phase angle was unchanged, the membrane capacity decreased about two per cent, while the membrane resistance fell from a resting value of 1000 ohm cm2 to an average of twenty-five ohm cm2.
The onset of the resistance change occurs somewhat after the start of the monophasic action potential, but coincides quite closely with the point of inflection on the rising phase, where the membrane current reverses in direction, corresponding to a decrease in the membrane electromotive force. This E.M.F. and the conductance are closely associated properties of the membrane, and their sudden changes constitute, or are due to, the activity which is responsible for the all-or-none law and the initiation and propagation of the nerve impulse. These results correspond to those previously found for Nitella and lead us to expect similar phenomena in other nerve fibers.
Measurement of current-voltage relations in the membrane of the giant axon of Loligo
Authors: Alan L. Hodgkin, Andrew F. Huxley, and Bernard Katz
Journal: Journal of Physiology
Year: 1952
Abstract: The importance of ionic movements in excitable tissues has been emphasized by a number of recent experiments. On the one hand, there is the finding that the nervous impulse is associated with an inflow of sodium and an outflow of potassium (e.g. Rothenberg, 1950; Keynes & Lewis, 1951). On the other, there are experiments which show that the rate of rise and amplitude of the action potential are determined by the concentration of sodium in the external medium (e.g. Hodgkin & Katz, 1949a; Huxley & Stämpfli, 1951). Both groups of experiments are consistent with the theory that nervous conduction depends on a specific increase in permeability which allows sodium ions to move from the more concentrated solution outside a nerve fibre to the more dilute solution inside it. This movement of charge makes the inside of the fibre positive and provides a satisfactory explanation for the rising phase of the spike. Repolarization during the falling phase probably depends on an outflow of potassium ions and may be accelerated by a process which increases the potassium permeability after the action potential has reached its crest (Hodgkin, Huxley & Katz, 1949).
Currents carried by sodium and potassium ions through the membrane of the giant axon of Loligo
Authors: Alan L. Hodgkin and Andrew F. Huxley
Journal: Journal of Physiology
Year: 1952
Abstract: In the preceding paper (Hodgkin, Huxley & Katz, 1952) we gave a general description of the time course of the current which flows through the membrane of the squid giant axon when the potential difference across the membrane is suddenly changed from its resting value, and held at the new level by a feed-back circuit ('voltage clamp' procedure). This article is chiefly concerned with the identity of the ions which carry the various phases of the membrane current.
The components of membrane conductance in the giant axon of Loligo
Authors: Alan L. Hodgkin and Andrew F. Huxley
Journal: Journal of Physiology
Year: 1952
Abstract: The flow of current associated with depolarizations of the giant axon of Loligo has been described in two previous papers (Hodgkin, Huxley & Katz, 1952; Hodgkin & Huxley, 1952). These experiments were concerned with the effect of sudden displacements of the membrane potential from its resting level (V = 0) to a new level (V = V1). This paper describes the converse situation in which the membrane potential is suddenly restored from V = V1 to V = 0. It also deals with certain aspects of the more general case in which V is changed suddenly from V1 to a new value V2. The experiments may be conveniently divided into those in which the period of depolarization is brief compared to the time scale of the nerve and those in which it is relatively long. The first group is largely concerned with movements of sodium ions and the second with movements of potassium ions.
The dual effect of membrane potential on sodium conductance in the giant axon of Loligo
Authors: Alan L. Hodgkin and Andrew F. Huxley
Journal: Journal of Physiology
Year: 1952
Abstract: This paper contains a further account of the electrical properties of the giant axon of Loligo. It deals with the 'inactivation' process which gradually reduces sodium permeability after it has undergone the initial rise associated with depolarization. Experiments described previously (Hodgkin & Huxley, 1952a, b) show that the sodium conductance always declines from its initial maximum, but they leave a number of important points unresolved. Thus they give no information about the rate at which repolarization restores the ability of the membrane to respond with its characteristic increase of sodium conductance. Nor do they provide much quantitative evidence about the influence of membrane potential on the process responsible for inactivation. These are the main problems with which this paper is concerned. The experimental method needs no special description, since it was essentially the same as that used previously (Hodgkin, Huxley & Katz, 1952; Hodgkin & Huxley, 1952b).
A quantitative description of membrane current and its application to conduction and excitation in nerve
Authors: Alan L. Hodgkin and Andrew F. Huxley
Journal: Journal of Physiology
Year: 1952
Abstract: This article concludes a series of papers concerned with the flow of electric current through the surface membrane of a giant nerve fibre (Hodgkin, Huxley & Katz, 1952; Hodgkin & Huxley, 1952 a-c). Its general object is to discuss the results of the preceding papers (Part I), to put them into mathematical form (Part II) and to show that they will account for conduction and excitation in quantitative terms (Part III).
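Part II of this paper casts the voltage-clamp measurements into the now-standard system of coupled differential equations for the membrane potential and the gating variables m, h, and n. As a rough companion, the sketch below integrates those equations in Python with forward Euler, using commonly quoted textbook parameters and the modern sign convention (resting potential near -65 mV) rather than the paper's original variables; treat it as an illustration of the formalism, not a reproduction of the paper's calculations.

```python
import numpy as np

# Minimal Hodgkin-Huxley simulation (modern sign convention, rest near -65 mV).
# Parameter values are the commonly quoted squid-axon constants, not numbers
# transcribed from the paper.
C_m = 1.0                              # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387  # reversal potentials, mV

# Voltage-dependent opening/closing rates for the gating variables (1/ms)
alpha_m = lambda V: 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
beta_m  = lambda V: 4.0 * np.exp(-(V + 65.0) / 18.0)
alpha_h = lambda V: 0.07 * np.exp(-(V + 65.0) / 20.0)
beta_h  = lambda V: 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
alpha_n = lambda V: 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
beta_n  = lambda V: 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0                     # time step and duration, ms
V = -65.0                              # membrane potential, mV
m, h, n = 0.05, 0.6, 0.32              # gating variables near rest
spikes, above = 0, False

for i in range(int(T / dt)):
    t = i * dt
    I_ext = 10.0 if 5.0 <= t <= 45.0 else 0.0   # injected current, uA/cm^2

    # Ionic currents through the sodium, potassium, and leak conductances
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K * n**4 * (V - E_K)
    I_L  = g_L * (V - E_L)

    # Forward-Euler update of the membrane equation and gating kinetics
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)

    # Count upward crossings of 0 mV as action potentials
    if V > 0.0 and not above:
        spikes, above = spikes + 1, True
    elif V < -30.0:
        above = False

print(f"{spikes} action potentials during a 10 uA/cm^2 current step")
```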
Class Date: April 15th, 2020
Regulation of Synaptic Efficacy by Coincidence of Postsynaptic APs and EPSPs
Authors: Henry Markram, Joachim Lubke, Michael Frotscher, and Bert Sakmann
Journal: Science
Year: 1997
Group: Delta
Abstract: Activity-driven modifications in synaptic connections between neurons in the neocortex may occur during development and learning. In dual whole-cell voltage recordings from pyramidal neurons, the coincidence of postsynaptic action potentials (APs) and unitary excitatory postsynaptic potentials (EPSPs) was found to induce changes in EPSPs. Their average amplitudes were differentially up- or down-regulated, depending on the precise timing of postsynaptic APs relative to EPSPs. These observations suggest that APs propagating back into dendrites serve to modify single active synaptic connections, depending on the pattern of electrical activity in the pre- and postsynaptic neurons.
Hebbian STDP in mushroom bodies facilitates the synchronous flow of olfactory information in locusts
Authors: Stijn Cassenaer and Gilles Laurent
Journal: Nature
Year: 2007
Group: Zeta
Abstract: Odour representations in insects undergo progressive transformations and decorrelation from the receptor array to the presumed site of odour learning, the mushroom body. There, odours are represented by sparse assemblies of Kenyon cells in a large population. Using intracellular recordings in vivo, we examined transmission and plasticity at the synapse made by Kenyon cells onto downstream targets in locusts. We find that these individual synapses are excitatory and undergo Hebbian spike-timing dependent plasticity (STDP) on a +/-25 ms timescale. When placed in the context of odour-evoked Kenyon cell activity (a 20-Hz oscillatory population discharge), this form of STDP enhances the synchronization of the Kenyon cells’ targets and thus helps preserve the propagation of the odour-specific codes through the olfactory system.
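For readers who want to see what such a learning rule looks like, here is a minimal sketch of a generic pair-based Hebbian STDP window. Only the roughly +/-25 ms width is motivated by the abstract above; the exponential form and the amplitudes are illustrative assumptions rather than the fitted curve from the paper.

```python
import numpy as np

# Generic pair-based Hebbian STDP window (an illustrative sketch, not the
# curve fitted in the paper). tau_plus / tau_minus set the temporal window;
# ~25 ms roughly matches the timescale quoted in the abstract. Amplitudes
# are arbitrary.
A_plus, A_minus = 0.01, 0.012        # potentiation / depression amplitudes (assumed)
tau_plus, tau_minus = 25.0, 25.0     # time constants, ms

def stdp_dw(dt_ms):
    """Weight change for one pre/post spike pair.

    dt_ms = t_post - t_pre: positive (pre leads post) gives potentiation,
    negative (post leads pre) gives depression.
    """
    if dt_ms >= 0:
        return A_plus * np.exp(-dt_ms / tau_plus)
    return -A_minus * np.exp(dt_ms / tau_minus)

# Accumulate the rule over all pre/post spike pairs (times in ms)
pre_spikes = [10.0, 60.0, 110.0]
post_spikes = [18.0, 55.0, 140.0]
w = 0.5
for t_pre in pre_spikes:
    for t_post in post_spikes:
        w += stdp_dw(t_post - t_pre)
print(f"synaptic weight after pairing: {w:.3f}")
```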
Conditional modulation of spike-timing-dependent plasticity for olfactory learning
Authors: Stijn Cassenaer and Gilles Laurent
Journal: Nature
Year: 2012
Group: Epsilon
Abstract: Mushroom bodies are a well-known site for associative learning in insects. Yet the precise mechanisms that underlie plasticity there and ensure their specificity remain elusive. In locusts, the synapses between the intrinsic mushroom body neurons and their postsynaptic targets obey a Hebbian spike-timing-dependent plasticity (STDP) rule. Although this property homeostatically regulates the timing of mushroom body output, its potential role in associative learning is unknown. Here we show in vivo that pre–post pairing causing STDP can, when followed by the local delivery of a reinforcement-mediating neuromodulator, specify the synapses that will undergo an associative change. At these synapses, and there only, the change is a transformation of the STDP rule itself. These results illustrate the multiple actions of STDP, including a role in associative learning, despite potential temporal dissociation between the pairings that specify synaptic modification and the delivery of reinforcement-mediating neuromodulator signals.
Phase relationship between hippocampal place units and the EEG theta rhythm
Authors: John O'Keefe and Michael L. Recce
Journal: Hippocampus
Year: 1993
Group: Epsilon
Abstract: Many complex spike cells in the hippocampus of the freely moving rat have as their primary correlate the animal's location in an environment (place cells). In contrast, the hippocampal electroencephalograph theta pattern of rhythmical waves (7–12 Hz) is better correlated with a class of movements that change the rat's location in an environment. During movement through the place field, the complex spike cells often fire in a bursting pattern with an interburst frequency in the same range as the concurrent electroencephalograph theta. The present study examined the phase of the theta wave at which the place cells fired. It was found that firing consistently began at a particular phase as the rat entered the field but then shifted in a systematic way during traversal of the field, moving progressively forward on each theta cycle. This precession of the phase ranged from 100° to 355° in different cells. The effect appeared to be due to the fact that individual cells had a higher interburst rate than the theta frequency. The phase was highly correlated with spatial location and less well correlated with temporal aspects of behavior, such as the time after place field entry. These results have implications for several aspects of hippocampal function. First, by using the phase relationship as well as the firing rate, place cells can improve the accuracy of place coding. Second, the characteristics of the phase shift constrain the models that define the construction of place fields. Third, the results restrict the temporal and spatial circumstances under which synapses in the hippocampus could be modified.
Large environments reveal the statistical structure governing hippocampal representations
Authors: P. Dylan Rich, Hua-Peng Liaw, and Albert K. Lee
Journal: Science
Year: 2014
Group: Zeta
Abstract: The rules governing the formation of spatial maps in the hippocampus have not been determined. We investigated the large-scale structure of place field activity by recording hippocampal neurons in rats exploring a previously unencountered 48-meter-long track. Single-cell and population activities were well described by a two-parameter stochastic model. Individual neurons had their own characteristic propensity for forming fields randomly along the track, with some cells expressing many fields and many exhibiting few or none. Because of the particular distribution of propensities across cells, the number of neurons with fields scaled logarithmically with track length over a wide, ethological range. These features constrain hippocampal memory mechanisms, may allow efficient encoding of environments and experiences of vastly different extents and durations, and could reflect general principles of population coding.
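The abstract does not spell out the two-parameter model, so the sketch below illustrates one common reading of it: a gamma-Poisson process in which each cell draws its own field-formation propensity from a long-tailed gamma distribution and then lays down fields as a Poisson process along the track. Both the functional form and the numbers here are assumptions for illustration, not the paper's fits.

```python
import numpy as np

# Gamma-Poisson sketch of place-field propensities: each cell has its own
# rate of field formation (gamma-distributed across cells) and places fields
# as a Poisson process along the track. Functional form and parameters are
# illustrative assumptions, not the paper's fitted values.
rng = np.random.default_rng(0)
n_cells = 2000
shape, mean_rate = 0.6, 0.15      # gamma shape; mean fields per meter (assumed)
propensity = rng.gamma(shape, mean_rate / shape, size=n_cells)  # fields/m per cell

for track_m in (3, 10, 48, 100):
    n_fields = rng.poisson(propensity * track_m)   # fields formed by each cell
    frac_active = np.mean(n_fields > 0)
    print(f"{track_m:>4} m track: {frac_active:.0%} of cells have at least one field")
```

With a long-tailed propensity distribution, the recruited fraction of cells grows slowly with track length, qualitatively matching the logarithmic scaling described above.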
Behavioral time scale synaptic plasticity underlies CA1 place fields
Authors: Katie C. Bittner, Aaron D. Milstein, Christine Grienberger, Sandro Romani, and Jeffrey C. Magee
Journal: Science
Year: 2017
Group: Delta
Abstract: Learning is primarily mediated by activity-dependent modifications of synaptic strength within neuronal circuits. We discovered that place fields in hippocampal area CA1 are produced by a synaptic potentiation notably different from Hebbian plasticity. Place fields could be produced in vivo in a single trial by potentiation of input that arrived seconds before and after complex spiking. The potentiated synaptic input was not initially coincident with action potentials or depolarization. This rule, named behavioral time scale synaptic plasticity, abruptly modifies inputs that were neither causal nor close in time to postsynaptic activation. In slices, five pairings of subthreshold presynaptic activity and calcium (Ca2+) plateau potentials produced a large potentiation with an asymmetric seconds-long time course. This plasticity efficiently stores entire behavioral sequences within synaptic weights to produce predictive place cell activity.
Class Date: April 29th, 2020
Reverse replay of behavioural sequences in hippocampal place cells during the awake state
Authors: David J. Foster and Matthew A. Wilson
Journal: Nature
Year: 2006
Group: Zeta
Abstract: The hippocampus has long been known to be involved in spatial navigational learning in rodents, and in memory for events in rodents, primates and humans. A unifying property of both navigation and event memory is a requirement for dealing with temporally sequenced information. Reactivation of temporally sequenced memories for previous behavioural experiences has been reported in sleep in rats. Here we report that sequential replay occurs in the rat hippocampus during awake periods immediately after spatial experience. This replay has a unique form, in which recent episodes of spatial experience are replayed in a temporally reversed order. This replay is suggestive of a role in the evaluation of event sequences in the manner of reinforcement learning models. We propose that such replay might constitute a general mechanism of learning and memory.
Awake hippocampal sharp-wave ripples support spatial memory
Authors: Shantanu P. Jadhav, Caleb Kemere, P. Walter German, and Loren M. Frank
Journal: Science
Year: 2012
Group: Delta
Abstract: The hippocampus is critical for spatial learning and memory. Hippocampal neurons in awake animals exhibit place field activity that encodes current location, and sharp-wave ripple (SWR) activity during which representations based on past experiences are often replayed. The relationship between these patterns of activity and the memory functions of the hippocampus is poorly understood. We interrupted awake SWRs in animals learning a spatial alternation task. We observed a specific learning and performance deficit that persisted throughout training. This deficit was associated with awake SWR activity as SWR interruption left place field activity and post-experience SWR reactivation intact. These results provide a link between awake SWRs and hippocampal memory processes, and suggest that awake replay of memory-related information during SWRs supports learning and memory-guided decision-making.
Slow waves, sharp waves, ripples, and REM in sleeping dragons
Authors: Mark Shein-Idelson, Janie M. Ondracek, Hua-Peng Liaw, Sam Reiter, Gilles Laurent
Journal: Science
Year: 2016
Group: Epsilon
Abstract: Sleep has been described in animals ranging from worms to humans. Yet the electrophysiological characteristics of brain sleep, such as slow-wave (SW) and rapid eye movement (REM) activities, are thought to be restricted to mammals and birds. Recording from the brain of a lizard, the Australian dragon Pogona vitticeps, we identified SW and REM sleep patterns, thus pushing back the probable evolution of these dynamics at least to the emergence of amniotes. The SW and REM sleep patterns that we observed in lizards oscillated continuously for 6 to 10 hours with a period of ~80 seconds. The networks controlling SW-REM antagonism in amniotes may thus originate from a common, ancient oscillator circuit. Lizard SW dynamics closely resemble those observed in rodent hippocampal CA1, yet they originate from a brain area, the dorsal ventricular ridge, that has no obvious hodological similarity with the mammalian hippocampus.
Selective suppression of hippocampal ripples impairs spatial memory
Authors: Gabrielle Girardeau, Karim Benchenane, Sidney I. Wiener, György Buzsáki, and Michaël B. Zugaro
Journal: Nature Neuroscience
Year: 2009
Abstract: Sharp wave–ripple (SPW-R) complexes in the hippocampus-entorhinal cortex are believed to be important for transferring labile memories from the hippocampus to the neocortex for long-term storage. We found that selective elimination of SPW-Rs during post-training consolidation periods resulted in performance impairment in rats trained on a hippocampus-dependent spatial memory task. Our results provide evidence for a prominent role of hippocampal SPW-Rs in memory consolidation.
The REM sleep–memory consolidation hypothesis
Authors: Jerome M. Siegel
Journal: Science
Year: 2001
Group: Epsilon
Abstract: It has been hypothesized that REM (rapid eye movement) sleep has an important role in memory consolidation. The evidence for this hypothesis is reviewed and found to be weak and contradictory. Animal studies correlating changes in REM sleep parameters with learning have produced inconsistent results and are confounded by stress effects. Humans with pharmacological and brain lesion–induced suppression of REM sleep do not show memory deficits, and other human sleep-learning studies have not produced consistent results. The time spent in REM sleep is not correlated with learning ability across humans, nor is there a positive relation between REM sleep time or intensity and encephalization across species. Although sleep is clearly important for optimum acquisition and performance of learned tasks, a major role in memory consolidation is unproven.
Temporally structured replay of awake hippocampal ensemble activity during rapid eye movement sleep
Authors: Kenway Louie and Matthew A. Wilson
Journal: Neuron
Year: 2001
Group: Delta
Abstract: Human dreaming occurs during rapid eye movement (REM) sleep. To investigate the structure of neural activity during REM sleep, we simultaneously recorded the activity of multiple neurons in the rat hippocampus during both sleep and awake behavior. We show that temporally sequenced ensemble firing rate patterns reflecting tens of seconds to minutes of behavioral experience are reproduced during REM episodes at an equivalent timescale. Furthermore, within such REM episodes behavior-dependent modulation of the subcortically driven theta rhythm is also reproduced. These results demonstrate that long temporal sequences of patterned multineuronal activity suggestive of episodic memory traces are reactivated during REM sleep. Such reactivation may be important for memory processing and provides a basis for the electrophysiological examination of the content of dream states.
Control of REM sleep by ventral medulla GABAergic neurons
Authors: Franz Weber, Shinjae Chung, Kevin T. Beier, Min Xu, Liqun Luo, and Yang Dan
Journal: Nature
Year: 2015
Group: Zeta
Abstract: Rapid eye movement (REM) sleep is a distinct brain state characterized by activated electroencephalogram and complete skeletal muscle paralysis, and is associated with vivid dreams. Transection studies by Jouvet first demonstrated that the brainstem is both necessary and sufficient for REM sleep generation, and the neural circuits in the pons have since been studied extensively. The medulla also contains neurons that are active during REM sleep, but whether they play a causal role in REM sleep generation remains unclear. Here we show that a GABAergic (γ-aminobutyric-acid-releasing) pathway originating from the ventral medulla powerfully promotes REM sleep in mice. Optogenetic activation of ventral medulla GABAergic neurons rapidly and reliably initiated REM sleep episodes and prolonged their durations, whereas inactivating these neurons had the opposite effects. Optrode recordings from channelrhodopsin-2-tagged ventral medulla GABAergic neurons showed that they were most active during REM sleep (REMmax), and during wakefulness they were preferentially active during eating and grooming. Furthermore, dual retrograde tracing showed that the rostral projections to the pons and midbrain and caudal projections to the spinal cord originate from separate ventral medulla neuron populations. Activating the rostral GABAergic projections was sufficient for both the induction and maintenance of REM sleep, which are probably mediated in part by inhibition of REM-suppressing GABAergic neurons in the ventrolateral periaqueductal grey. These results identify a key component of the pontomedullary network controlling REM sleep. The capability to induce REM sleep on command may offer a powerful tool for investigating its functions.
The function of dream sleep
Authors: Francis Crick and Graeme Mitchison
Journal: Nature
Year: 1983
Abstract: We propose that the function of dream sleep (more properly rapid-eye movement or REM sleep) is to remove certain undesirable modes of interaction in networks of cells in the cerebral cortex. We postulate that this is done in REM sleep by a reverse learning mechanism, so that the trace in the brain of the unconscious dream is weakened, rather than strengthened, by the dream.
Neural networks and physical systems with emergent collective computational abilities
Authors: John J. Hopfield
Journal: Proceedings of the National Academy of Sciences
Year: 1982
Group: Epsilon
Abstract: Computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system. A model of such a system is given, based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, and time sequence retention. The collective properties are only weakly sensitive to details of the modeling or the failure of individual devices.
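The content-addressable memory described here can be written down in a few lines: store patterns with a Hebbian outer-product rule and recall them by asynchronous updates from a degraded cue. The sketch below uses arbitrary sizes (100 units, 5 patterns) purely for illustration.

```python
import numpy as np

# Minimal Hopfield network: binary (+/-1) units, Hebbian outer-product storage,
# asynchronous updates. Network size and number of patterns are arbitrary.
rng = np.random.default_rng(1)
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian storage: W_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-connections
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def recall(cue, n_sweeps=10):
    """Asynchronous updates from a (possibly corrupted) cue."""
    s = cue.copy()
    for _ in range(n_sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt 20% of one stored pattern and let the network complete it
cue = patterns[0].copy()
cue[rng.choice(N, size=20, replace=False)] *= -1
overlap = recall(cue) @ patterns[0] / N
print(f"overlap with the stored pattern after recall: {overlap:+.2f}")
```

The recalled state typically matches the stored pattern exactly (overlap +1.00) as long as the number of stored patterns stays well below the network's capacity of roughly 0.14 N.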
Synaptic mechanisms of pattern completion in the hippocampal CA3 network
Authors: Segundo Jose Guzman, Alois Schlögl, Michael Frotscher, and Peter Jonas
Journal: Science
Year: 2016
Group: Zeta
Abstract: The hippocampal CA3 region plays a key role in learning and memory. Recurrent CA3–CA3 synapses are thought to be the subcellular substrate of pattern completion. However, the synaptic mechanisms of this network computation remain enigmatic. To investigate these mechanisms, we combined functional connectivity analysis with network modeling. Simultaneous recording from up to eight CA3 pyramidal neurons revealed that connectivity was sparse, spatially uniform, and highly enriched in disynaptic motifs (reciprocal, convergence, divergence, and chain motifs). Unitary connections were composed of one or two synaptic contacts, suggesting efficient use of postsynaptic space. Real-size modeling indicated that CA3 networks with sparse connectivity, disynaptic motifs, and single-contact connections robustly generated pattern completion. Thus, macro- and microconnectivity contribute to efficient memory storage and retrieval in hippocampal networks.
Attractor dynamics in the hippocampal representation of the local environment
Authors: Tom J. Wills, Colin Lever, Francesca Cacucci, Neil Burgess, John O'Keefe
Journal: Science
Year: 2005
Group: Delta
Abstract: Memories are thought to be attractor states of neuronal representations, with the hippocampus a likely substrate for context-dependent episodic memories. However, such states have not been directly observed. For example, the hippocampal place cell representation of location was previously found to respond continuously to changes in environmental shape alone. We report that exposure to novel square and circular environments made of different materials creates attractor representations for both shapes: Place cells abruptly and simultaneously switch between representations as environmental shape changes incrementally. This enables study of attractor dynamics in a cognitive representation and may correspond to the formation of distinct contexts in context-dependent memory.
Traveling waves in developing cerebellar cortex mediated by asymmetrical Purkinje cell connectivity
Authors: Alanna J. Watt, Hermann Cuntz, Masahiro Mori, Zoltan Nusser, P. Jesper Sjöström, and Michael Häusser
Journal: Nature Neuroscience
Year: 2009
Group: Epsilon
Abstract: Correlated network activity is important in the development of many neural circuits. Purkinje cells are among the first neurons to populate the cerebellar cortex, where they sprout exuberant axon collaterals. We used multiple patch-clamp recordings targeted with two-photon microscopy to characterize monosynaptic connections between the Purkinje cells of juvenile mice. We found that Purkinje cell axon collaterals projected asymmetrically in the sagittal plane, directed away from the lobule apex. On the basis of our anatomical and physiological characterization of this connection, we constructed a network model that robustly generated waves of activity that traveled along chains of connected Purkinje cells. Consistent with the model, we observed traveling waves of activity in Purkinje cells in sagittal slices from young mice, which require GABAA receptor–mediated transmission and intact Purkinje cell axon collaterals. These traveling waves are absent in adult mice, suggesting they have a developmental role in wiring the cerebellar cortical microcircuit.
Hippocampal theta oscillations are travelling waves
Authors: Evgueniy V. Lubenov and Athanassios G. Siapas
Journal: Nature
Year: 2009
Group: Delta
Abstract: Theta oscillations clock hippocampal activity during awake behaviour and rapid eye movement (REM) sleep. These oscillations are prominent in the local field potential, and they also reflect the subthreshold membrane potential and strongly modulate the spiking of hippocampal neurons. The prevailing view is that theta oscillations are synchronized throughout the hippocampus, despite the lack of conclusive experimental evidence. In contrast, here we show that in freely behaving rats, theta oscillations in area CA1 are travelling waves that propagate roughly along the septotemporal axis of the hippocampus. Furthermore, we find that spiking in the CA1 pyramidal cell layer is modulated in a consistent travelling wave pattern. Our results demonstrate that theta oscillations pattern hippocampal activity not only in time, but also across anatomical space. The presence of travelling waves indicates that the instantaneous output of the hippocampus is topographically organized and represents a segment, rather than a point, of physical space.
The sleep slow oscillation as a traveling wave
Authors: Marcello Massimini, Reto Huber, Fabio Ferrarelli, Sean Hill and Giulio Tononi
Journal: Journal of Neuroscience
Year: 2004
Group: Zeta
Abstract: During much of sleep, virtually all cortical neurons undergo a slow oscillation (<1 Hz) in membrane potential, cycling from a hyperpolarized state of silence to a depolarized state of intense firing. This slow oscillation is the fundamental cellular phenomenon that organizes other sleep rhythms such as spindles and slow waves. Using high-density electroencephalogram recordings in humans, we show here that each cycle of the slow oscillation is a traveling wave. Each wave originates at a definite site and travels over the scalp at an estimated speed of 1.2-7.0 m/sec. Waves originate more frequently in prefrontal-orbitofrontal regions and propagate in an anteroposterior direction. Their rate of occurrence increases progressively reaching almost once per second as sleep deepens. The pattern of origin and propagation of sleep slow oscillations is reproducible across nights and subjects and provides a blueprint of cortical excitability and connectivity. The orderly propagation of correlated activity along connected pathways may play a role in spike timing-dependent synaptic plasticity during sleep.
Class Date: May 27th, 2020
Cortical and thalamic cellular correlates of electroencephalographic burst-suppression
Authors: M. Steriade, F. Amzica, and D. Contreras
Journal: Electroencephalography and Clinical Neurophysiology
Year: 1994
Group: Epsilon
Abstract: This experimental study on anesthetized cats used intracellular recordings of cortical, thalamocortical and reticular thalamic neurons (n = 54), as well as multi-site extracellular recordings (n = 36), to investigate the cellular correlates of EEG burst-suppression patterns, defined as alternating wave bursts and periods of electrical silence. Burst-suppression was elicited by the administration of the same or other anesthetic agents upon the background of an already synchronized EEG activity.
Recovery of consciousness is mediated by a network of discrete metastable activity states
Authors: Andrew E. Hudson, Diany Paola Calderon, Donald W. Pfaff, and Alex Proekt
Journal: Proceedings of the National Academy of Sciences
Year: 2014
Group: Delta
Abstract: It is not clear how, after a large perturbation, the brain explores the vast space of potential neuronal activity states to recover those compatible with consciousness. Here, we analyze recovery from pharmacologically induced coma to show that neuronal activity en route to consciousness is confined to a low-dimensional subspace. In this subspace, neuronal activity forms discrete metastable states persistent on the scale of minutes. The network of transitions that links these metastable states is structured such that some states form hubs that connect groups of otherwise disconnected states. Although many paths through the network are possible, to ultimately enter the activity state compatible with consciousness, the brain must first pass through these hubs in an orderly fashion. This organization of metastable states, along with dramatic dimensionality reduction, significantly simplifies the task of sampling the parameter space to recover the state consistent with wakefulness on a physiologically relevant timescale.
A common neuroendocrine substrate for diverse general anesthetics and sleep
Authors: Li-Feng Jiang-Xie, Luping Yin, Shengli Zhao, Vincent Prevosto, Bao-Xia Han, Kafui Dzirasa, Fan Wang
Journal: Neuron
Year: 2019
Group: Zeta
Abstract: How general anesthesia (GA) induces loss of consciousness remains unclear, and whether diverse anesthetic drugs and sleep share a common neural pathway is unknown. Previous studies have revealed that many GA drugs inhibit neural activity through targeting GABA receptors. Here, using Fos staining, ex vivo brain slice recording, and in vivo multi-channel electrophysiology, we discovered a core ensemble of hypothalamic neurons in and near the supraoptic nucleus, consisting primarily of neuroendocrine cells, which are persistently and commonly activated by multiple classes of GA drugs. Remarkably, chemogenetic or brief optogenetic activation of these anesthesia-activated neurons (AANs) strongly promotes slow-wave sleep and potentiates GA, whereas conditional ablation or inhibition of AANs leads to diminished slow-wave oscillation, significant loss of sleep, and shortened durations of GA. These findings identify a common neural substrate underlying diverse GA drugs and natural sleep and reveal a crucial role of the neuroendocrine system in regulating global brain states.
(Optional) Consciousness and anesthesia
Authors: Michael T. Alkire, Anthony G. Hudetz, Giulio Tononi
Journal: Science
Year: 2008
Abstract: When we are anesthetized, we expect consciousness to vanish. But does it always? Although anesthesia undoubtedly induces unresponsiveness and amnesia, the extent to which it causes unconsciousness is harder to establish. For instance, certain anesthetics act on areas of the brain's cortex near the midline and abolish behavioral responsiveness, but not necessarily consciousness. Unconsciousness is likely to ensue when a complex of brain regions in the posterior parietal area is inactivated. Consciousness vanishes when anesthetics produce functional disconnection in this posterior complex, interrupting cortical communication and causing a loss of integration; or when they lead to bistable, stereotypic responses, causing a loss of information capacity. Thus, anesthetics seem to cause unconsciousness when they block the brain's ability to integrate information.
Class Date: June 3rd, 2020
Vector-based navigation using grid-like representations in artificial agents
Authors: Andrea Banino, Caswell Barry, Benigno Uria, Charles Blundell, Timothy Lillicrap, Piotr Mirowski, Alexander Pritzel, Martin J. Chadwick, Thomas Degris, Joseph Modayil, Greg Wayne, Hubert Soyer, Fabio Viola, Brian Zhang, Ross Goroshin, Neil Rabinowitz, Razvan Pascanu, Charlie Beattie, Stig Petersen, Amir Sadik, Stephen Gaffney, Helen King, Koray Kavukcuoglu, Demis Hassabis, Raia Hadsell, and Dharshan Kumaran
Journal: Nature
Year: 2018
Group: Epsilon
Abstract: Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go. Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex. Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space and is critical for integrating self-motion (path integration) and planning direct trajectories to goals (vector-based navigation). Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities. We first trained a recurrent network to perform path integration, leading to the emergence of representations resembling grid cells, as well as other entorhinal cell types. We then showed that this representation provided an effective basis for an agent to locate goals in challenging, unfamiliar, and changeable environments—optimizing the primary objective of navigation through deep reinforcement learning. The performance of agents endowed with grid-like representations surpassed that of an expert human and comparison agents, with the metric quantities necessary for vector-based navigation derived from grid-like units within the network. Furthermore, grid-like representations enabled agents to conduct shortcut behaviours reminiscent of those performed by mammals. Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation. As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation, demonstrating that the latter can be combined with path-based strategies to support navigation in challenging environments.
A distributional code for value in dopamine-based reinforcement learning
Authors: Will Dabney, Zeb Kurth-Nelson, Naoshige Uchida, Clara Kwon Starkweather, Demis Hassabis, Rémi Munos, and Matthew Botvinick
Journal: Nature
Year: 2020
Group: Zeta
Abstract: Since its introduction, the reward prediction error theory of dopamine has explained a wealth of empirical phenomena, providing a unifying framework for understanding the representation of reward and value in the brain. According to the now canonical theory, reward predictions are represented as a single scalar quantity, which supports learning about the expectation, or mean, of stochastic outcomes. Here we propose an account of dopamine-based reinforcement learning inspired by recent artificial intelligence research on distributional reinforcement learning. We hypothesized that the brain represents possible future rewards not as a single mean, but instead as a probability distribution, effectively representing multiple future outcomes simultaneously and in parallel. This idea implies a set of empirical predictions, which we tested using single-unit recordings from mouse ventral tegmental area. Our findings provide strong evidence for a neural realization of distributional reinforcement learning.
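The central idea can be seen in a small simulation: a population of TD-like value predictors, each with its own asymmetry between the learning rates applied to positive and negative prediction errors, converges to different expectiles of the reward distribution rather than to its single mean. The reward distribution, asymmetries, and learning rates below are arbitrary choices for illustration, not quantities from the paper.

```python
import numpy as np

# Distributional TD sketch: predictors with asymmetric learning rates for
# positive vs. negative prediction errors converge to different expectiles
# of the reward distribution. All numbers here are illustrative assumptions.
rng = np.random.default_rng(2)

def sample_reward():
    """Bimodal reward distribution for a single cue (arbitrary choice)."""
    return rng.normal(1.0, 0.2) if rng.random() < 0.3 else rng.normal(5.0, 0.5)

asymmetries = np.linspace(0.1, 0.9, 9)   # tau = alpha_plus / (alpha_plus + alpha_minus)
values = np.zeros_like(asymmetries)      # one value predictor per asymmetry
base_lr = 0.02

for _ in range(20000):
    r = sample_reward()
    delta = r - values                                 # per-predictor prediction errors
    lr = np.where(delta > 0, base_lr * asymmetries,    # larger step for optimistic predictors
                  base_lr * (1.0 - asymmetries))       # larger step for pessimistic predictors
    values += lr * delta

print("learned values, from pessimistic to optimistic predictors:")
print(np.round(values, 2))
```

Pessimistic predictors settle near the lower reward mode and optimistic ones near the upper mode, so the population as a whole encodes the shape of the distribution rather than only its mean.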
Backpropagation and the brain
Authors: Timothy P. Lillicrap, Adam Santoro, Luke Marris, Colin J. Akerman, and Geoffrey Hinton
Journal: Nature Reviews Neuroscience
Year: 2020
Group: Delta
Abstract: During learning, the brain modifies synapses to improve behaviour. In the cortex, synapses are embedded within multilayered networks, making it difficult to determine the effect of an individual synaptic modification on the behaviour of the system. The backpropagation algorithm solves this problem in deep artificial neural networks, but historically it has been viewed as biologically problematic. Nonetheless, recent developments in neuroscience and the successes of artificial neural networks have reinvigorated interest in whether backpropagation offers insights for understanding learning in the cortex. The backpropagation algorithm learns quickly by computing synaptic updates using feedback connections to deliver error signals. Although feedback connections are ubiquitous in the cortex, it is difficult to see how they could deliver the error signals required by strict formulations of backpropagation. Here we build on past and recent developments to argue that feedback connections may instead induce neural activities whose differences can be used to locally approximate these signals and hence drive effective learning in deep networks in the brain.
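To make concrete what "computing synaptic updates using feedback connections to deliver error signals" involves, here is a minimal two-layer network trained with backpropagation on XOR. The architecture, task, and learning rate are arbitrary illustrative choices; the sketch says nothing about the biological-plausibility questions the review itself addresses.

```python
import numpy as np

# Minimal backpropagation: a two-layer sigmoid network learning XOR.
# Architecture, task, and learning rate are arbitrary illustrative choices.
rng = np.random.default_rng(3)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0.0, 1.0, (2, 8)), np.zeros(8)   # input -> hidden weights
W2, b2 = rng.normal(0.0, 1.0, (8, 1)), np.zeros(1)   # hidden -> output weights
lr = 0.5
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)       # hidden activities
    out = sigmoid(h @ W2 + b2)     # network output

    # Backward pass: the output error is carried back to the hidden layer
    # through the transposed forward weights
    err_out = (out - y) * out * (1.0 - out)
    err_hid = (err_out @ W2.T) * h * (1.0 - h)

    # Each update is a local product of presynaptic activity and the
    # back-propagated error at the postsynaptic unit
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_hid
    b1 -= lr * err_hid.sum(axis=0)

print("network outputs on XOR inputs:", np.round(out.ravel(), 2))
```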
(Optional) Deep learning
Authors: Yann LeCun, Yoshua Bengio, and Geoffrey Hinton
Journal: Nature
Year: 2015
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
Class Date: June 10th, 2020 (Canceled)