Keeping information in mind can mean storing it between synapses


Overview: The findings support the newer idea that neural networks store information by making short-term changes at synapses. The study sheds new light on the role of short-term synaptic plasticity in working memory storage.

Source: Picower Institute for Learning and Memory

Between the time you read the Wi-Fi password off the café’s menu board and the time you get back to your laptop to enter it, you have to keep it in mind. If you’ve ever wondered how your brain does that, you are asking a question about working memory that researchers have strived to explain for decades. Now MIT neuroscientists have published a key new insight into how it works.

In a study in PLOS Computational Biology, scientists at the Picower Institute for Learning and Memory compared measurements of brain cell activity in an animal performing a working memory task with the output of several computer models that represent two theories of the underlying mechanism for holding information in mind.

The results argued strongly for the newer idea that a network of neurons stores information by making momentary changes to the pattern of its connections, or synapses, and contradicted the traditional alternative that memory is maintained by neurons remaining persistently active (like an engine idling).

While both models allowed information to be held in mind, only the versions that allowed synapses to temporarily change connections (“short-term synaptic plasticity”) produced neural activity patterns that mimicked what was actually observed in real brains at work.

The idea that brain cells maintain memories by always being “on” may be simpler, acknowledged senior author Earl K. Miller, but it does not represent what nature does and cannot produce the sophisticated flexibility of thought that can come from intermittent neural activity supported by short-term synaptic plasticity.

“You need these kinds of mechanisms to give working memory activity the freedom it needs to be flexible,” said Miller, Picower Professor of Neuroscience in MIT’s Department of Brain and Cognitive Sciences (BCS). “If working memory were just sustained activity, it would be as simple as a light switch. But working memory is just as complex and dynamic as our thoughts.”

Co-lead author Leo Kozachkov, who received his PhD from MIT in November for theoretical modeling work including this study, said matching computer models with real-world data was crucial.

“Most people think that working memory ‘happens’ in neurons – sustained neural activity leads to persistent thoughts. However, this view has recently come under scrutiny because it doesn’t really agree with the data,” said Kozachkov, who was co-supervised by co-senior author Jean-Jacques Slotine, a professor of BCS and mechanical engineering.

“Using artificial neural networks with short-term synaptic plasticity, we show that synaptic activity (rather than neural activity) can be a substrate for working memory. The important takeaway from our paper is: these ‘plastic’ neural network models are more brain-like, in a quantitative sense, and also have additional functional benefits in terms of robustness.”

Models match nature

Along with co-lead author John Tauber, a graduate student at MIT, Kozachkov’s goal was not just to determine how working memory information can be held in mind, but to shed light on how nature actually does it. That meant starting with “ground truth” measurements of the electrical “spiking” activity of hundreds of neurons in an animal’s prefrontal cortex as it played a working memory game. In each of many rounds, the animal was shown an image, which then disappeared.

A second later it would see two images, including the original, and had to look at the original to earn a small reward. The key moment is that intervening second, the “delay period,” during which the image must be kept in mind ahead of the test.

The team consistently observed what the Miller lab has seen many times before: the neurons spike frequently when seeing the original image, spike only intermittently during the delay, and then spike again when the image must be recalled during the test (these dynamics are governed by an interplay of beta- and gamma-frequency brain rhythms). In other words, spiking is strong when information must first be stored and when it must be recalled, but sporadic when it must merely be maintained. Spiking is not sustained during the delay.

In addition, the team trained software decoders to read out the working memory information from the measurements of spiking activity. The decoders were highly accurate when spiking was high, but not when it was low, as during the delay period. This suggested that spiking does not represent the information during the delay. But that raised a crucial question: if spiking doesn’t keep the information in mind, what does?

Researchers, including Mark Stokes of the University of Oxford, have proposed that changes in the relative strengths, or “weights,” of synapses could store the information instead. The MIT team put that idea to the test by computationally modeling neural networks embodying versions of each main theory. As with the real animal, the machine-learning networks were trained to perform the same working memory task and to produce neural activity that could also be read by a decoder.
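The kind of synaptic mechanism the plasticity models rely on can be sketched in a few lines. The following is a toy illustration, not the paper’s actual model: a Mongillo-style facilitation/depression rule in which all parameter values (`U`, `tau_f`, `tau_d`, the spike rates) are assumptions chosen for demonstration. A burst of presynaptic spikes lifts the synaptic efficacy, and that efficacy then decays slowly, carrying a trace of the input through a spike-free delay.

```python
import numpy as np

def stsp_step(u, x, spikes, U=0.2, tau_f=1.5, tau_d=0.2, dt=0.01):
    """One Euler step of a facilitation (u) / depression (x) synapse model.
    u relaxes back to baseline U slowly (tau_f); each presynaptic spike
    pushes u toward 1 (facilitation) and consumes resources x, which
    recover quickly (tau_d). Effective synaptic strength ~ W * u * x."""
    u = u + dt * (U - u) / tau_f + U * (1.0 - u) * spikes
    x = x + dt * (1.0 - x) / tau_d - u * x * spikes
    return np.clip(u, 0.0, 1.0), np.clip(x, 0.0, 1.0)

rng = np.random.default_rng(0)
n = 50                                   # presynaptic units
u, x = np.full(n, 0.2), np.ones(n)

for _ in range(100):                     # stimulus period: dense spiking (1 s)
    u, x = stsp_step(u, x, (rng.random(n) < 0.5).astype(float))
u_after_stim = u.mean()

for _ in range(100):                     # delay period: no spikes at all (1 s)
    u, x = stsp_step(u, x, np.zeros(n))
u_after_delay = u.mean()
```

Because `tau_f` is long relative to the one-second delay, `u_after_delay` stays well above the 0.2 baseline even though no spikes occurred, which is the essence of an “activity-silent” memory trace held in the synapses rather than the spikes.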


The result was that the computational networks using short-term synaptic plasticity to encode information spiked when the real brain spiked and were quiet when it wasn’t. The networks relying on constant spiking to preserve memory spiked all the time, even when the real brain didn’t. And the decoder results showed that accuracy dropped during the delay period in the synaptic plasticity models but remained unnaturally high in the persistent spiking models.

In another layer of analysis, the team built a decoder to read out information from the synaptic weights. They found that during the delay period, the synapses represented working memory information that the spiking did not.
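As an illustration of that decoding logic, here is a toy sketch (not the study’s decoder or data): simulated delay-period features in which spiking is uninformative noise while hypothetical synaptic efficacies carry a trace of the remembered item, compared with a simple nearest-centroid decoder. All numbers are invented for demonstration.

```python
import numpy as np

def centroid_decoder_acc(feats, labels):
    """Leave-one-out nearest-class-centroid decoding accuracy."""
    idx = np.arange(len(labels))
    correct = 0
    for i in idx:
        mask = idx != i
        c0 = feats[mask & (labels == 0)].mean(axis=0)
        c1 = feats[mask & (labels == 1)].mean(axis=0)
        pred = int(np.linalg.norm(feats[i] - c1) < np.linalg.norm(feats[i] - c0))
        correct += int(pred == labels[i])
    return correct / len(labels)

rng = np.random.default_rng(1)
n_trials, n_units = 200, 30
labels = rng.integers(0, 2, n_trials)       # which of two items was shown

# Delay-period spiking: sparse and unrelated to the remembered item.
spike_feats = rng.poisson(1.0, (n_trials, n_units)).astype(float)

# Synaptic efficacies: the remembered item shifts half the units' weights.
weight_feats = rng.normal(0.0, 1.0, (n_trials, n_units))
weight_feats[labels == 1, : n_units // 2] += 1.5

acc_spikes = centroid_decoder_acc(spike_feats, labels)
acc_weights = centroid_decoder_acc(weight_feats, labels)
```

In this contrived setup the weight-based decoder is near ceiling while the spike-based decoder hovers near chance, mirroring the qualitative pattern the article describes for the delay period.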

Of the two model versions with short-term synaptic plasticity, the most realistic was called “PS-Hebb,” which has a negative feedback loop that keeps the neural network stable and robust, Kozachkov said.


In addition to being more closely aligned with nature, the synaptic plasticity models offered other benefits that are likely to matter to real brains. One was that the plasticity models retained information in their synaptic weights even after as many as half of the artificial neurons were “ablated.”

The persistent activity models broke down after losing only 10-20 percent of their synapses. And, Miller added, spiking only occasionally requires less energy than spiking persistently.

In addition, Miller said, brief bursts of spiking rather than sustained spiking leave room in time to store more than one item in memory. Research has shown that people can hold up to four different things in working memory.


Miller’s lab is planning new experiments to determine whether intermittent spiking and synaptic weight-based information storage models properly match real neural data when animals need to keep multiple things in mind instead of just one image.

Besides Miller, Kozachkov, Tauber and Slotine, the other authors of the article are Mikael Lundqvist and Scott Brincat.

Funding: The Office of Naval Research, the JPB Foundation, and ERC and VR Starting Grants funded the research.

About this synaptic plasticity research news

Writer: David Orenstein
Source: Picower Institute for Learning and Memory
Contact: David Orenstein – Picower Institute for Learning and Memory
Image: The image is in the public domain

Original research: Open access.
“Robust and brain-like working memory through short-term synaptic plasticity” by Earl K. Miller et al. PLOS Computational Biology


Abstract

Robust and brain-like working memory through short-term synaptic plasticity

Working memory was long thought to arise from a persistent spiking/attractor dynamic. However, recent work has suggested that short-term synaptic plasticity (STSP) may help maintain attractor states across gaps in time with few or no spikes.

To determine whether STSP provides additional functional benefits, we trained artificial recurrent neural networks (RNNs) with and without STSP to perform an object working memory task. We found that RNNs with and without STSP were able to retain memories despite distractors presented in the middle of the memory delay.

However, RNNs with STSP showed activity similar to that in the cortex of a non-human primate (NHP) performing the same task. In contrast, RNNs without STSP showed activity that was less brain-like. Furthermore, RNNs with STSP were more robust to network degradation than RNNs without STSP.

These results show that STSP can not only help preserve working memories, but also make neural networks more robust and brain-like.



