Summary: Findings shed light on how plastic and stable neural populations are able to co-exist in the brain.
Source: University of Cambridge
Our brains are highly skilled at learning patterns in the world and making sense of them. The brain continually learns and adapts throughout our lives, and even the neurons supporting learned behaviors, such as the daily walk to work, are constantly changing.
This “representational drift” occurs without any obvious change in behavior or task performance. Everything seems routine and stable: you follow the same path to work, make the same plan and take the same steps, but all the while, patterns of neural activity in certain parts of the brain are changing.
A new study, published in the journal PNAS, proposes how the brain stays stable despite changes in the neural code.
Cambridge neuroscientists and study co-authors Dr. Michael E. Rule and Dr. Timothy O’Leary argue that neurons (the cells that make your brain work) can detect when some of their inputs change and adjust the strength of influence that one neuron has on another to compensate, supporting a form of internal learning.
“These changes in the neural code bear similarities to how languages change gradually over time, while faithfully communicating common ideas and concepts,” says Dr. Rule, a Leverhulme Early Career Fellow in the Department of Engineering.
While some parts of the brain are plastic, and change rapidly, other parts show long-term stability. So how do neural circuits talk to each other without continuously having to re-learn the things that they have already learned? Even brain-machine interfaces—which are increasingly being used as assistive devices for people with cognitive or physical impairments—must contend with “drift.”
The researchers argue that homeostatic processes within single cells can help the brain to “watch itself” as it changes, and that internally generated signals help stable neural populations “learn” how to track the unstable ones. They base this conjecture on modeling and on observations of activity recorded in living brains.
Just as engineers are developing machine learning algorithms that automatically track neural representations as they change, the researchers propose that something similar could be at work in the brain, emerging from well-known learning rules and homeostatic processes.
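To see why such tracking matters, here is a minimal sketch (not from the study; the neuron count, drift rate, and noise level are arbitrary choices) in which a linear decoder fitted once to a drifting population code degrades over simulated days, while a decoder that is recalibrated against the behavior each day stays accurate:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_days, trials_per_day = 80, 30, 200
drift_rate = 0.03

# Drifting encoding: each neuron's tuning to the behavioral variable changes a little per day.
enc = rng.normal(size=n_neurons)

def record_day(enc):
    """Simulate one day's data: behavior x and noisy population activity R."""
    x = rng.uniform(-1, 1, size=trials_per_day)
    R = np.outer(x, enc) + 0.2 * rng.normal(size=(trials_per_day, n_neurons))
    return x, R

# A decoder fitted once on day 0, and one that is re-fitted every day.
x0, R0 = record_day(enc)
w_fixed = np.linalg.lstsq(R0, x0, rcond=None)[0]

fixed_err, tracking_err = [], []
for day in range(n_days):
    enc = enc + drift_rate * rng.normal(size=n_neurons)   # representational drift
    x, R = record_day(enc)
    w_track = np.linalg.lstsq(R, x, rcond=None)[0]        # recalibrated decoder
    fixed_err.append(np.mean((R @ w_fixed - x) ** 2))
    tracking_err.append(np.mean((R @ w_track - x) ** 2))

print(f"day-{n_days} error, fixed decoder:    {fixed_err[-1]:.3f}")
print(f"day-{n_days} error, tracking decoder: {tracking_err[-1]:.3f}")
```

Recalibrating against known behavior is how adaptive brain-machine interfaces typically cope with drift; the question the study asks is how the brain could manage a similar correction using only signals available inside the circuit.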
“This might explain how plastic and stable neural populations are able to coexist in the brain,” said Dr. Rule. “We already know that ‘representational drift’ happens in the hippocampus—the part of the brain that has a major role in learning and memory—and seems to happen in the parietal cortex—the area responsible for sensory perception and integration. What we propose are several specific mechanisms that could help make this plasticity compatible with long-term stability throughout the brain.”
Dr. O’Leary, Associate Professor in the Department of Engineering, said the study emphasizes the idea that “drift” may arise from continual learning.
“There is a huge unanswered challenge in artificial intelligence, namely the problem of building algorithms that can learn continually without corrupting previously learned information,” he said.
“The brain manifestly achieves this, and this work is a step in the direction of finding algorithms that can do the same.”
About this neuroscience research news
Author: Press Office
Source: University of Cambridge
Contact: Press Office – University of Cambridge
Image: The image is credited to Michael E. Rule
Original Research: Open access.
“Self-healing codes: How stable neural populations can track continually reconfiguring neural representations” by Michael E. Rule et al. PNAS
Abstract
Self-healing codes: How stable neural populations can track continually reconfiguring neural representations
As an adaptive system, the brain must retain a faithful representation of the world while continuously integrating new information. Recent experiments have measured population activity in cortical and hippocampal circuits over many days and found that patterns of neural activity associated with fixed behavioral variables and percepts change dramatically over time.
Such “representational drift” raises the question of how malleable population codes can interact coherently with stable long-term representations that are found in other circuits and with relatively rigid topographic mappings of peripheral sensory and motor signals. We explore how known plasticity mechanisms can allow single neurons to reliably read out an evolving population code without external error feedback.
We find that interactions between Hebbian learning and single-cell homeostasis can exploit redundancy in a distributed population code to compensate for gradual changes in tuning. Recurrent feedback of partially stabilized readouts could allow a pool of readout cells to further correct inconsistencies introduced by representational drift.
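As a rough illustration of that mechanism (a simplified sketch, not the paper's model: a single scalar variable, a linear readout, and a variance set point standing in for single-cell homeostasis, with all parameters chosen arbitrarily), the simulation below lets a Hebbian update with an Oja-style normalization pull a readout's weights toward the population's drifting encoding direction, while a homeostatic gain holds the readout's output variance near its set point. The plastic readout stays aligned with the code without any external error signal, whereas a frozen copy of the initial weights slowly loses alignment:

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_steps = 100, 20000
drift_rate, eta_hebb, eta_homeo = 1e-3, 5e-3, 5e-3
target_var = 1.0                      # homeostatic set point for readout variance

s = rng.normal(size=n_steps)          # latent variable the population encodes
enc = rng.normal(size=n_neurons)      # population tuning direction (drifts over time)
enc /= np.linalg.norm(enc)

w = enc.copy()                        # plastic readout, initially aligned with the code
w0 = enc.copy()                       # frozen readout kept for comparison
gain, var_est = 1.0, target_var       # readout cell's gain and running variance estimate

align_plastic, align_frozen = [], []
for t in range(n_steps):
    # Slow representational drift of the population's tuning.
    enc += drift_rate * rng.normal(size=n_neurons)
    enc /= np.linalg.norm(enc)

    r = enc * s[t] + 0.1 * rng.normal(size=n_neurons)    # noisy population activity
    y = gain * (w @ r)                                    # readout cell's response

    # Hebbian term: strengthen inputs that co-vary with the readout's own activity.
    # The Oja-style subtractive term keeps the weights from growing without bound.
    w += eta_hebb * (y * r - (y ** 2) * w)

    # Single-cell homeostasis: nudge the gain so output variance tracks its set point.
    var_est += eta_homeo * (y ** 2 - var_est)
    gain += eta_homeo * (target_var - var_est)

    align_plastic.append(abs(w @ enc) / np.linalg.norm(w))
    align_frozen.append(abs(w0 @ enc) / np.linalg.norm(w0))

print("final alignment with the drifted code (1.0 = perfect):")
print(f"  plastic readout (Hebb + homeostasis): {np.mean(align_plastic[-500:]):.3f}")
print(f"  frozen readout:                       {np.mean(align_frozen[-500:]):.3f}")
```

In this toy setting the self-adjusting readout keeps decoding the same underlying variable even though every neuron's tuning has changed, which is the qualitative behavior the abstract describes; the recurrent-feedback correction among pools of readout cells mentioned above is not modeled here.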
This shows how relatively simple, known mechanisms can stabilize neural tuning in the short term and provides a plausible explanation for how plastic neural codes remain integrated with consolidated, long-term representations.