Success Story

January 25, 2018

Learning from a predator… about learning. Two CBC awardees, Stephanie Palmer and Jason MacLean, UChicago, dissect the process of visual prediction in a hungry salamander

Congratulations to Stephanie Palmer and Jason MacLean, UChicago, for their recent publication in PNAS, titled “Learning to make external sensory stimulus predictions using internal correlations in populations of neurons.” Palmer and MacLean are co-recipients of a CBC Catalyst Award (2015), for the project: “Reading the cortical code for natural motion.” In addition, since 2017, Palmer has served on the CBC Catalyst Review Board (CRB).


Learning to make visual predictions with a few simple rules

UChicago Medicine, THE FOREFRONT   |   by Matt Wood   |   January 19, 2018


Tiger Salamander (Ambystoma tigrinum). Photo: Caitlin Smith/USFWS

A tiger salamander sits submerged in a pond, waiting for its lunch. It sees a fish swimming nearby and tracks its movements, closer and closer until—snap—it lunges and snatches its prey. What seems like pure reflex or instinct actually involves a lot of visual and mental calculations to track how fast the fish is moving and in which direction, so the salamander can predict accurately when and where to strike.

Some of this processing takes place in the retina, the layer of light-sensitive cells at the back of the eye. More calculations are done in the brain after the retina passes information down the optic nerve to parts of the brain that don’t even get to “see” the visual input. This takes time, too: there can be a 50- to 80-millisecond delay for processing in the retina, then more time to generate the right movements in response. Yet somehow it all works. Salamanders are really good at catching prey.
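For a sense of scale, here is a back-of-the-envelope sketch of how far prey drifts during that delay; the swim speed is an illustrative assumption, not a figure from the study:

```python
# Back-of-the-envelope: how far does prey move during the retina's delay?
delay_s = 0.080        # upper end of the 50- to 80-millisecond retinal delay
speed_m_per_s = 0.25   # assumed swim speed of a small fish (illustrative)

displacement_cm = speed_m_per_s * delay_s * 100  # meters -> centimeters
print(f"Prey drifts ~{displacement_cm:.0f} cm before processing finishes")
# -> Prey drifts ~2 cm before processing finishes
```

A strike aimed a couple of centimeters behind a small fish misses entirely, which is why the visual system has to predict motion rather than merely report it.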

A new study by neuroscientists at the University of Chicago suggests how this process works so well. Using a few simple rules about how to connect with each other, brain cells can learn to predict movements just as efficiently as a highly optimized software algorithm, based solely on the information passed to them by the retina.

Stephanie Palmer, PhD, UChicago

“The retina is not just a camera, it’s thinking. It compresses things, and when it does that it saves information that’s useful for predicting movement,” said Stephanie Palmer, assistant professor of organismal biology and anatomy and senior author of the new study, published in the Proceedings of the National Academy of Sciences.

“We showed that you can read out that complex code from the retina efficiently without adding many layers of processing. Instead, a few simple, biologically plausible rules can do it,” she said.

Palmer, Audrey Sederberg, a former UChicago postdoctoral scholar now at Georgia Tech, and Jason MacLean, associate professor of neurobiology at UChicago, used data from a 2015 study Palmer conducted when she was a postdoc at Princeton. In that study, Palmer dissected retinas from salamanders and fixed them to a plate where they could be exposed to different visual stimuli, either an animation of a moving bar or a video of a natural environment. The neurons in the retina are laid out in a grid and fire on and off as something moves across the field of vision. Palmer was able to record their output, or “spikes,” along with the coded prediction information.

Jason MacLean, PhD, UChicago

In the new study, the researchers took this data and fed it to a model of a small network of brain cells. Cells form networks among themselves as they process inputs based on simple rules that have long been observed in nature. For example, if cell A sends a message to cell B, the connection between them strengthens. But if cell B fires first, the connection weakens.
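The rule described here is known as spike-timing-dependent plasticity (STDP). Below is a minimal sketch of a pairwise STDP update; the learning rates and time constant are illustrative placeholders, not parameters from the paper.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Update the weight of one A -> B connection from a single spike pair.

    t_pre  -- spike time of presynaptic cell A (ms)
    t_post -- spike time of postsynaptic cell B (ms)
    If A fires before B, the connection strengthens; if B fires first,
    it weakens. Either effect fades exponentially with the time gap.
    """
    dt = t_post - t_pre
    if dt > 0:                          # A before B: strengthen
        w += a_plus * np.exp(-dt / tau_ms)
    else:                               # B before A (or simultaneous): weaken
        w -= a_minus * np.exp(dt / tau_ms)
    return max(w, 0.0)                  # keep the weight non-negative

# Example: A fires at 10 ms, B at 15 ms, so the A -> B weight grows.
w = stdp_update(0.5, t_pre=10.0, t_post=15.0)
```

Applied across the many spike pairs in the recorded retinal activity, local updates of this kind are what the study suggests can shape a network toward a predictive readout.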

Using real activity captured from the salamander retinas, the team showed that a network of cells based on these basic, biology-based learning rules could read out the prediction information without any additional feedback about whether or not a given prediction was accurate. Moreover, this readout was just as efficient as specially designed algorithms running on high-performance computer systems.

“We’re using the same learning tools that the brain has, instead of technical tools we could cook up from computer science,” Palmer said. “What’s surprising is that it all works and that you can drive yourself to the best solution that way.”

Next, the team wants to apply the findings from this study to longer time frames (think: not just the salamander’s final lunge to catch the fish, but tracking it from farther away and creeping into position first). Smaller circuits of cells built with these rules could be wired together into more powerful, recurrent networks to process more complex predictive information.

“Generic recurrent networks could be useful for linking together patterns in time, but it’d be particularly interesting if such circuits could be implemented through dendritic processing or some other known biological network,” Sederberg said.

And, as the new study suggests, taking cues from biology often leads to the most elegant result.

“Biology usually finds the optimal solution, which is really incredible, frankly,” MacLean said. “This just shows the capability of incorporating small aspects of what we know about the brain into a model to do very sophisticated computation. It hints at the potential of leveraging that in increasingly powerful ways as we learn more and more about the brain.”


Source:

Adapted (with modifications) from UChicago Medicine, THE FOREFRONT, by Matt Wood, posted on January 19, 2018.

Citation:

Sederberg AJ, MacLean JN, Palmer SE. Learning to make external sensory stimulus predictions using internal correlations in populations of neurons. Proc Natl Acad Sci U S A. 2018 Jan 18. [Epub ahead of print] (PubMed)


See also:

CBC Awards:

CBC Catalyst Award (2015):
PIs: Jason MacLean and Stephanie Palmer (UChicago) and David Schwab (NU) for the project:
▸ Reading the cortical code for natural motion

CBC BOARDS:

▸ CBC Catalyst Review Board (CRB)
Stephanie Palmer, Member (2017-present)