Increased learning and memory through neurogenesis has a likely upper limit: a Q&A with Neurocrates

This post is a question-and-answer conversation with an imaginary scholar named Neurocrates.

It uses the Neurotic Method.

Stephen: Hello, Neurocrates.

Neurocrates: You named me that just to make a pun about methodology, didn’t you?

S: Maybe. Thank you for visiting here today. I understand that you study neurogenesis and its effects on cognitive performance.

N: Yes. Take what I say with a grain of salt.

S: Ok. Let’s start with the essentials. What is neurogenesis?

N: Neurogenesis is the creation of new neurons.

The brain has stem cells, just like other parts of the body, and new neurons come from neural stem cells. A neural stem cell produces a handful of progenitor cells, which then divide many times to amplify the population of new neurons. In the adult brain, these new neurons end up mainly in two places: the olfactory bulb (which is responsible for your sense of smell) and the dentate gyrus of the hippocampus (which is involved in learning and memory).

Neurogenesis is interesting to researchers because it has been shown to cause significant increases in learning, memory, and cognitive performance. When we increase neurogenesis in normal rats, they become much better at solving puzzles and remembering things. 1

S: Okay, so we have a continual stream of new neurons that is generated from neural stem cells. What happens to these neurons as they mature and integrate into the existing neural network?

N: As these new neurons mature and begin to integrate into the network of surrounding older cells, about 80% of them are “pruned” and die off through programmed cell death. This pruning is how a functional network is established – the neurons that survive are the ones that receive the most relevant input and are activated most often.

You can compare the formation of the brain’s functional network to sculpting a statue from a block of stone: the network is what is left after a lot of raw material has been carved away. The raw material that remains – the neurons, in this case – is what retains meaning. The function of the network is to parse information meaningfully. When neurons receive input, they each perform a small piece of pattern detection, like finding edges. The most salient information is chiseled into the neural network after lots of less relevant data has passed through.

So the more neurons you have, the more features you can represent. I’ll give you a simple example. Here are three pictures of a toy dinosaur that are 50, 150, and 250 pixels across. You can see how pictures with more pixels can represent more details of the toy dinosaur.

[Three images: the toy dinosaur at 50, 150, and 250 pixels across]
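The pixel analogy can be made quantitative with a back-of-the-envelope sketch. If each memory is coded by a small set of simultaneously active neurons, the number of distinct sparse codes a population can hold grows combinatorially with its size. (The numbers below are purely illustrative, not biological measurements.)

```python
from math import comb

# Illustrative only: if each memory is coded by k active neurons out of n,
# the number of distinct sparse patterns is "n choose k", which grows
# explosively as the population size n grows.
k = 10  # active neurons per pattern (hypothetical)
for n in (50, 150, 250):
    print(f"n={n}: {comb(n, k):,} possible sparse codes")
```

Just as more pixels let the picture represent finer detail, more neurons give the network vastly more distinct codes to assign to distinct memories.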

S: The dentate gyrus in the hippocampus is one of the densest cell layers in the brain, yet new neurons are continually being added to it. How does it allow new representations to integrate into a network that is already tightly packed?

N: Here’s one nifty thing about neurons:

After they have matured and incorporated into the network (about four weeks after they are born), they are more excitable for a short period of time. If new data entering the network isn’t precisely represented by the old neurons, these younger, more excitable neurons are more likely to come to represent it. They will be recruited more often than their neighboring cells, and will form the basis for a unique sparse representation of the new data.

Therefore, at the behavioral level, there is a rolling critical period: at any moment, a small set of new cells in the hippocampus has ‘come of age’ and is ready to receive representations from the outside world.

This also explains part of how we have temporal segregation of memory: I can roughly distinguish events that happened yesterday from events that happened two weeks ago, partly because different cohorts of neurons ‘came of age’ at those different times and stored the episodic information in distinct places.
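This recruitment dynamic can be sketched as a toy winner-take-all competition: give a small cohort of ‘young’ neurons an excitability bonus when neurons compete to respond to a novel input, and they win far more often than their share of the population. (Every number here is an assumption for illustration, not a measured value.)

```python
import random

random.seed(0)

# Toy winner-take-all sketch (assumed parameters, not a biological model):
# each neuron gets a random match score for an incoming novel input; the
# few recently matured neurons get an excitability bonus, so they tend to
# win the competition to represent that input.
N_OLD, N_NEW = 95, 5
BONUS = 0.3  # extra excitability of young neurons (hypothetical)

def winner():
    scores = [random.random() for _ in range(N_OLD)]
    scores += [random.random() + BONUS for _ in range(N_NEW)]
    return max(range(len(scores)), key=scores.__getitem__)

trials = 10_000
wins_by_new = sum(winner() >= N_OLD for _ in range(trials))
print(f"young neurons won {wins_by_new / trials:.0%} of novel inputs "
      f"while being only {N_NEW}% of the population")
```

The point of the sketch: even a modest, temporary excitability bonus is enough to bias novel representations toward the newest cells.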

S: What determines which neuron the new data gets sent to in the first place? What is the ‘routing function’ between sensory input and new neurons?

N: Each neuron detects a feature, so the more neurons you have, the finer the resolution at which your network can detect things.

S: How do you test the relationship between neurogenesis on the biological level and “pattern separation” at the behavioral level?

N: One of the most common memory tasks in rodent studies is contextual spatial navigation. Rats are put into environments with slightly different decorations, and need to remember which hallway the food is stored in. As neurogenesis decreases, the rodents can distinguish contexts at less fine-grained resolution.

But there are still unanswered questions. It could be that new neurons increase the overall plasticity of the network. Or it could be that for any hippocampal task, new neurons increase the “resolution” of pattern separation or some other output – but that is hard to establish.

S: Are these new neurons being generated at a constant rate, or is there something that makes them generate more or less?

N: Two of the main factors that influence neurogenesis are the level of stress in the environment, and the timing of development in the organism. Stressful environments cause a decrease in neurogenesis. Neurogenesis is especially high during childhood and decreases with age.

S: Cool. How does neurogenesis contribute to learning and forming new memories?

N: The clearest mechanism by which new neurons help is enabling better “pattern separation”. As more neurons are integrated into the functional network, the network can store higher-resolution representations of new kinds of data. This pattern separation occurs most prominently in the dentate gyrus of the hippocampus.
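One way to see why more neurons means better pattern separation is with a toy coding sketch: assign each of two similar inputs a random sparse code of k active neurons out of n, and measure how much the two codes overlap. With a larger population, overlap shrinks (roughly as k/n), so the two memories are easier to tell apart. (The coding scheme and numbers are assumptions for illustration.)

```python
import random

random.seed(1)

# Toy pattern-separation sketch (assumed coding scheme): each input gets
# a random sparse code of k active neurons out of n. Mean overlap between
# two codes shrinks as the population n grows, so similar inputs get
# more distinguishable representations.
def mean_overlap(n, k=20, trials=2000):
    total = 0.0
    for _ in range(trials):
        a = set(random.sample(range(n), k))
        b = set(random.sample(range(n), k))
        total += len(a & b) / k
    return total / trials

for n in (100, 400, 1600):
    print(f"n={n:5d}  mean overlap between two codes: {mean_overlap(n):.1%}")
```
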


S: Now for the key question: It seems like increasing the rate at which these new neurons form could increase the resolution of new information that could be represented in the network. More new information could be stored in the same amount of time. Is that true?

In other words, would simply linearly adding neurons increase learning?

N: Yes, but there is likely to be an upper limit. The rate of learning might plateau as you increase neurogenesis past a certain point. The reason is that a large number of new neurons might not fully integrate into the functional network. New neurons need time to form sparse representations and connect with existing neurons.
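A minimal sketch of that plateau, with entirely made-up numbers: suppose the network can only absorb a fixed number of new neurons per week (limited synaptic partners, limited activity to drive integration). Then pushing the birth rate past that capacity adds no functional neurons.

```python
# Toy plateau sketch (all numbers assumed): new neurons need limited
# "integration slots" to join the network. If only CAPACITY neurons can
# integrate per week, raising the birth rate past that capacity stops
# adding functional neurons.
CAPACITY = 100  # integrations per week the network can absorb (hypothetical)
WEEKS = 52

def integrated_per_year(birth_rate):
    total = 0
    for _ in range(WEEKS):
        total += min(birth_rate, CAPACITY)  # excess newborns fail to integrate
    return total

for rate in (50, 100, 200, 500):
    print(f"birth rate {rate:3d}/week -> {integrated_per_year(rate)} integrated/year")
```

With these placeholder numbers, yearly gains stop growing once the birth rate exceeds the 100-per-week capacity – the learning-rate curve would flatten in the same way.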

It may be possible to overcome this effect by broadening the locations over which the new neurons are added (instead of adding them only to the hippocampus). It may also be possible to implant neural stem cells in other parts of the cortex to seed growth there.


S: So you’re suggesting that current models of how neurons integrate into the hippocampal network predict an upper limit to the gains from adding new neurons to the hippocampus. Are there ways to overcome that upper limit? Or might we realize significant gains before reaching it, enough to make it worthwhile to pursue?

N: In either case, one outstanding research question is the relationship between cognition and the size and growth rate of both the hippocampus and the cortex. Significant gains could be realized through linear increases in the growth of new neurons. As a very loose proxy: the human brain is 4.8 times the size expected for a hypothetical monkey of the same body weight, and the human neocortex is 35% larger than predicted for a primate with as large a brain. Depending on the test, humans can arguably be much more than 4.8 times smarter than a monkey. 2

But that is a good question. The exact point of the upper limit of neurogenesis has never been tested. There are some things you could do if you wanted to research this more:

First, we could model this computationally! This would be an interesting research question: based on a computational model of the hippocampus, what is the upper limit on the rate of neurogenesis you can sustain before it starts to interfere with existing representations? What happens when the rate of neurogenesis is 2x, 5x, or 20x the normal rate? Do the new neurons interfere with sparse coding, or does the system scale? Does the system require higher amounts of input (as could be achieved through sensory extension) in order to scale, or can it sustainably increase the learning rate while the amount of incoming data stays constant?

If you like digging through literature, here is a literature research question: find experiments that increase neurogenesis in the hippocampus, and record the corresponding % increase in performance on a memory task. Then plot the increase in neurogenesis on the X-axis against the increase in performance on the Y-axis. You would then ask how to extrapolate the curve: would it plateau, stay linear, or curve upwards? Answering that well also depends on an accurate model of how the hippocampus works.
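The extrapolation step of that exercise can be sketched in a few lines: fit both a line and a saturating curve to the plotted points and compare which explains them better. The data below are placeholder numbers invented for the sketch, NOT real experimental values.

```python
# Entirely made-up placeholder data (NOT real experimental results):
# x = fold-increase in neurogenesis, y = % improvement on a memory task.
xs = [1.0, 1.5, 2.0, 3.0]
ys = [0.0, 8.0, 13.0, 18.0]

def sse_linear(m, c):
    return sum((m * x + c - y) ** 2 for x, y in zip(xs, ys))

def sse_saturating(a, b):
    # Saturating curve: zero improvement at the normal rate (x = 1),
    # approaching a ceiling of `a` as x grows.
    return sum((a * (x - 1) / (b + (x - 1)) - y) ** 2 for x, y in zip(xs, ys))

# A coarse grid search is plenty for a sketch.
lin = min(((m / 10, c / 10) for m in range(0, 200) for c in range(-100, 100)),
          key=lambda p: sse_linear(*p))
sat = min(((a / 2, b / 2) for a in range(1, 120) for b in range(1, 60)),
          key=lambda p: sse_saturating(*p))

print("linear fit error:    ", round(sse_linear(*lin), 2))
print("saturating fit error:", round(sse_saturating(*sat), 2))
```

If the saturating curve fits markedly better across real studies, that would be evidence for the plateau; a persistently linear fit would argue against it.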

If you like playing in the lab, here is a biological experiment: What are the upper limits to the current methods for increasing neurogenesis? Is there a way to produce 5x more than has been done before?

Finally, here’s one more question, which I’m not sure how to answer: if simply tweaking neurogenesis had big payoffs, wouldn’t evolution have already done it? Maybe not – perhaps the in-between steps to higher neurogenesis weren’t viable. It could be that the right intervention can’t be produced through genetic tweaks, and artificial interventions are needed. Also, humans haven’t had that much time to evolve, so the possibilities within our genome haven’t been fully explored.

S: Thanks for the feedback, Neurocrates!

N: No prob. May you spawn many new feature-detecting goo balls.



  • It is clear that adding more neurons to the hippocampus increases learning and memory. It is possible that the increase in learning and memory would asymptote after some point.
  • This asymptote is predicted by current models of how new neurons integrate into the existing network: neurons form sparse representations, and there is not a neat one-to-one mapping between neurons and concepts.
  • However, there may still be significant gains realized before hitting this upper limit.