Why can’t I remember? Model may show how recall can fail


Physicists have a habit of creating serious mathematical models of stuff that is very far from physics, like biology or the human brain. These models are hilarious, but I'm still a sucker for them because of the hope they provide: maybe a simple mathematical model can explain the sexual choices of the disinterested panda? (And, yes, I know there is an XKCD about this very topic.) So a claim by a bunch of physicists to have found a fundamental law of memory recall was catnip to me.

To get an idea of how interesting their work is, it helps to understand the unwritten rules of “simple models for biology.” First, the model should be general enough that the predictions are vague and unsatisfying. Second, if you must compare with experimental data, do it on a logarithmic scale so that huge differences between theory and experiment at least look tiny. Third, if possible, make the mathematical model so abstract that it loses all connection to the actual biology.

By breaking all of these rules, a group of physicists has come up with a model for recall that seems to work. The model is based on a concrete idea of how recall works and, with essentially no fine-tuning, it provides a pretty good prediction for how well people will recall items from a list.

Put your model on the catwalk

It’s widely accepted that memories are encoded in networks of neurons. We know that humans have a remarkable capacity to remember events, words, people, and many other things. Yet some aspects of recall are terrible. I’ve been known to blank on the names of people I’ve known for a decade or more.

But we fail at even simpler challenges. Given a list of words, for instance, most people will not recall the entire list. In fact, a remarkable thing happens. Most people will start by recalling words from the list. At some point, they will loop back and recall a word they've already said. Every time this happens, there is a chance that the repeat will trigger another new word; alternatively, the loop can start to cycle over other words that have already been recalled. The more times a person loops back, the higher the chance that no new words will be recalled.

These observations led the researchers to a model built on similarity. Each memory is stored in a different but overlapping network of neurons. Recall jumps from a starting point to the item whose network overlaps most with that of the current item. The one exception: the process of recall suppresses a jump straight back to the item that was just recalled, even though that item would otherwise have the greatest overlap.
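To make that transition rule concrete, here is a minimal sketch in Python. This is my own illustration, not the paper's code: it assumes each list item is stored as a random sparse binary pattern over a pool of model neurons, with similarity measured as the overlap between patterns, and every function name and parameter value here is a hypothetical choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patterns(n_items, n_neurons=1000, sparsity=0.05):
    """Each item is stored by a random ~5% subset of the neuron pool,
    so any two items share some neurons (their networks overlap)."""
    return (rng.random((n_items, n_neurons)) < sparsity).astype(float)

def next_item(current, previous, overlap):
    """Jump to the item whose network overlaps most with the current one,
    excluding the item itself and suppressing the just-recalled item."""
    scores = overlap[current].copy()
    scores[current] = -np.inf       # an item can't follow itself
    if previous is not None:
        scores[previous] = -np.inf  # suppress the immediate jump back
    return int(np.argmax(scores))

patterns = make_patterns(16)        # a 16-word list
overlap = patterns @ patterns.T     # pairwise network overlaps
print(next_item(0, None, overlap))  # the first jump from item 0
```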

Using those simple rules, recall follows a trajectory that loops back on itself at some random interval. If recall were completely deterministic, though, the first loop back to a word that was already recalled would result in an endless repetition of the same few items. To prevent this, the model is probabilistic rather than deterministic: there is always a chance of jumping to a new word and breaking out of a loop.
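Here is a sketch of a full trajectory under those rules, with a probabilistic escape bolted on. The specific noise model (a small probability of hopping to a uniformly random item at each step) is my guess at a minimal implementation, not necessarily the paper's exact stochastic mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)

def recall_trajectory(overlap, start=0, p_jump=0.05, steps=200):
    """Follow the greedy overlap rule, but with probability p_jump
    hop to a random item instead: the escape hatch that keeps a
    loop from repeating forever."""
    current, previous = start, None
    recalled = {start}
    for _ in range(steps):
        if rng.random() < p_jump:
            nxt = int(rng.integers(overlap.shape[0]))
        else:
            scores = overlap[current].copy()
            scores[current] = -np.inf
            if previous is not None:
                scores[previous] = -np.inf
            nxt = int(np.argmax(scores))
        previous, current = current, nxt
        recalled.add(current)
    return recalled

# Same pattern construction as the previous sketch.
patterns = (rng.random((16, 1000)) < 0.05).astype(float)
overlap = patterns @ patterns.T
print(len(recall_trajectory(overlap)), "distinct items recalled")
```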

Boiling all this down, the researchers show that, given a list of items of a known length, the model predicts the average number of items that can be recalled. There is no fine-tuning here at all: if you take the model above and explore the consequences, you get a fixed relationship between list length and number of items recalled. That’s pretty amazing. But is it true?
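That prediction can be read off the model numerically. In the deterministic version of the walk, the pair (previous item, current item) fully determines what comes next, so recall has effectively ended once a pair repeats; counting the distinct items visited by that point, averaged over many random lists, gives the model's predicted recall for each list length. The paper derives this relationship analytically; the simulation below is just my way of extracting it from the model, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def recalled_items(L, n_neurons=1000, sparsity=0.05):
    """Deterministic greedy walk; stop when the (previous, current)
    state repeats, i.e. when the trajectory has locked into a cycle."""
    patterns = (rng.random((L, n_neurons)) < sparsity).astype(float)
    overlap = patterns @ patterns.T
    current, previous = 0, None
    recalled, seen = {0}, set()
    while (previous, current) not in seen:
        seen.add((previous, current))
        scores = overlap[current].copy()
        scores[current] = -np.inf
        if previous is not None:
            scores[previous] = -np.inf
        previous, current = current, int(np.argmax(scores))
        recalled.add(current)
    return len(recalled)

for L in (8, 16, 32, 64, 128):
    mean_R = np.mean([recalled_items(L) for _ in range(200)])
    print(f"list length {L:3d} -> mean recall ~ {mean_R:.1f}")
```

Whatever curve comes out, the point stands: nothing in it was tuned. The relationship between list length and mean recall is fixed by the rules themselves.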

Experiments are messy

At first sight, some experiments immediately contradict the researchers' model. For instance, if subjects are given more time to look at each word on the list, they will recall more words. Likewise, age and many other details influence recall.

But the researchers point out that their model assumes that every word in the list is stored in memory. In reality, people are distracted. They may miss words entirely or simply not store the words they see. That means that the model will always overestimate the number of words that can be recalled.

To account for this, the researchers performed a second set of experiments: recognition tests. Some subjects did a standard recall test. They were shown a list of words sequentially and asked to recall as many words as possible. Other subjects were shown a list of words sequentially, then shown words in random order and asked to choose which words were on the list.

The researchers then used their measured recognition data to set the total number of words memorized. With this limit, the agreement between their theoretical calculations and experiments is remarkable. The data seems to be independent of all parameters other than the length of the list, just as the model predicts.
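In other words (this is my hedged reading of the procedure), the recognition data replaces the nominal list length with the number of words actually stored, and the model is evaluated there. A sketch, reusing `recalled_items` from the simulation above; the memorization fraction is an illustrative number, not data from the paper:

```python
import numpy as np

# Reuses recalled_items() from the earlier simulation sketch.
# p_stored is an illustrative memorization fraction, not data from the paper.

L, p_stored = 32, 0.7
M = round(L * p_stored)  # effective number of words actually memorized
mean_R = np.mean([recalled_items(M) for _ in range(200)])
print(f"presented {L} words, memorized ~{M}, predicted recall ~ {mean_R:.1f}")
```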

The result also seems to tell us that the variation in experimental data observed in previous experiments is not in recall but in memorization.

A delicate point

So what does the model tell us? It may provide some insight into the actual mechanisms of recall. It may also point to how we can construct and predict the behavior of neural-network-based memories. But (and maybe this is my failure of imagination) I cannot see how you would actually use the model beyond what it already tells us.

Physical Review Letters, 2020. DOI: 10.1103/PhysRevLett.124.018101