Translated by Rumia Bose
You walk down a street and see someone approaching; you look at their face and in a fraction of a second you know who it is. It hardly matters whether you see them in profile or full face, and we can do this with thousands of faces: a clever trick of our brains. How do we recognise faces?
The Jennifer Aniston cell?
A few years ago there was a great commotion: the neuroscientist Quiroga had found a neuron in the human brain that fired only when the owner of that brain saw a picture of Jennifer Aniston. (How can you see neurons firing in a patient's brain? These were patients with epilepsy undergoing a brain operation, during which temporary electrodes were placed in the brain so that the firing of individual neurons could be observed.) In other patients the researchers found neurons that fired only when the patient saw photos of Bill Clinton, Mother Teresa or Saddam Hussein. All this caused less of a fuss, though I don't know why. Might there be a specific neuron in your brain for recognising each of these people? And how would such a neuron work? Is it like drawing from a pack of cards: you draw the card "Jennifer Aniston", or the card "Bill Clinton", or the card "Mother Teresa"? Let me spare you that illusion: there is no special neuron dedicated to a particular person or to recognising that person.
Facial recognition in the temporal lobe
For years we have known that certain parts of the cortex of the brain – so-called face patches – play a role in the identification of faces, both in humans and in certain species of monkeys, such as macaques. Doris Tsao and her colleague have now worked out how this takes place (see Chang and Tsao, 2017). Two sorts of characteristics play an important role. One sort has to do with facial features, such as the distance between the eyes, the height of the forehead and the form of the chin. The other includes characteristics such as skin and hair colour and other identifiers that do not concern the physical form of the face. These two sorts of characteristics are analysed in different patches. How they are analysed is an extremely complex process, and very difficult to conceptualise.
Seeing colours or identifying faces
Perhaps it will make things clearer if I start with something easier. We can visually distinguish a wide range of colours and discern subtle nuances. For this we have only three sorts of "cones" in our eyes, which register light of different wavelengths: red, green and blue. The signals arising from these three sorts of cones are combined in our brain. Red, green and blue are, as it were, three dimensions of light along x, y and z axes. Each shade of colour is a point in that three-dimensional space, the cube in the figure.
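The idea of a colour as a point in a three-dimensional space can be sketched in a few lines of code. This is only an illustration of the geometry, not of how the brain actually combines cone signals; the colour values are invented.

```python
# A colour as a point in the 3-dimensional space spanned by the
# red, green and blue cone signals (each value scaled 0..1).
def colour(r, g, b):
    return (r, g, b)

def distance(c1, c2):
    # Euclidean distance: how far apart two colours lie in the cube
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

orange = colour(1.0, 0.5, 0.0)
red    = colour(1.0, 0.0, 0.0)
blue   = colour(0.0, 0.0, 1.0)

# Orange lies closer to red than to blue in the colour cube
print(distance(orange, red) < distance(orange, blue))  # True
```

Similar colours end up as nearby points, which is why three numbers are enough to capture subtle nuances.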
Chang and Tsao used fifty dimensions of facial features. Each face can be considered a separate point in this 50-dimensional space, with 50 axes instead of three. That is somewhat different from a colour palette with three dimensions of colour. One problem that arises is that we cannot mentally picture a fifty-dimensional space. What's more, Chang and Tsao's research is still a simplified model of the actual situation: it was done with photographs of faces. Those faces were static, had a neutral expression, no other faces or objects were visible, and the faces were clearly visible in good light. In other words, to recognise faces in the real world our brains have to measure many more variables and process many more dimensions.
No Jennifer Aniston cell?
Measuring facial features along fifty dimensions is an efficient way to distinguish faces from each other. Each neuron involved in facial recognition measures just one dimension; characteristics on the other dimensions have no influence on that neuron's activity. And this immediately explains why one particular neuron can fire when one looks at Jennifer Aniston and not when seeing someone else. Jennifer Aniston has, for example, a sharp chin with a distinctive form. If the neuron whose activity the researchers are measuring is sensitive to this feature, it will fire when one sees (an image of) Jennifer Aniston. If none of the other people shown have such a chin, the neuron will fire only when Jennifer Aniston is shown. The picture becomes even clearer because Chang and Tsao found that neurons are sensitive to more complex facial features than the form of a chin. They describe, by way of example, that the first of the fifty dimensions is a combination of the form of the hairline, the width of the face and the placement of the eyes.
Reconstructing a face from firing patterns
Facial features are analysed and distinguished in the aforementioned patches in the temporal lobe. Chang and Tsao demonstrate what role each of 200 neurons plays and how they interact to discern the distinguishing features of 200 different faces. What is special is that they hereby – for the first time – establish a direct and detailed relation between such a complex cognitive function, the recognition of faces, and the neuronal firing patterns responsible for it. They found a unique firing pattern for each face. But they could also reverse the process: they showed a picture of a face to a monkey, measured the activity of 200 neurons, and reconstructed the face on the basis of those measurements. The figure shows four examples of how well that worked.
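Why reversing the process works can be sketched mathematically: if each neuron's firing is roughly a linear projection of the face's feature vector, then the firing rates of enough neurons form a solvable system of linear equations. The sketch below uses random invented axes and a noise-free linear model, which is far simpler than real neural data.

```python
# Reconstruction sketch: recover a 50-dimensional face vector from
# the firing rates of 200 linearly tuned neurons by least squares.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_dims = 200, 50

# Each row: the axis in face space that one neuron is tuned to
axes = rng.normal(size=(n_neurons, n_dims))

true_face = rng.normal(size=n_dims)
rates = axes @ true_face              # firing of all 200 neurons

# Least-squares estimate of the face from the firing pattern
reconstructed, *_ = np.linalg.lstsq(axes, rates, rcond=None)

print(np.allclose(reconstructed, true_face))  # True
```

With 200 neurons and only 50 dimensions the system is overdetermined, which is also why real, noisy firing rates can still yield a good reconstruction.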
A giant step forward?
In conclusion: in general I prefer not to confine my discussion to a single scientific publication. Scientists, after all, have a saying: one study is no study. Which means: other experimenters will have to reproduce the result and check whether the principles are sound. This is especially true for completely new principles. But I do believe this particular study has a sound basis and is a giant step forward. It might even be the discovery of a general working principle of the brain. Only time will tell whether I have let my enthusiasm run ahead of me.
References
Chang, L. and D. Y. Tsao (2017). "The Code for Facial Identity in the Primate Brain." Cell 169(6): 1013-1028.e14.
Tsao, D. (2014). “The Macaque Face Patch System: A Window into Object Representation.” Cold Spring Harbor symposia on quantitative biology 79: 109-114.
Quiroga, R. Q., I. Fried, et al. (2013). “Brain cells for grandmother.” Scientific American 308(2): 30-35.
Freiwald, W. A. and D. Tsao (2010). “Functional Compartmentalization and Viewpoint Generalization Within the Macaque Face-Processing System.” Science 330(6005): 845-851.
Kanwisher, N. (2010). “Functional specificity in the human brain: A window into the functional architecture of the mind.” Proceedings of the National Academy of Sciences 107(25): 11163-11170.
Kanwisher, N. and G. Yovel (2006). “The fusiform face area: a cortical region specialized for the perception of faces.” Philosophical transactions of the Royal Society of London. Series B, Biological sciences 361(1476): 2109-2128.
Quiroga, R. Q., L. Reddy, et al. (2005). “Invariant visual representation by single neurons in the human brain.” Nature 435(7045): 1102-1107.