Abstract
Background: Human navigation of social interactions relies on the processing of emotion on faces. This meta-analysis aimed to produce an updated brain atlas of emotional face processing from whole-brain studies based on a single emotional face–viewing paradigm (PROSPERO CRD42022251548).
Methods: We conducted a systematic literature search of Embase, MEDLINE and PsycINFO from May 2008 to October 2021. We used seed-based d mapping with permutation of subject images to conduct a quantitative meta-analysis of functional neuroimaging contrasts between emotional (e.g., angry, happy) and neutral faces. We conducted agglomerative hierarchical clustering of meta-analytic map contrasts of emotional faces relative to neutral faces. We investigated lateralization of emotional face processing.
Results: From 5549 studies identified, 55 data sets (1489 healthy participants) met our inclusion criteria. Relative to neutral faces, we found extensive activation clusters by fearful faces in the right inferior temporal gyrus, right fusiform area, left putamen and amygdala, right parahippocampal gyrus and cerebellum; we found smaller activation clusters by angry faces in the right cerebellum and right middle temporal gyrus (MTG) and by disgusted faces in the left MTG. Activation by happy and sad faces did not reach statistical significance. Clustering analyses showed similar activation patterns for fearful and angry faces; activation patterns of happy and sad faces showed the least correlation with other emotional faces. Emotional face processing was predominantly left-lateralized in the amygdala and anterior insula, and right-lateralized in the ventromedial prefrontal cortex.
Limitations: Reliance on discretized effect sizes based on peak coordinate location instead of statistical brain maps, and the varying level of statistical threshold reporting from original studies, could lead to underdetection of smaller clusters of activation.
Conclusion: Processing of emotional faces appeared to be oriented toward identifying threats on faces, from highest (i.e., angry or fearful faces) to lowest level (i.e., happy or sad faces), with a more complex lateralization pattern than previously theorized. Emotional faces may be processed in latent grouping but organized by threat content rather than emotional valence.
Introduction
Human beings are a socially complex species.1 Our ability to navigate social interactions is dependent, in part, on the effective processing of emotions on faces. Such processing allows humans to recognize the affective states of others and enables appropriate cognitive and behavioural adjustment during interpersonal exchanges.2 Emotional face processing follows a slow developmental course, but the ability to detect facial emotion categories is already present in young infants.3,4 Discrete categories of facial emotions are also observed across cultures,5,6 although there are cross-cultural differences in the categorization and interpretation of these emotions.7
These observations have led to several theorizations of how emotion, in general, is processed in the brain. The classical locationist view of emotional perception assumes that there exists a set of discrete and universal emotional categories, and that each emotional category is associated with distinctive neural signatures.5 In contrast, the constructionist view proposes that all emotions are processed by a common underlying brain network that becomes psychologically attributed to a different and discrete range of emotions based on previous experiences.8 Others have attempted to bridge these fundamentally opposing views by suggesting that the processing of emotions occurs in latent groupings; for instance, according to a broad valence polarity where negative emotions (i.e., fear, anger, disgust and sadness) are processed distinctly from positive ones.9
It is unclear which of these theories best describes the brain processing of emotional faces, given that most theorization has been based on meta-analytic findings of brain activation during a variety of emotional processes (e.g., experiencing emotional scenes and perceiving emotions on faces). These potentially different processes may indeed rely on overlapping brain regions such as the limbic system and insular cortices.8,10,11 However, each emotional face category may also engage specific cortical activation in visual, temporoparietal, prefrontal (including the inferior and orbitofrontal areas) and cerebellar cortices, as has been observed during the viewing of emotional relative to neutral faces.12–14
Among the meta-analyses that have contrasted discrete emotion relative to neutral faces, some findings suggested commonality and specific brain activation across the emotional categories. In our previous meta-analysis involving 105 unique whole-brain and region-of-interest studies, amygdala activation was reported during the processing of happy, fearful and sad faces but not angry or disgusted faces, which selectively activated the insula.12 A recent meta-analysis including 141 studies found that left or bilateral amygdala activation was involved in the processing of happy, sad, angry and fearful, but not disgusted faces.15 Angry faces — which activated the left pallidum and right fusiform face areas (FFA), and the right posterior middle temporal and occipital gyri — and fearful faces — which activated similar areas including the bilateral pallidum, left inferior frontal gyrus (IFG), left FFA and bilateral occipital areas — appeared to activate more brain regions than any other emotions; disgusted faces activated only bilateral occipital face areas.15 However, this meta-analysis did not control for activation related to cognitive functions nonspecific to face processing since it included studies using any paradigms involving emotional faces.
The present meta-analysis sought to contribute to the understanding of how discrete emotional face categories are processed in the brain, relative to neutral faces. Unlike previous approaches,13–15 our meta-analysis focused on a single paradigm involving passive or active viewing of emotional faces and excluded tasks involving additional cognitive functions — such as oddball target detection, inhibition or cognitive interference, and mnestic or memorization tasks — to control for nonspecific cognitive processes other than those for emotional faces. To update and extend our previous approach,12 we included only whole-brain studies to produce an unbiased location of effects. Furthermore, we used a hierarchical clustering analysis of the meta-analytic maps16,17 to explore how closely related the processing of one emotional face category was to another.
Methods
Search strategy
We conducted a comprehensive systematic literature search of the Embase, MEDLINE and PsycINFO databases using the Ovid platform, combining key terms related to emotional faces (i.e., “facial emotion*” or “facial affect*” or “affective face*” or “emotional face*” or [happy or happiness or sad or sadness or fear* or anger or angry or disgust* or neutral] ADJ3 face*) and neuroimaging (i.e., “functional magnetic resonance imaging” or “neuroimaging” or fMRI). We searched literature published from 2008, when the last meta-analysis done in a similar manner on this topic was conducted,12 to Oct. 5, 2021. We conducted a manual reference search through previous meta-analyses to find articles meeting our criteria before 2008. The protocol for this meta-analysis was preregistered in PROSPERO (CRD42022251548). This systematic review and meta-analysis followed the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) 2020.18
Eligibility criteria
Included studies reported functional magnetic resonance imaging (fMRI) data during emotional face viewing with whole-brain coverage, which is unbiased toward specific brain anatomic regions.9,13,15 We included only peer-reviewed empirical research studies involving at least 12 healthy participants to reduce the probability of false positive findings, as applied in previous meta-analyses.9 Included studies also reported 1 of the acceptable fMRI contrasts between emotional faces (i.e., happy, sad, angry, fearful or disgusted) with either a neutral face as primary interest, or a baseline fixation cross as secondary interest in this study. Following our previous meta-analytic approach,12 we included only studies using a single paradigm assessing emotional processing during the passive or active viewing of emotional human faces (i.e., excluding studies that used tasks involving additional cognitive functions such as oddball target detection, inhibition or cognitive interference, and mnestic tasks or memorization). We also excluded studies if there were no brain activation peak coordinates or if their samples overlapped with other publications, in which case we contacted the study authors to help decide which study sample should be included in the meta-analysis. Finally, we excluded reviews, meta-analyses and non-peer-reviewed publications.
Study selection
The study selection was assisted by the software EndNote 20, starting with a semiautomated removal of duplicates, conference or dissertation abstracts and foreign-language publications before screening. We then screened studies in 2 stages. Two independent screeners (L.F. and S.L.) first screened by title and abstract. Three pairs of independent screeners (L.F. and V.O., F.G. and S.L., and E.T. and K.W.) then completed full-text screening. In both stages, screeners discussed rating discrepancies to reach a consensus; if necessary, a third researcher (P.F.P. or S.D.) helped to reach consensus.
Data extraction
Data extracted from the studies included sample sizes and characteristics (i.e., mean age, handedness and percentage of female participants); MRI parameters (i.e., field strength, repetition time, sequence duration, slice thickness, interslice gap, whether a whole-brain scan took place); task parameters (i.e., trial numbers or lengths, interval duration between trials and task design [block v. event-related]); neuroimaging analysis information such as preprocessing software (e.g., FSL, SPM), the use of slice timing correction, motion correction and stereotactic space (e.g., MNI, Brett’s mni2tal); and registration methods to the stereotactic space (e.g., nonlinear based on T1), high-pass filtering, smoothing, covariates at first level, numbers of and reasons for rejected scans, group-level statistics and the statistical threshold. Finally, for each contrast, we extracted the coordinates and effect size statistics (e.g., t or z scores) from the peaks of clusters of statistically significant voxels or from statistical parametric maps, where available. Pairs of researchers (L.F. and V.O., F.G. and S.L., and E.T. and K.W.) extracted data. They discussed disagreements to reach a consensus, and discussed with a third researcher (P.F.P. or S.D.), if necessary.
Quality assessment
We evaluated each study using a quality assessment tool adapted from a reporting checklist for fMRI studies.19 Our checklist contained 8 items (Appendix 1, available at https://www.jpn.ca/lookup/doi/10.1503/jpn.230065/tab-related-content) — each given a score of 1 (i.e., clear reporting), 0.5 (i.e., possible reporting bias) or 0 (i.e., evidence for reporting bias) — with a maximum score of 8. A total score of 6.5 or higher was given a quality rating of good, 3.5–6.0 was fair and 3.0 or lower was poor. Pairs of researchers completed assessments independently (L.F. and V.O., F.G. and S.L., and E.T. and K.W.); they discussed their rating disagreements to reach a consensus, and discussed with a third researcher (P.F.P. or S.D.), if necessary. We excluded studies with poor ratings from the meta-analysis.
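The scoring rule above can be expressed as a short function. This is an illustrative sketch only (the function name and input format are our own, not part of the published protocol):

```python
def quality_rating(item_scores):
    """Map 8 per-item scores (each 1, 0.5 or 0) to a good/fair/poor rating.

    Thresholds follow the rule described in the text:
    total >= 6.5 -> good; 3.5-6.0 -> fair; <= 3.0 -> poor.
    (Totals between 6.0 and 6.5 cannot occur with 0.5-point increments.)
    """
    total = sum(item_scores)
    if total >= 6.5:
        return "good"
    if total >= 3.5:
        return "fair"
    return "poor"
```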
Neuroimaging data synthesis
We took a meta-analytic data synthesis approach using seed-based d mapping with permutation of subject images (SDM-PSI, www.sdmproject.com). SDM-PSI enables the synthesis of both discrete peaks and effect sizes of clusters of brain activation and continuous statistical parametric maps. Briefly, SDM-PSI creates a map of voxel-wise Hedges g for each study and then applies a meta-analytic random-effects model. The tool allows studies with incomplete data (e.g., reporting coordinates but not effect sizes) to be included in the meta-analysis by using multiple imputation to impute Hedges g maps, which are subsequently combined using Rubin’s rules. The present SDM-PSI version applies a threshold-free cluster enhancement correction and participant-based permutation tests for multiple comparisons.
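To illustrate the 2 pooling steps named above — Rubin’s rules across imputations and an inverse-variance random-effects mean across studies — a minimal Python sketch follows. This is not the SDM-PSI implementation (which operates voxel-wise on whole maps and estimates between-study variance internally); function names, inputs and the externally supplied tau² are our own assumptions:

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Pool m multiply imputed estimates (e.g., of one voxel's Hedges g).

    Returns the pooled point estimate and Rubin's total variance:
    within-imputation variance plus (1 + 1/m) times between-imputation variance.
    """
    m = len(estimates)
    q_bar = float(np.mean(estimates))        # pooled point estimate
    u_bar = float(np.mean(variances))        # mean within-imputation variance
    b = float(np.var(estimates, ddof=1))     # between-imputation variance
    return q_bar, u_bar + (1.0 + 1.0 / m) * b

def random_effects_mean(g, v, tau2):
    """Inverse-variance weighted random-effects mean across studies.

    g: per-study effect sizes; v: per-study variances; tau2: assumed
    between-study variance (supplied here for simplicity).
    """
    w = 1.0 / (np.asarray(v, dtype=float) + tau2)
    return float(np.sum(w * np.asarray(g, dtype=float)) / np.sum(w))
```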
We conducted the main meta-analyses for the contrasts between each category of emotion and neutral faces alone, considered an ideal index for tapping emotional processing. Given the frequent use of a fixation cross as a baseline condition in studies of emotional face processing, we also conducted meta-analyses for contrasts between each emotion and neutral faces or a fixation cross in combination. These analyses primarily used a p value corrected for family-wise error (FWE) rate of less than 0.05, but we also indicated clusters surviving pFWE < 0.01. Furthermore, if no clusters survived the FWE correction, we explored clusters at an uncorrected threshold of p < 0.005 to provide readers with a range of statistical significance.
To explore the processing closeness among different emotional categories, we conducted agglomerative hierarchical clustering of the meta-analytic maps contrasting a given emotion with neutral faces. First, we calculated pairwise Pearson correlations (r) between unthresholded effect size maps across all voxels within the SDM mask, which has been shown to best capture the image dissimilarity among SDM meta-analyses.17 We subsequently calculated the dissimilarity matrix (1−r values) and applied agglomerative hierarchical clustering in R using the average linkage method.17 We used bootstrapping to assess the stability of these clusters. Specifically, we used the pvclust package for R, which resampled the voxels 1000 times, conducted the cluster analysis for each resample and counted how many of these resamples showed the original clusters.20
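The clustering procedure described above was run in R with pvclust; the core steps (pairwise Pearson r across voxels, dissimilarity 1 − r, average-linkage agglomeration) can be sketched in Python as follows, with hypothetical function and variable names and without the bootstrap stability step:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

def cluster_emotion_maps(maps):
    """maps: dict of emotion -> 1-D array of voxel-wise effect sizes
    (all maps assumed to share the same mask and voxel order).

    Returns the emotion names and an average-linkage tree built on the
    1 - Pearson r dissimilarity between unthresholded maps.
    """
    names = list(maps)
    n = len(names)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            r = np.corrcoef(maps[names[i]], maps[names[j]])[0, 1]
            d[i, j] = d[j, i] = 1.0 - r   # dissimilarity = 1 - Pearson r
    # condense the symmetric matrix and agglomerate with average linkage
    return names, linkage(squareform(d, checks=False), method="average")
```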
We conducted further exploratory analyses when sufficient data were available. First, we conducted a meta-analysis combining all negative emotions (i.e., angry, fearful, disgusted and sad faces)9 in contrast with neutral faces. We also conducted a meta-analysis combining only threatening faces (i.e., angry, fearful) against neutral faces. Finally, we conducted comparative meta-analyses pairwise between each emotion category contrasted with neutral faces, for instance, between angry (v. neutral) and happy (v. neutral) faces.
Following previous meta-analyses,15,21 we investigated the consistency of hemispheric lateralization of emotion-processing regions across neuroimaging studies. Lateralization was indicated by a laterality index, computed for each emotion using the method outlined by Xu and colleagues.15 To explore the replicability of previous findings, we extracted the average Hedges g values from the right and left amygdala, anterior insula and ventromedial prefrontal cortex (vmPFC), corresponding to past meta-analyses.15,21 Extracted regions followed the automated anatomic labelling template.22
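One commonly used formulation of a laterality index contrasts the mean left- and right-hemisphere effect sizes; the sketch below illustrates that form under our own assumptions and is not a reproduction of the exact method of Xu and colleagues:

```python
def laterality_index(left_g, right_g):
    """Illustrative laterality index: (L - R) / (|L| + |R|).

    left_g, right_g: mean effect sizes (e.g., Hedges g) extracted from the
    left and right homologues of a region. Positive values indicate
    left-lateralized activation; negative values, right-lateralized.
    """
    denom = abs(left_g) + abs(right_g)
    return (left_g - right_g) / denom if denom else 0.0
```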
Finally, within each emotional category, we conducted sensitivity meta-analyses with studies involving only adult participants, and with those involving only implicit tasks. In addition, we used meta-regressions to evaluate the association between sex or age with emotional face processing.
Results
Study selection
The initial search retrieved 5549 studies. After removal of duplicate records, conference and dissertation abstracts and foreign-language publications, 1823 studies underwent title and abstract screening, of which 519 were submitted to full-text screening. From this subset, we excluded 477 studies, primarily because they lacked an appropriate neuroimaging contrast (n = 158) or a whole-brain analysis (n = 121), leaving 53 studies (i.e., 55 unique data sets) to be included in the meta-analysis (Figure 1 and Table 1).
Meta-analyses of emotional face processing
Table 2 shows the peak coordinates and effect sizes (Hedges g) of the activation clusters associated with each emotion relative to neutral faces alone (Figure 2A), or in combination with a fixation cross (i.e., baseline condition). Findings are reported at the threshold of significance pFWE < 0.05, which was relaxed to an uncorrected p < 0.005 when further exploration was warranted. Table 2 also indicates regions that survived the conservative threshold of pFWE < 0.01. Unless otherwise stated, all findings showed no significant heterogeneity or publication bias.
Angry
The contrast of angry relative to neutral faces (n = 21) was associated with activation in the right cerebellum and FFA and the right middle temporal gyrus (MTG) at pFWE < 0.05 (Table 2 and Figure 2A). Relative to baseline (n = 26), angry faces were associated with activation in the bilateral cerebellum and FFA, left IFG, right MTG, right inferior occipital gyrus (IOG) and left amygdala at pFWE < 0.05 (Table 2). The right IOG cluster showed a significant publication bias (p = 0.046).
Fearful
The contrast of fearful relative to neutral faces (n = 27) evoked activation in the right inferior temporal gyrus (ITG), FFA and cerebellum; the left putamen, hippocampus and amygdala; and the right parahippocampal gyrus (PHG) and amygdala at pFWE < 0.05 (Table 2 and Figure 2A). Relative to baseline (n = 33), fearful faces were associated with activation in the bilateral cerebellum, FFA, amygdala and supplementary motor area (SMA), and the left inferior parietal gyrus (IPG) and left thalamus at pFWE < 0.05 (Table 2).
Disgusted
Relative to neutral faces, disgusted faces (n = 8) were associated with increased activation in the left middle occipital gyrus (MOG) at pFWE < 0.05 (Table 2 and Figure 2A). Compared with baseline (n = 10), disgusted faces were associated with increased activation in the left SMA and left superior frontal gyrus (SFG) at pFWE < 0.05 (Table 2). The cluster of activation in the SMA had high heterogeneity (I2 > 50%) across studies and a significant publication bias (p < 0.001).
Happy
The contrast of happy relative to neutral faces (n = 20) evoked no significant activation at pFWE < 0.05 (Table 2 and Figure 2A), but further exploration using an uncorrected p threshold less than 0.005 revealed an activation in the left MOG and right FFA (Table 2). The contrast of happy relative to baseline (n = 38) elicited activation in the bilateral MOG at pFWE < 0.05.
Sad
The contrast of sad versus neutral faces (n = 9) showed no activation at either pFWE < 0.05 or an uncorrected p < 0.005 (Table 2). The contrast of sad faces relative to baseline (n = 10) showed no activation at pFWE < 0.05, but further exploration revealed activation in the left insula at uncorrected p < 0.005 (Table 2).
Neutral face v. fixation cross
Neutral faces, relative to a fixation cross, evoked activation in the right SMA, the right cerebellum and putamen, the left postcentral gyrus, the right IFG and the left hippocampus at pFWE < 0.05.
Hierarchical clustering analyses
Pairwise correlations between meta-analytic maps across emotional categories (Figure 2B) were in the lower moderate range (r = 0.3 to 0.4), except between angry and fearful faces, which showed a higher moderate correlation (r = 0.47), and between sad and any other emotional faces, which were in the low range (r = −0.1 to 0.2). Accordingly, the dendrogram first merged the processing of angry and fearful faces (cophenetic distance d = 0.53), which was subsequently merged with the processing of disgusted and happy faces (d = 0.64 to 0.66). Processing of sad faces was not merged until the top of the dendrogram (d = 0.93). Bootstrapping differentiated the activation clusters of angry and fearful faces from those of disgusted faces with 100% probability, those of disgusted faces from those of happy faces with 73% probability and those of happy faces from those of sad faces with 100% probability.
Meta-analyses of emotion groups
Negative emotional faces
The contrast of negative emotion relative to neutral faces (n = 40) evoked increased activation in the bilateral cerebellum and FFA, the left IFG and SFG, the right putamen and amygdala and the left MTG (Figure 2C and Appendix 1, Table S2). The contrast of negative emotion relative to baseline (n = 51) was associated with increased activation in the bilateral cerebellum and FFA, the right precentral gyrus and amygdala, the left IFG, the left SFG, the right MTG, the left postcentral gyrus and the right median cingulate and paracingulate gyrus.
Threatening emotional faces
The contrast of threatening relative to neutral faces (n = 35) evoked increased activation in the bilateral cerebellum and FFA, the left IFG, the bilateral putamen and amygdala, the right MTG and the right precentral gyrus at pFWE < 0.05 (Figure 2C and Appendix 1, Table S3). The contrast of threatening faces relative to baseline (n = 46) evoked increased activation in the bilateral cerebellum and FFA, the right precentral gyrus and amygdala, the left IFG, the left SFG, the right MTG, the left postcentral gyrus and the right median cingulate and paracingulate gyrus at pFWE < 0.05.
Exploratory pairwise comparisons between emotional contrasts
Negative emotions v. happy faces
Relative to happy faces, angry faces were associated with activation in the right MTG and left IFG, disgusted faces were associated with activation in the left IFG and right putamen, and fearful faces were associated with activation in the right ITG, right precentral gyrus and right putamen and insula, whereas sad faces were associated with deactivation in the bilateral MOG and right FFA, all at pFWE < 0.05 (Appendix 1, Figure S1A and Table S2).
Pairwise comparisons among negative emotional faces
In pairwise comparisons between the negative emotion versus neutral face contrasts (Appendix 1, Figure S1B and Table S3), we found that angry faces showed increased activation in the right MTG, compared with disgusted or fearful faces, at pFWE < 0.05. Compared with sad faces, angry faces also showed increased activation in the bilateral IOG, right FFA, right superior temporal gyrus, left middle frontal gyrus, left lingual gyrus and right MTG at pFWE < 0.05. Disgusted faces showed decreased activation in the right ITG compared with fearful faces and increased activation in the SMA, left MOG, right putamen and left IFG compared with sad faces. Finally, fearful faces, relative to sad faces, were associated with increased activation in the bilateral FFA, PHG and amygdala, the right precentral gyrus and the left IOG at pFWE < 0.05.
Hemispheric lateralization of activation
Amygdala activation was left-lateralized during the processing of all emotion categories. Activation of the anterior insula also showed left hemispheric lateralization during the viewing of all emotions except disgusted faces, which showed no lateralization, and happy faces, which showed right lateralization. Meanwhile, the vmPFC showed a right dominance for processing all negative emotions but left hemispheric lateralization in response to happy faces (Figure 3 and Appendix 1, Table S4).
Other exploratory meta-analyses
No significant brain activation was found in sensitivity meta-analyses involving subgroups of adult participants or implicit tasks within each emotional category. No significant association was observed between age or sex and each category of emotional face processing.
Quality assessment ratings
Among the included studies, 31 (58.5%) were rated good and the remainder fair; no studies were rated poor (Appendix 1, Figure S2). Therefore, we included all studies in the meta-analysis.
Discussion
Theories of emotional face processing have relied on findings from studies involving the processing of broad emotional stimuli and tasks that might be influenced by other cognitive functions. This meta-analysis focused on studies that compared emotional and neutral faces using a single passive or active emotional face viewing paradigm. Our primary findings showed extensive activation clusters by fearful faces relative to neutral faces in the right ITG, right FFA, left putamen and amygdala, right PHG and cerebellum. Fewer activation clusters were evoked by angry faces in the right cerebellum and right MTG, and by disgusted faces in the left MTG, relative to neutral faces. Happy and sad faces did not evoke activation beyond the main threshold of significance, relative to neutral expressions. However, an exploration using a less conservative uncorrected threshold (p < 0.005) showed that happy faces evoked activation in the left MOG and right FFA. Fearful and angry faces appeared to have the highest meta-analytic map correlation and shortest cophenetic proximity, while happy and sad faces had the lowest correlation to and highest distance from any other emotional faces.
Different emotions primarily evoked discrete clusters of activation. Among the different emotional face categories, fearful faces appear uniquely associated with amygdala activation, which corresponds to the idea that the amygdala is more sensitive toward, and is more strongly activated by, fearful faces than any other emotion.12,76,77 We also found that fearful faces evoked the most extensive clusters of activation in the brain, which is similar to findings from a recent meta-analysis of whole-brain studies of emotional faces relative to neutral faces.15 Such wide-ranging activation may reflect a heightened vigilance and arousal supporting the readiness for flight-or-fight responses.78,79
Notable findings are the overlaps of several activation clusters (i.e., cerebellum and closely neighbouring posterior–temporal regions) during the processing of fearful and angry faces, and the closest proximity of these 2 emotions in the novel hierarchical clustering analysis. Furthermore, the cerebellum, neighbouring posterior–temporal regions and amygdala were all activated when we investigated the processing of fearful and angry faces in combination, compared with neutral faces. This functional similarity is presumably related to the threats conveyed by both emotional faces, whether inferred from a third-person perspective in the case of fearful faces, or directly experienced by the observer of an angry face. The cerebellum plays an important role in fear learning and mediating motor response to threats,80,81 and its stimulation enhances the perception of threatening faces.82 The activation of posterior–temporal cortices may be related to the engagement of the theory of mind network to assess the intention of a potential perpetrator.17,83
Disgusted faces elicited activation in the left MOG, partly replicating previous findings in the bilateral MOG.15 However, unlike previous findings of activation in the left amygdala by sad and happy faces,15 we did not observe a consistent brain response to these facial emotions in the present study. These discrepant findings may be related to the specificity of the eligible task in the present study, and may also indicate that these effects were weaker in magnitude, such that they were undetectable with the inclusion of fewer studies in this meta-analysis.
The meta-analytic maps of emotional categories, which show imperfect correlation of weak to moderate–strong magnitude, point toward an underlying general neural response during emotional face processing, as theorized by the constructionist model,8 although the spatial response patterns seem specific to the emotion. In the context of the theories of emotional face processing, however, our most interesting finding was the lack of consistent amygdala activation during the processing of emotional faces other than fearful faces, and the hierarchy of cophenetic proximities that led to early agglomerative clustering of threat-related activation by fearful and angry faces. These findings show that the human brain orients its response to emotional faces based on the absence or the presence of threats, the latter of which receives processing priority. This may explain why happy and sad faces — which convey the absence of a potential threat and, for sad faces, the presence of vulnerability — evoke less response and show a weaker meta-analytic map correlation in relation to fearful and angry faces.
Negative emotional faces (i.e., fearful, angry, disgusted, sad), relative to neutral faces, were associated with activation in the right amygdala, temporal and fusiform cortices, corresponding to previous meta-analytic findings,9,15 which extended to the cerebellum — presumably driven by responses to the angry and fearful faces — and to the left inferior and superior frontal cortices. The additional comparison of threatening (i.e., angry, fearful) faces relative to neutral faces, conducted following the hierarchical clustering analyses, revealed activation in highly overlapping areas in the bilateral putamen and amygdala, the cerebellum and FFA, the left IFG, the right MTG and the right precentral gyrus, indicating the predominant contribution of threatening faces to the activation clusters observed in the contrast between negative emotion and neutral faces. Furthermore, exploratory pairwise meta-analytic comparisons among facial emotion categories (Appendix 1, Figure S1) elucidated the activation clustering elicited by these 5 emotion categories, whereby angry or fearful faces evoked the most pervasive pattern of activation when compared with happy and particularly sad faces, which are the furthest in cophenetic distance. Overall, our findings support the view that emotions are processed in a latent grouping, as previously suggested,9 but this grouping is based on threat content rather than valence.
We investigated laterality effects in the amygdala, anterior insula and vmPFC, following previous studies.15,21 There are 2 classic hypotheses of brain lateralization of emotional processing. The right-hemisphere dominance theory proposes that all emotions are predominantly processed in the right brain hemisphere,21,84 while the valence lateralization hypothesis proposes a left hemispheric specialization in processing positive and approach-related emotions, and a right-lateralized processing of negative and withdrawal emotion, particularly in the frontal area.85,86
Our findings provide little support for an absolute right-hemisphere dominance of emotional face processing.21,84 Evoked activation across all emotional categories was exclusively left-lateralized in the amygdala or predominantly left-lateralized in the anterior insula. The exclusive left-lateralized processing of emotional faces in the amygdala is in line with the conclusions of some reviews and meta-analyses15,87 but not of others, including our previous meta-analytic attempt.12,21,84,88,89 The different findings may be related to the exclusion of studies involving subliminal or subconscious perception of unattended emotional faces, which are thought to be associated with predominantly right amygdala activation.88,89 The anterior insula activation, which was predominantly left-lateralized during perception of nearly all emotional faces except happy faces, only partly replicates recent findings.15 Specifically, the right-lateralized anterior insular response to happy faces contradicts a previous meta-analytic finding of left-lateralized insula activation by positive emotions,90 although that meta-analysis included studies involving emotional experience, as well as emotional face perception, which may have confounded the laterality of findings. Interestingly, lesion studies have shown an association of left hemispheric insula lesions with emotional face recognition,91,92 and of right anterior insula lesions with happy and angry face recognition,93 although the emotional or regional specificity of these associations was not always examined. Finally, activation in the vmPFC was predominantly right-lateralized for all emotion categories, which also partly replicates recent findings,15 except for the processing of happy faces, which was left-lateralized.
The frontal pattern of asymmetry in this region is thus in line with the valence hypothesis.85 Overall, our findings suggest a general pattern of left-lateralized activation in the amygdala and insula, and right-lateralized activation in the vmPFC. Deviations from this general pattern should be interpreted with caution, given the relatively few studies included in the meta-analyses. Taken together, our findings show that the processing of emotional faces is more complex than existing theories have proposed.
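For readers unfamiliar with how laterality is typically quantified in neuroimaging, the sketch below illustrates the standard laterality index, LI = (L − R) / (L + R), computed over left- and right-hemisphere activation magnitudes. Note that this is an illustrative assumption: the article does not state its exact laterality metric, and the ±0.2 cutoff for calling a region "lateralized" is a common convention, not a value taken from this study.

```python
def laterality_index(left: float, right: float) -> float:
    """Standard laterality index over hemispheric activation magnitudes.

    Returns a value in [-1, 1]: +1 means fully left-lateralized,
    -1 fully right-lateralized, 0 perfectly bilateral.
    """
    total = abs(left) + abs(right)
    if total == 0:
        return 0.0  # no activation in either hemisphere
    return (abs(left) - abs(right)) / total


def classify(li: float, threshold: float = 0.2) -> str:
    # Common (assumed) convention: |LI| > 0.2 counts as lateralized.
    if li > threshold:
        return "left-lateralized"
    if li < -threshold:
        return "right-lateralized"
    return "bilateral"
```

For example, a hypothetical amygdala contrast with left effect size 0.8 and right effect size 0.3 yields LI ≈ 0.45 and would be classified as left-lateralized under this convention.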
Finally, several studies used a fixation cross as a control condition when investigating emotional face processing. Our meta-analysis shows that neutral faces evoke non-negligible activation in prefrontal, subcortical and cerebellar areas when compared with a fixation cross. Furthermore, including studies with a fixation cross as a control condition yields more activation clusters for each emotion than contrasts with neutral faces alone, presumably because of the recruitment of regions involved in processing facial features unrelated to emotion. These findings demonstrate the need to control for general face perception when studying the processing of each emotional category of faces.
Limitations
As with other imaging meta-analyses, the reliance on discretized effect sizes based on peak coordinate locations rather than statistical brain maps, together with the varying statistical thresholds reported in the original studies, could lead to underdetection of smaller activation clusters. At an uncorrected significance threshold, the activation in the left MOG and right FFA during the viewing of happy faces relative to neutral faces should be considered preliminary and will require confirmation in future meta-analyses. Limiting our scope to a single paradigm involving the viewing of emotional faces resulted in the inclusion of relatively few studies, which further constrained the power to detect smaller effects; this may explain why the meta-regression analyses found no influence of age or sex on activation related to emotional processing. However, this methodological choice also enhances the specificity of our findings to brain activation related to emotional face processing, controlling for nonspecific cognitive processes beyond passive or active emotional face viewing.
Conclusion
This meta-analysis is among the few that have specifically investigated the processing of discrete emotional face categories relative to neutral faces while focusing exclusively on studies involving the viewing of emotional faces. The primary findings suggest that the human brain processes emotional faces in a way that prioritizes the identification of threats (i.e., fearful and angry faces) over nonthreatening emotional categories (i.e., happy and sad faces), with a more complex lateralization pattern than existing theories have proposed. This supports the view that emotional faces are processed in latent groupings organized by threat content rather than valence, which provides a novel framework for theorizing how emotional faces are processed in the human brain.
Footnotes
Competing interests: Paolo Fusar-Poli has received research funds or personal fees from Lundbeck, Angelini, Menarini, Sunovion, Boehringer Ingelheim and Proxymm Science, all outside the current study. No other competing interests were declared.
Contributors: Steve Lukito, Lydia Fortea, Joaquim Radua and Paolo Fusar-Poli contributed substantially to study conception and design. Steve Lukito, Lydia Fortea, Federica Groppi, Ksenia Zuzanna Wykret, Eleonora Tosi, Vincenzo Oliva and Stefano Damiani contributed to data acquisition. Steve Lukito, Lydia Fortea, Vincenzo Oliva and Joaquim Radua analyzed the data. Steve Lukito, Lydia Fortea, Stefano Damiani, Joaquim Radua and Paolo Fusar-Poli interpreted the data. Steve Lukito and Lydia Fortea drafted the article. All of the authors critically revised it for important intellectual content and gave final approval of the version to be published.
Funding: This study was supported by a Wellcome Trust grant to Paolo Fusar-Poli (Early detection of mental disorders, ENTER [215793/Z/19/Z]) and by the United Kingdom Department of Health via the National Institute for Health Research (NIHR) Biomedical Research Centre for Mental Health at South London and Maudsley National Health Service (NHS) Foundation Trust and the Institute of Psychiatry, Psychology & Neuroscience, King's College London. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health. The funders had no influence on the design, collection, analysis and interpretation of the data, the writing of the report or the decision to submit this article for publication. In accordance with the Wellcome Trust's policy on data, software and materials management and sharing, data supporting this study will be openly available from the supplementary materials and from a publicly open repository, osf.io/gmv9n/.
- Received May 3, 2023.
- Revision received July 7, 2023.
- Accepted August 1, 2023.
This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY-NC-ND 4.0) licence, which permits use, distribution and reproduction in any medium, provided that the original publication is properly cited, the use is noncommercial (i.e., research or educational use), and no modifications or adaptations are made. See: https://creativecommons.org/licenses/by-nc-nd/4.0/