
Recognition of emotion from moving facial and prosodic stimuli in depressed patients
Y Kan,1 M Mimura,2 K Kamijima,2 M Kawamura3,4

  1. Shinshu University School of Medicine, Matsumoto, Japan
  2. Department of Neuropsychiatry, Showa University School of Medicine, Tokyo, Japan
  3. Department of Neurology, Showa University School of Medicine
  4. CREST, Japan Science and Technology Corporation, Kawaguchi-shi, Japan

Correspondence to: Dr Mitsuru Kawamura, Department of Neurology, Showa University School of Medicine, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo 142-8666, Japan; kawamed.showa-u.ac.jp

Abstract

Background: It has been suggested that depressed patients have a “negative bias” in recognising other people’s emotions; however, the detailed structure of this negative bias is not fully understood.

Objectives: To examine the ability of depressed patients to recognise emotion, using moving facial and prosodic expressions of emotion.

Methods: 16 depressed patients and 20 matched (non-depressed) controls selected one basic emotion (happiness, sadness, anger, fear, surprise, or disgust) that best described the emotional state represented by moving face and prosody.

Results: There was no significant difference between depressed patients and controls in their recognition of facial expressions of emotion. However, the depressed patients were impaired relative to controls in their recognition of surprise from prosodic emotions, judging it to be more negative.

Conclusions: We suggest that depressed patients tend to interpret neutral emotions, such as surprise, as negative. Considering that the deficit was seen only for prosodic emotive stimuli, it would appear that stimulus clarity influences the recognition of emotion. These findings provide valuable information on how depressed patients behave in complicated emotional and social situations.

Abbreviations: HDRS, Hamilton depression rating scale; MMSE, mini-mental state examination; SDS, Zung self rating depression scale

Keywords: facial emotion recognition; mood disorder; negative cognitive process


Recognition of facial emotion is an important aspect of interpersonal communication. Depressed patients are thought to have a negative cognitive bias in their appraisal of people or life events,1 and to be unable to carry out normal interpersonal interactions.2 Negative cognitive processing in depressed patients has been investigated by means of a paradigm in which other people’s emotions are judged. Several studies have reported that depressed patients are impaired in their recognition of emotions conveyed by facial expressions and pictures,3–5 and that the deficits play a role in the persistence of depression.6,7

However, although previous studies agree that recognition of emotion is affected in depressed patients, they are inconsistent as to how emotional processing is affected, so several important questions about the negative cognitive process remain. First, it is not clear whether depressed patients are more sensitive to negative emotion or less sensitive to positive emotion. Hypersensitivity in judging negative emotional stimuli, especially sadness, has often been reported,7,8 as has lower estimation of positive stimuli.5,7 Second, it is not clear whether the altered ability of depressed patients to assess emotion applies equally to all categories of emotion or only to specific emotions. Asthana et al9 did not detect category specific deficits in emotion recognition and concluded that depressed patients’ inability to recognise emotion appropriately reflects a general perceptual impairment. Third, the emotional stimuli used in previous studies of facial emotion recognition were confined to line drawings or photographs of faces. Facial expressions on such static faces might not be as readily recognisable as actual expressions, and might not constitute an appropriate paradigm by which to study facial emotion recognition, because drawings and photographs contain no dynamic information. In addition, as few studies have used non-visual stimuli such as voices, potential differences in emotion recognition across stimulus modes have not been investigated fully.

In this study, we used two types of emotional stimuli—videotaped facial expressions (that is, a visual stimulus) and prosodic stimuli (a non-visual stimulus)—to investigate negative cognitive processing of emotion in depressed patients. We paid attention to the issue of differences in sensitivity to positive and negative emotions, and to specificity among emotion categories (six basic emotions) and modes (visual and auditory stimuli).

METHODS

Subjects

We tested 16 depressed inpatients (nine men and seven women, mean (SD) age, 50.9 (12.3) years). At the beginning of the study, a staff psychiatrist interviewed prospective subjects, and those who met the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) criteria10 for major depressive episodes were included. Patients were excluded from the study if they had a history of drug dependence or other major psychiatric illness.

Table 1 shows the background data for the patients. All but one were receiving antidepressant drug treatment. Doses are given in imipramine equivalents; the mean (SD) dose was 132 (65) mg/day. The patients had either single or recurrent episodes, and the mean duration since onset of the disease (shown as duration 1) was 60.0 (70.8) months; the mean duration of the current symptoms (shown as duration 2) was 4.25 (1.61) months. The severity of depressive symptoms was assessed using the Hamilton depression rating scale (HDRS)11 and the Zung self rating depression scale (SDS).12 The mean HDRS score was 18.3 (8.64). The mean SDS score was 60.0 (15.3) (<40 = normal, 40–50 = borderline, >50 = depressed); 12 of the 16 patients scored unambiguously in the depressed range, one was borderline, and three were in the normal range. Intellectual function was evaluated using the mini-mental state examination (MMSE; ⩽23 = demented; >23 = normal),13 and none of the patients showed evidence of dementia. To assess visuoperceptual function, we carried out a facial identity discrimination test on each patient: patients were shown pairs of photographs of women’s faces (modified from Nakamura et al14), which lacked non-facial cues such as hairstyle, and were asked to judge whether the two faces were the same or different. The percentage of correct answers is presented as FI (%) in table 1. None of the depressed patients had any abnormalities of eyesight.
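For reference, the cut-off rules quoted above can be expressed as a small classification sketch. The thresholds come directly from the text; the Python form and the function names are illustrative only, not part of the study.

# Illustrative sketch of the SDS and MMSE cut-offs quoted in the text;
# function names are hypothetical, thresholds are as stated above.

def classify_sds(score: float) -> str:
    """Zung SDS: <40 = normal, 40-50 = borderline, >50 = depressed."""
    if score < 40:
        return "normal"
    if score <= 50:
        return "borderline"
    return "depressed"

def classify_mmse(score: int) -> str:
    """MMSE: <=23 suggests dementia, >23 = normal."""
    return "demented" if score <= 23 else "normal"

print(classify_sds(60.0))  # the patients' mean SDS score -> "depressed"
print(classify_mmse(28))   # -> "normal"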

Table 1

 Background data on the depressed patients

The depressed patients were compared with controls who were not depressed and who had no history of neurological or psychiatric illness (n = 20, 10 men and 10 women, mean age 59.0 (14.7) years). There were no significant age differences between the depressed patients and the controls participating in each type of test. All the participants gave informed consent before testing.

Stimuli

To assess the ability of depressed patients and controls to recognise emotion from moving facial and prosodic stimuli, we used the same tasks as in the report by Kan et al,15 which were standardised on 76 normal young students and elicited more than 80% agreement. The facial stimuli were videotaped facial expressions of six basic emotions (happiness, sadness, anger, fear, surprise, and disgust), expressed by professional male and female actors. The stimuli depicted neutral–emotional–neutral changes in expression, with the emotional expression lasting for two seconds. The faces were shown at the same size, in colour, on a 21×28 cm television screen. The prosodic stimuli were four semantically neutral sentences (such as “good morning”) and six short nonsense sentences read by the same actors, who used tone of voice to convey the six basic emotions. The number of characters in the nonsense sentences was identical across the stimuli, and the duration of the recorded voice was generally similar in each. The stimuli were adjusted to the same loudness using an audio recorder.

Experimental design

The experiment was conducted in a comfortable, quiet room. The stimuli were presented one at a time in randomised order, using a television or tape recorder as appropriate. The subjects were asked to select, from a set of cards, the one basic emotion that best described the emotional state represented in the video or tape recording, and were instructed to consider all six alternatives carefully before responding. Before testing, all subjects were asked to explain the six basic emotions, to confirm that they understood the meaning of each emotion word.

RESULTS

Emotion recognition from moving facial stimuli

The percentage of correct responses for each emotion is shown in the upper panel of table 2. We compared performance using a repeated measures analysis of variance (ANOVA), with group (depressed patients and controls) and emotion (the six basic emotions) as factors. There was a significant main effect for emotion (F(5,34) = 12.2, p<0.01), but no significant effect of group and no group–emotion interaction. A post hoc comparison using least significant difference (LSD) analysis revealed that fear was recognised less accurately than the other five emotions (MSe = 0.02, p<0.05).
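A minimal sketch of this kind of 2 × 6 mixed-design ANOVA is given below. This is not the authors’ analysis code: the data file, the column names, and the use of the third-party pandas and pingouin packages are assumptions for illustration only.

# Sketch of a 2 (group) x 6 (emotion) mixed ANOVA on per-subject accuracy.
# The file name and column names are hypothetical placeholders.
import pandas as pd
import pingouin as pg

df = pd.read_csv("facial_emotion_accuracy.csv")
# expected columns: subject, group ("depressed" or "control"),
# emotion (one of the six basic emotions), accuracy (proportion correct)

# group is a between-subjects factor, emotion is within-subjects
aov = pg.mixed_anova(data=df, dv="accuracy", within="emotion",
                     subject="subject", between="group")
print(aov)

# Post hoc comparisons among emotions (the paper used an LSD test;
# uncorrected pairwise t tests play a similar role here).
# Note: this function is named pairwise_ttests in older pingouin versions.
posthoc = pg.pairwise_tests(data=df, dv="accuracy", within="emotion",
                            subject="subject", padjust="none")
print(posthoc)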

Table 2

 The percentage of correct responses for each emotion

Emotion recognition from prosodic stimuli

The lower panel in table 2 shows the emotion recognition from prosodic stimuli. A 2×6 (group×emotion) ANOVA with repeated measures revealed a significant main effect for emotion (F(5,34) = 5.29, p<0.01) and for the interaction between group and emotion (F(5,170) = 2.93, p<0.01). Further examination of the simple main effect showed that recognition of surprise by the depressed patients was impaired compared with the controls (F(1,34) = 5.58, p<0.01).

As the surprise recognition scores in the prosodic task were significantly lower in the depressed patients than in the controls, we further examined the pattern of recognition errors. Table 3 shows the percentage of responses in the prosodic task for the depressed patients and controls, for each emotion. As in the ANOVA described above, recognition of fear was less accurate than recognition of the other emotions in both the depressed patients and the controls, and all subjects tended to confuse fear with sadness or surprise. The depressed patients usually misrecognised surprise as fear. To inspect the distribution of recognition responses, we used a dual scaling method. Dual scaling is an analytical strategy that establishes the optimal spacing (weights) of the rows and columns of a data matrix as coordinates on principal axes.16 We increased the sample size by adding data from another eight normal controls and 76 normal young students to the analysis, in order to obtain a more appropriate solution and to facilitate a broad inspection of how the six emotions are recognised. We analysed an 18 × 6 contingency table, in which the 18 rows were the six emotions presented to each of the three groups (depressed patients, normal controls, and normal young students) and the six columns were the response categories. The first two solutions (dimensions) were used to make interpretation easier. Although this analysis was confined to prosodic stimuli, we obtained a configuration similar to analyses of facial stimuli presented in previous studies,16,17 and hedonic and arousal axes were applied to the space (fig 1). In fig 1, the normalised configuration of the response categories is represented by large bold letters, and the responses of each group are presented as lower case abbreviations. Close proximity between data points represents similarity between stimuli, that is, similar perception of emotions. The responses of each group to fear, anger, and sadness were clustered together (fig 1). By contrast, for the category of surprise, the depressed patients’ responses shifted towards the negative emotions. Their responses to happiness also shifted in a negative emotional direction, although the difference in accuracy between depressed patients and controls was not statistically significant in the ANOVA.
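A minimal numerical sketch of dual scaling, which for a contingency table is essentially correspondence analysis, is given below. The counts are placeholders rather than the study data, and only the numpy package is assumed.

# Dual scaling / correspondence analysis of a stimulus-by-response table.
# The counts below are placeholders for illustration, not the study data.
import numpy as np

# rows: stimulus emotions (for one group), columns: response categories
N = np.array([
    [20.0,  3.0,  2.0],
    [ 4.0, 18.0,  3.0],
    [ 2.0,  5.0, 19.0],
])

P = N / N.sum()                                      # correspondence matrix
r = P.sum(axis=1)                                    # row masses
c = P.sum(axis=0)                                    # column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardised residuals
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# principal coordinates of rows and columns on the first two axes
# (the kind of two dimensional space plotted in fig 1)
row_coords = (U[:, :2] * sv[:2]) / np.sqrt(r)[:, None]
col_coords = (Vt.T[:, :2] * sv[:2]) / np.sqrt(c)[:, None]
print(row_coords)
print(col_coords)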

Table 3

 The percentage of responses in the prosodic task for each emotion

Figure 1

 Resolution of emotion recognition with dual scaling. The large bold letters represent the normalised configuration of the response categories (H, happiness; Sa, sadness; A, anger; F, fear; Su, surprise; D, disgust). The lower case abbreviations represent the subjects’ responses (Dep, depressed patients; No, normal controls; Ny, normal young students).

Correlation between accuracy of recognising prosodic surprise and clinical background variables

In order to examine the effects of antidepressant drugs and depressive symptom severity on the inaccuracy of prosodic surprise recognition, we calculated correlation coefficients between the percentage of correct responses for prosodic surprise and the medication dose, duration since onset, duration of current symptoms, HDRS score, and SDS score. None of these reached significance (the coefficients were 0.24, 0.15, −0.06, 0.12, and 0.31, respectively; all p>0.05).
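The correlation check above can be sketched as follows. This is not the authors’ code; the file name and column names are hypothetical, and the pandas and scipy packages are assumed.

# Pearson correlations between prosodic surprise accuracy and clinical variables.
# File name and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr

patients = pd.read_csv("depressed_patients.csv")

clinical_vars = ["dose_mg", "duration_since_onset_months",
                 "duration_current_months", "hdrs", "sds"]
for col in clinical_vars:
    r, p = pearsonr(patients["surprise_correct_pct"], patients[col])
    print(f"{col}: r = {r:.2f}, p = {p:.3f}")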

DISCUSSION

In this study, we evaluated the ability of depressed patients to recognise six basic emotions in dynamic facial and prosodic stimuli. Despite previous reports that depression can cause deficits in the recognition of emotions from facial expressions, the depressed patients in our study performed normally in recognising emotions from moving facial stimuli. This inconsistency might reflect differences between the stimuli used in previous studies and ours. In our study, the stimuli were videotaped moving facial expressions, which were thought to convey a substantial amount of information about the emotion being represented. Our results suggest that if sufficient information is available, depressed patients might be able to understand facial expressions normally.

With regard to emotion recognition from prosodic stimuli, our depressed patients could not recognise surprise accurately. Given that they recognised the other five emotions normally (including fear, the most difficult to recognise), it is unlikely that they had trouble hearing. On the hedonic axis of the two dimensional space for emotion,17 surprise was the only neutral emotion among the six used in the present study. Therefore, it is possible that these depressed patients could recognise both positive and negative prosodic emotions normally, but could not recognise neutral prosodic emotions as being neutral.

Dual scaling analysis revealed that the depressed patients confused surprise with more negative emotions, such as sadness, fear, disgust, or anger. Thus it appears that the recognition of neutral prosodic emotions is biased towards negative emotions. The correlation study showed that neither antidepressant drug treatment nor depressive symptom severity influenced the accuracy of the recognition of surprise. This implies that the negative bias is not a state, but a trait of depression, which is present in the patients consistently, regardless of drug treatment or the severity of the depressive symptoms.

Many brain damaged patients have been reported to have deficits in emotion recognition. Some patients with damage to the amygdala show impaired recognition of negative emotions, particularly fear.18 In addition, neurodegenerative disorders of the basal ganglia, such as Parkinson’s disease15,19 or Huntington’s disease,20 are associated with emotion recognition deficits. Frontal lobe injury can also cause severe deficits in emotion recognition and difficulty with social communication.21,22 Unlike the situation in brain damaged patients, the brain abnormality underlying depression has not been fully identified. However, there is some evidence of abnormal function of the frontal lobe23,24 or limbic structures25,26 in depressed patients. Mayberg et al showed that recovery from depression is accompanied by increased prefrontal blood flow together with decreased limbic-paralimbic blood flow.27,28 By contrast, Liotti et al found that blood flow to the medial orbitofrontal cortex was decreased in both acutely ill patients and those in remission, suggesting that this is a trait marker of depression.29 Adolphs proposed that recognition of emotion draws on a distributed set of central nervous system structures, including the occipitotemporal neocortex, amygdala, orbitofrontal cortex, and right frontoparietal cortices.30 These distributed areas work in cooperation, but not as a single indivisible unit, so different types of brain damage produce different disorders.

In the present study, considering that depressed patients showed emotion and mode specific deficits, it is unlikely that they have problems with perception. Rather, it is possible that they are impaired in their judgment of emotion. Thus, if presented with clear and typical expressions of emotion—such as moving facial expressions—it seems likely that depressed patients will recognise emotion accurately. By contrast, if presented with less clear representations of emotion or with neutral emotions, depressed patients are more likely to consider the emotion as negative. This idea is understandable if depressed patients have frontal lobe dysfunction that would disrupt cognitive functions such as reasoning or thought, rather than dysfunction in those areas that are related more to perception. The impaired recognition of emotion in depressed patients would most probably be apparent in a complex social situation.

In summary, we found no differences in sensitivity between depressed patients and normal controls in the recognition of positive and negative emotions. With respect to emotion specificity, only the recognition of surprise was impaired in the depressed patients; with respect to mode specificity, only the recognition of auditory stimuli was impaired. The question then arises as to why the patients judged surprise to be more negative in prosodic stimuli but not in facial stimuli. There are two possible interpretations. First, depression might affect only auditory processing of emotion; as several previous studies have reported impaired emotion recognition from visual stimuli in depressed patients, this interpretation is unlikely. Second, there might be a difference in the clarity of the facial and prosodic stimuli. The percentage of correct responses in the controls was 96.7% for the facial stimuli and 77.5% for the prosodic stimuli, so it is possible that the prosodic representation of surprise was unclear and did not carry sufficient information for the patients to make an accurate judgment. This interpretation implies that the clarity of the stimuli is an important factor influencing the ability of patients to recognise emotion, and suggests that positive and negative emotions, when presented less clearly, may also be misjudged. A further problem is that although surprise is a neutral emotion on the hedonic axis, it is a high arousal emotion on the arousal axis. As we did not include a purely “neutral” expression of emotion in this study, it remains unclear whether neutral stimuli with lower arousal levels are also interpreted as negative. Although further work is needed to resolve these issues, our data suggest that depressed patients have a negative cognitive bias that produces a tendency to judge neutral emotions as negative.

Acknowledgments

This study was supported in part by a Showa University grant-in-aid for innovative collaborative research projects, a special research grant-in-aid for development of characteristic education from the Japanese Ministry of Education, Culture, Sports, Science and Technology, and a grant-in-aid for scientific research on priority areas (c) from the Japanese Ministry of Education, Culture, Sports, Science and Technology (No 15590910).

REFERENCES

Footnotes

  • Competing interests: none declared