      Human Centric Facial Expression Recognition


      Proceedings of the 32nd International BCS Human Computer Interaction Conference (HCI)


      4 - 6 July 2018

Keywords: Facial Expression Recognition, Emotion Recognition, Deep Learning, Convolutional Neural Network


          Abstract

          Facial expression recognition (FER) is an area of active research, both in computer science and in behavioural science. Across these domains there is evidence to suggest that humans and machines find it easier to recognise certain emotions, for example happiness, in comparison to others. Recent behavioural studies have explored human perceptions of emotion further, by evaluating the relative contribution of features in the face when evaluating human sensitivity to emotion. It has been identified that certain facial regions have more salient features for certain expressions of emotion, especially when emotions are subtle in nature. For example, it is easier to detect fearful expressions when the eyes are expressive. Using this observation as a starting point for analysis, we similarly examine the effectiveness with which knowledge of facial feature saliency may be integrated into current approaches to automated FER. Specifically, we compare and evaluate the accuracy of ‘full-face’ versus upper and lower facial area convolutional neural network (CNN) modelling for emotion recognition in static images, and propose a human centric CNN hierarchy which uses regional image inputs to leverage current understanding of how humans recognise emotions across the face. Evaluations using the CK+ dataset demonstrate that our hierarchy can enhance classification accuracy in comparison to individual CNN architectures, achieving overall true positive classification in 93.3% of cases.
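The abstract describes the proposed hierarchy only at a high level. As a rough illustration of the idea of combining a full-face CNN with upper- and lower-face regional CNNs, the sketch below assumes Keras/TensorFlow, illustrative layer sizes and image dimensions, simple horizontal splits for the regional inputs, and posterior averaging as a stand-in fusion rule; none of these specifics are given in the paper, whose actual architecture and hierarchy logic may differ.

```python
# Sketch of a regional-CNN ensemble for FER; all sizes and the fusion
# rule are illustrative assumptions, not the paper's reported design.
import numpy as np
from tensorflow.keras import layers, models

def build_region_cnn(input_shape, num_classes=7):
    """Small CNN for one facial region (full face, upper half, or lower half)."""
    return models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

# Three regional models: full face plus upper and lower halves.
full_model  = build_region_cnn((96, 96, 1))
upper_model = build_region_cnn((48, 96, 1))
lower_model = build_region_cnn((48, 96, 1))

def predict_hierarchy(face):
    """Hypothetical fusion: average the class posteriors of the three models.

    `face` is a (96, 96, 1) grayscale crop; the upper/lower inputs are
    simple horizontal splits of the same image.
    """
    upper, lower = face[:48], face[48:]
    probs = (
        full_model.predict(face[np.newaxis]) +
        upper_model.predict(upper[np.newaxis]) +
        lower_model.predict(lower[np.newaxis])
    ) / 3.0
    return int(np.argmax(probs))
```

In practice each regional model would be trained on its own crops (e.g., of CK+ images) before fusion; averaging is only one possible way to let the regional predictions inform the final decision.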


Most cited references (24)


          Configural information in facial expression perception.

          Composite facial expressions were prepared by aligning the top half of one expression (e.g., anger) with the bottom half of another (e.g., happiness). Experiment 1 shows that participants are slower to identify the expression in either half of these composite images relative to a "noncomposite" control condition in which the 2 halves are misaligned. This parallels the composite effect for facial identity (A. W. Young, D. Hellawell, & D. C. Hay, 1987), and like its identity counterpart, the effect is disrupted by inverting the stimuli (Experiment 2). Experiment 3 shows that no composite effect is found when the top and bottom sections contain different models' faces posing the same expression; this serves to exclude many nonconfigural interpretations of the composite effect (e.g., that composites are more "attention-grabbing" than noncomposites). Finally, Experiment 4 demonstrates that the composite effects for identity and expression operate independently of one another.

            Emotion recognition: the role of facial movement and the relative importance of upper and lower areas of the face.

In order to investigate the role of facial movement in the recognition of emotions, faces were covered with black makeup and white spots. Video recordings of such faces were played back so that only the white spots were visible. The results demonstrated that moving displays of happiness, sadness, fear, surprise, anger and disgust were recognized more accurately than static displays of the white spots at the apex of the expressions. This indicated that facial motion, in the absence of information about the shape and position of facial features, is informative about these basic emotions. Normally illuminated dynamic displays of these expressions, however, were recognized more accurately than displays of moving spots. The relative effectiveness of upper and lower facial areas for the recognition of these six emotions was also investigated using normally illuminated and spots-only displays. In both instances the results indicated that different facial regions are more informative for different emotions. The movement patterns characterizing the various emotional expressions as well as common confusions between emotions are also discussed.

              Facial expressions of emotion (KDEF): identification under different display-duration conditions.

Participants judged which of seven facial expressions (neutrality, happiness, anger, sadness, surprise, fear, and disgust) were displayed by a set of 280 faces corresponding to 20 female and 20 male models of the Karolinska Directed Emotional Faces database (Lundqvist, Flykt, & Öhman, 1998). Each face was presented under free-viewing conditions (to 63 participants) and also for 25, 50, 100, 250, and 500 msec (to 160 participants), to examine identification thresholds. Measures of identification accuracy, types of errors, and reaction times were obtained for each expression. In general, happy faces were identified more accurately, earlier, and faster than other faces, whereas judgments of fearful faces were the least accurate, the latest, and the slowest. Norms for each face and expression regarding level of identification accuracy, errors, and reaction times may be downloaded from www.psychonomic.org/archive/.

                Author and article information

Conference paper
July 2018
Pages 1-12
                Affiliations
[1] Faculty of Computer Science, University of Sunderland, Sunderland, SR1 3SD, UK
[2] Faculty of Health, Sciences and Wellbeing, University of Sunderland, SR1 3QR, UK
Article
DOI: 10.14236/ewic/HCI2018.44
                © Clawson et al. Published by BCS Learning and Development Ltd. Proceedings of British HCI 2018. Belfast, UK.

This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/

Proceedings of the 32nd International BCS Human Computer Interaction Conference (HCI 32)
Belfast, UK
4 - 6 July 2018
Electronic Workshops in Computing (eWiC)
ISSN: 1477-9358
Publisher: BCS Learning & Development
Journal page: https://ewic.bcs.org/
                Categories
                Electronic Workshops in Computing
