Detecting subtle facial expressions using the Eulerian motion magnification method

Year: 2014 · Degree: Master's · Category: Computer Engineering · Pages: 118
  • Summary of Detecting subtle facial expressions using the Eulerian motion magnification method

Master's Thesis, Faculty of Technology and Engineering

Department of Computer Engineering

    Abstract:

In this thesis, a new method is presented for detecting subtle emotional states of the face. The Eulerian video magnification method is used to reveal subtle facial movements: it amplifies small variations in a signal (color changes or translational motion) through combined temporal and spatial processing. The input to this method is video of a person's face, and the output is a sequence of magnified frames. The type of emotional state in the magnified sequence is then determined using the eigenfaces method. In this research, experiments were conducted on 164 videos of 16 subjects from a database of spontaneous micro-expressions. First, we examined the detection of facial changes when subtle emotional states occur: a change is detected, on average, in 65.19% of cases for negative states, 68.29% for positive states, and 69.75% for surprise. We also evaluated the recognition and separation of positive and negative states in the image sequences. As the results show, the overall accuracy of discriminating negative from positive states using Eulerian video magnification is 66.66%, whereas without magnification the overall accuracy drops to 50.00%. Therefore, applying Eulerian video magnification to recover imperceptible facial movements when an emotion occurs improves the performance of both change detection and eigenfaces-based recognition of subtle emotional states.

Keywords: detection of subtle facial emotional states, facial change detection, Eulerian video magnification method, eigenfaces method

    Chapter 1

    Introduction

1-1 Preface

Change detection is an active research area in image processing and machine vision and has grown considerably in recent years. It is a temporal segmentation process performed on images and videos captured at different times. Image change detection techniques play a fundamental role in video surveillance, medical imaging, underwater imaging, and traffic monitoring, where moving objects and targets must be detected automatically. One important application in this field is detecting changes in the face when emotions appear, which plays an important role in human communication. Consequently, facial expression recognition (FER[1]) has been studied not only by psychologists but also within computer science, where many efforts have been made to identify and recognize genuine human facial expressions.

Facial expressions play an essential role in verbal and non-verbal communication and in conveying opinions between people. They express the actions and reactions that humans show toward their surroundings. Facial expression recognition can be useful in many research and application fields, including lip reading, video conferencing, robotics, and psychological studies.

In robotics, humanoid robots interact with the user and adapt to the user's mood. In psychology, therapeutic tests have been designed in which pictures are shown to a patient, the reactions on the patient's face are examined, and the person's mental state is analyzed. Simulating facial emotional states is also important in visual and cartoon animation. These and various other applications have motivated much research in this field.

1-2 Statement of the problem of detecting changes in the image

The change detection problem can be stated as follows. The ultimate goal is to identify the set of connected pixels in one frame that differ from the next frame; this set constitutes the change mask. The task is to identify this mask and segment it from the background.

This mask is a combination of several basic factors, including the movement of an object toward the viewer, changes in the shape and appearance of objects, and the disappearance of objects. Objects can also change under different light intensities or colors. The main challenge is that the change mask should not contain spurious changes caused by camera movement, illumination changes, or weather conditions.

To detect change, the M images captured by the camera each second are represented such that each pixel has coordinates and an intensity vector of dimension k. The value of k depends on the image type: for example, k = 1 for grayscale images and k = 3 for RGB images; other values arise in aerial or satellite surveillance imagery and in medical or biological data. The mathematical rule for detecting change at each pixel of the input image is as follows (Chhabra et al., 2009):

A(x) = 1, if there is a significant change at pixel x of I(x)
A(x) = 0, otherwise                                          (1-1)
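As an illustration, a minimal NumPy sketch of such a per-pixel change mask between two frames: the threshold value and the toy frames are hypothetical, not taken from the thesis.

```python
import numpy as np

def change_mask(frame_a, frame_b, tau=25):
    """Binary change mask in the spirit of Eq. (1-1): 1 where the absolute
    intensity difference between two frames exceeds a threshold tau."""
    diff = np.abs(frame_a.astype(np.int32) - frame_b.astype(np.int32))
    if diff.ndim == 3:          # RGB input (k = 3): combine channels
        diff = diff.max(axis=2)
    return (diff > tau).astype(np.uint8)

# Two tiny synthetic grayscale "frames" (k = 1)
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[1:3, 1:3] = 200               # an object appears between the frames
mask = change_mask(a, b)
print(int(mask.sum()))          # 4 changed pixels
```

In practice the threshold is tuned (or replaced by a statistical test) so that noise, illumination drift, and camera motion do not produce spurious mask pixels, which is exactly the challenge noted above.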

The facial emotional state detection system usually consists of a sequential structure of processing blocks (Khatri et al., 2014). The main blocks are pre-processing[2], feature extraction[3], classification[4], and post-processing[5]. In the pre-processing stage, operations such as noise reduction, normalization with respect to brightness fluctuations, and normalization of image dimensions and resolution are performed. Feature extraction is a very important step in facial expression recognition. Feature extraction methods can be divided into two categories (Tian et al., 2005): feature-based methods[6] and appearance-based methods[7]. In feature-based methods, geometric properties of the face, such as the location and shape of facial components (eyes, nose, mouth, and eyebrows), are important; the components are extracted into a feature vector that represents the geometry of the face. Experimental results have shown that such features are not always reliably detected, for reasons that include image quality and lighting. Appearance-based methods instead attend to the skin texture and extract features from the wrinkles and furrows that appear in different parts of the face during emotional states. An appearance-based method can operate on the entire face or on a specific region of it, for example using optical flow[8] to compute the movement of facial muscles (Tian et al., 2002); this approach is suitable for low-quality images. Some work in this field combines appearance-based and feature-based methods (hybrid methods) for feature extraction. The extracted features serve as input to a classifier, whose output is generally one of the six basic states defined by Ekman (Ekman, Friesen, 1987), i.e. happiness, sadness, surprise, anger, fear, and disgust (Figure 1-2).
Ekman and Friesen also defined a set of action units (AU[9]) by decomposing each facial movement into a series of elementary movements. In this approach, the movements of the facial parts are tracked, their changes are categorized in the Facial Action Coding System (FACS[10]), and the emotions are interpreted. The purpose of the post-processing stage is to improve recognition accuracy.
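The sequential block structure described above can be sketched schematically as follows. This is a hypothetical illustration, not the thesis's implementation: the normalization, the raw-pixel features, and the nearest-prototype classifier are all simplified stand-ins.

```python
import numpy as np

def preprocess(img):
    # pre-processing stand-in: brightness normalization (zero mean, unit variance)
    img = img.astype(np.float64)
    return (img - img.mean()) / (img.std() + 1e-8)

def extract_features(img):
    # appearance-based stand-in: raw pixel intensities as the feature vector
    return img.ravel()

def classify(features, prototypes):
    # nearest-prototype classifier over labeled feature vectors
    labels = list(prototypes)
    dists = [np.linalg.norm(features - prototypes[lbl]) for lbl in labels]
    return labels[int(np.argmin(dists))]

# Toy usage: two 2x2 "faces" acting as prototypes for two of Ekman's states
rng = np.random.default_rng(0)
happy = preprocess(rng.random((2, 2)))
sad = preprocess(rng.random((2, 2)))
protos = {"happiness": extract_features(happy), "sadness": extract_features(sad)}
print(classify(extract_features(happy), protos))  # prints "happiness"
```

A real system would replace each block with the techniques surveyed in this chapter: face detection and alignment in pre-processing, geometric or appearance features (e.g. optical flow, eigenfaces) in extraction, and a trained classifier over Ekman's six states.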

1-4 Objectives of this research and thesis outline

Most studies of facial expressions have been conducted on databases in which the subjects posed their emotional expressions. Only a few studies in computer vision have addressed the detection of very subtle, spontaneous emotions (Pfister et al., 2011); the subtle movements that occur on people's faces when they feel such emotions are not visible to the naked eye. The purpose of this research is therefore to use the Eulerian video magnification method to reveal the subtle facial movements that accompany an emotion, in order to detect subtle human emotional states. We then determine the type of emotional state from the resulting data using the eigenfaces method[11] and evaluate the effectiveness of Eulerian video magnification. Clearly, datasets of subtle (micro-) expressions[12] are needed to obtain accurate and practical results; therefore, the data collected by Li et al., 2013 are used.
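The core idea of Eulerian magnification can be illustrated on a single pixel's intensity over time: band-pass the temporal signal around the motion's frequency, amplify the filtered component by a factor alpha, and add it back. The sketch below is a simplified, hypothetical illustration (FFT band-pass on one time series); the full method also applies a spatial pyramid decomposition, which is omitted here.

```python
import numpy as np

def magnify_temporal(signal, fps, f_lo, f_hi, alpha):
    """Eulerian-style magnification of one pixel's time series: band-pass
    the temporal frequencies in [f_lo, f_hi] Hz via FFT, amplify the band
    by alpha, and add it back to the original signal."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fps)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    filtered = np.fft.irfft(spectrum * band, n)
    return signal + alpha * filtered

fps = 30.0
t = np.arange(90) / fps
# a subtle 2 Hz oscillation riding on a constant intensity of 100
subtle = 100.0 + 0.5 * np.sin(2 * np.pi * 2.0 * t)
magnified = magnify_temporal(subtle, fps, f_lo=1.0, f_hi=3.0, alpha=10.0)
# the oscillation's amplitude grows roughly (1 + alpha)-fold
print(np.ptp(magnified) > 5 * np.ptp(subtle))  # prints True
```

The choice of pass band selects which motions are amplified (here, anything between 1 and 3 Hz), while the constant component (the 0 Hz term) is left untouched, so the overall brightness of the frame is preserved.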


The remainder of the thesis is organized as follows. Chapter 2 describes the system for recognizing facial emotional states. Chapter 3 reviews research methods and the background of facial emotional state recognition, and introduces and explains the relevant databases.

In chapter 4, the eigenfaces method is analyzed.

In chapter 5, the Eulerian video magnification method is described, and chapter 6 presents the proposed method and experimental results.

  • Contents & References of Detecting subtle facial expressions using the Eulerian motion magnification method

List:

List of symbols
List of tables
List of figures

Chapter One: Introduction
1-1 Preface
1-2 Statement of the problem of detecting changes in the image
1-3 Statement of the problem of recognizing facial emotional states
1-4 Objectives of this research and thesis outline

Chapter Two: The system for recognizing facial emotional states
2-1 Factors affecting facial emotion recognition
2-2 Classification of facial expression analysis systems
2-3 Methodology for detecting emotional states
2-4 Face detection and pre-processing operations
2-5 Extraction of emotional features
2-5-1 Methods based on geometric features
2-5-2 Methods based on appearance
2-6 Classification of emotions
2-6-1 Judgment-based methods
2-6-2 Sign-based methods

Chapter Three: Research methods and background of facial emotional state recognition
3-1 Facial features
3-2 Analysis of facial expressions
3-3 Models for recognizing facial emotional states
3-4 Review of past research
3-4-1 Recognition of facial emotional states based on action units in the FACS system
3-4-2 Recognition of facial emotional states based on optical flow
3-4-3 Recognition of facial emotional states based on eigenfaces and PCA
3-4-4 Recognition of facial emotional states based on FCP
3-4-5 Recognition of facial emotional states using various other methods
3-4-6 Overview of 3D facial expression recognition
3-4-7 Overview of subtle facial expressions
3-5 Databases
3-5-1 Cohn-Kanade database
3-5-2 AR database
3-5-3 MMI facial expression database
3-5-4 Spontaneous emotion database
3-5-5 Japanese Female Facial Expression database (JAFFE)
3-5-6 FG-NET emotion and gesture recognition database
3-5-7 CMU AMP facial expression database
3-5-8 3D facial expression database
3-5-9 Spontaneous micro-expression database (SMIC)

Chapter Four: Recognition of facial emotional states by the eigenfaces method
4-1 Eigenfaces
4-2 Overview of a face recognition system based on eigenfaces
4-3 Computing eigenfaces
4-4 Dimensionality reduction in appearance-based methods
4-5 Principal component analysis
4-6 Computing eigenvalues and eigenvectors in the eigenfaces method

Chapter Five: Eulerian video magnification for revealing subtle changes in the world
5-1 Eulerian video magnification
5-2 Multi-scale analysis
5-3 Sensitivity to noise
5-4 Comparison of the Eulerian and Lagrangian methods
5-5 Error analysis of the Eulerian and Lagrangian methods
5-5-1 Error analysis in the noiseless case
5-5-2 Error analysis in the noisy case
5-6 Conclusion

Chapter Six: The proposed method
6-1 Overview of the research
6-2 Using eigenfaces to recognize emotional states
6-3 Detecting subtle emotional states using Eulerian video magnification and the eigenfaces method
6-3-1 Detecting facial changes when subtle emotional states occur
6-3-2 Recognizing subtle facial emotional states (one positive and one negative state per person)
6-3-3 Recognizing subtle facial expressions (several positive and negative expressions per person)
6-4 Summary

References
Latin references
Persian references
English abstract

     

     

    Source:

    Latin references

Khatri, N., Shah, H., Patel, A., Facial Expression Recognition: A Survey, International Journal of Computer Science and Information Technologies (IJCSIT), Vol. 5, No. 1, pp. 149-152, 2014.

Chibelushi, C., Bourel, F., Facial Expression Recognition: A Brief Tutorial Overview, Staffordshire University, On-Line Compendium of Computer Vision, Vol. 9, 2003.

Tian, Y., Kanade, T., Cohn, J. F., Handbook of Face Recognition, Chapter 11: Facial Expression Analysis, Springer, New York, NY, USA, 2005.

Tian, Y., Kanade, T., Cohn, J. F., Evaluation of Gabor-Wavelet-Based Facial Action Unit Recognition in Image Sequences of Increasing Complexity, Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, Washington, DC, USA, pp. 229-234, 2002.

Ekman, P., Friesen, W. V., Facial Action Coding System (FACS), Consulting Psychologists Press, Palo Alto, 1978.

Martinez, A., Du, S., A Model of the Perception of Facial Expressions of Emotion by Humans: Research Overview and Perspectives, Journal of Machine Learning Research, Vol. 13, No. 1, pp. 1589-1608, 2012.

Pfister, T., Li, X., Huang, X., Zhao, G., Pietikäinen, M., Recognising Spontaneous Facial Micro-expressions, IEEE International Conference on Computer Vision (ICCV), Barcelona, pp. 1449-1456, 2011.

Li, X., Pfister, T., Huang, X., Zhao, G., Pietikäinen, M., A Spontaneous Micro-expression Database: Inducement, Collection and Baseline, 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Shanghai, pp. 1-6, 2013.

Bettadapura, V., Face Expression Recognition and Analysis: The State of the Art, Computer Vision and Pattern Recognition, pp. 10-15, 2012.

Fernandes, S. L., Josemin Bala, G., A Comparative Study on ICA and LPP Based Face Recognition under Varying Illuminations and Facial Expressions, International Conference on Signal Processing, Image Processing and Pattern Recognition (ICSIPR), pp. 122-126, 2013.

Zhang, S., Zhao, X., Lei, B., Facial Expression Recognition Based on Local Binary Patterns and Local Fisher Discriminant Analysis, WSEAS Transactions on Signal Processing, Vol. 8, No. 1, pp. 21-31, 2012.

Hong, J. W., Song, K., Facial Expression Recognition under Illumination Variation, IEEE Workshop on Advanced Robotics and Its Social Impacts, pp. 1-7, 2007.

Mistry, J., Goyani, M. M., A Literature Survey on Facial Expression Recognition Using Global Features, International Journal of Engineering and Advanced Technology (IJEAT), Vol. 2, No. 4, pp. 653-657, 2013.

Eisert, P., Girod, B., Analyzing Facial Expressions for Virtual Conferencing, IEEE Computer Graphics & Applications, Vol. 18, No. 5, pp. 70-78, 1998.

Zhang, Z., Feature-Based Facial Expression Recognition: Sensitivity Analysis and Experiments with a Multi-Layer Perceptron, International Journal of Pattern Recognition and Artificial Intelligence, Vol. 13, No. 6, pp. 893-911, 1999.

Steffens, J., Elagin, E., Neven, H., PersonSpotter: Fast and Robust System for Human Detection, Tracking and Recognition, 3rd IEEE International Conference on Automatic Face and Gesture Recognition, pp. 516-521, 1998.

Sinha, P., Perceiving and Recognizing Three-Dimensional Forms, Ph.D. dissertation, M.I.T., Cambridge, MA, 1995.

Anderson, K., McOwan, P. W., Robust Real-Time Face Tracker for Use in Cluttered Environments, Computer Vision and Image Understanding, Elsevier, Vol. 95, No. 2, pp. 184-200, 2004.

Li, H., Roivainen, P., Forchheimer, R., 3-D Motion Estimation in Model-Based Facial Image Coding, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 6, pp. 545-555, 1993.

Terzopoulos, D., Waters, K., Analysis and Synthesis of Facial Image Sequences Using Physical and Anatomical Models, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 6, pp. 569-579, 1993.

Essa, I., Analysis, Interpretation, and Synthesis of Facial Expressions.
