Keywords
Facial Expression Detection, Feature Extraction, Expression Classification.
INTRODUCTION
Over the past decades, human-computer interaction together with computer vision has been an important field of computer science. Direct communication between the computer and human beings is a matter of concern, and much research has been conducted on improving and developing this interaction. One of the significant factors contributing to it is the computer's ability to recognize human facial expressions [2]. The verbal part of a message contributes only 7% of its meaning as a whole; the vocal part contributes 38%, while facial movement and expression contribute 55% of the effect of the message, so one can say that the face makes the major contribution to human communication [1]. Though much progress has been made, recognizing facial expressions with high accuracy remains difficult due to their subtlety, complexity and variability.
Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. There are two common approaches to extracting facial features: geometric feature-based methods and appearance-based methods. Geometric features represent the shape and locations of facial components, which are extracted to form a feature vector describing the face geometry. With appearance-based methods, image filters such as Gabor wavelets are applied either to the whole face or to specific face regions to capture the appearance changes of the face.
Challenges in Facial Expression Recognition Systems
Facial expression recognition has always been a very challenging task for researchers because of several difficulties and limitations. The challenges can be attributed to the following factors:
Pose: The image of a face changes with the relative camera-face position, such as frontal versus non-frontal. The face may be seen from a different angle, so facial features such as an eye or the nose may become partially or wholly occluded. To overcome this challenge, good pre-processing techniques that are invariant to translation, rotation and scaling should be applied.
Occlusion: Faces may be partially occluded by other objects such as masks, hair or glasses, or by other faces in the image. Extracting expression features from such an image is more complex.
Illumination: If images are taken under different lighting conditions, expression features can be detected inaccurately and the recognition rate of facial expressions drops; this factor typically makes feature extraction more difficult. To compensate for illumination variation in an input image, preprocessing methods such as DCT normalization, histogram equalization and rank normalization can be applied before feature extraction [3].
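As an illustrative sketch (not drawn from the surveyed papers), global histogram equalization, one of the illumination-normalization methods mentioned above, can be implemented directly for an 8-bit greyscale image:

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit greyscale image:
    remap grey levels so the cumulative distribution becomes roughly uniform."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first non-zero bin of the cumulative histogram
    # classic transfer function: stretch the CDF to cover the full 0..255 range
    span = max(cdf[-1] - cdf_min, 1)  # guard against constant images
    lut = np.round((cdf - cdf_min) / span * 255.0).astype(np.uint8)
    return lut[img]
```

In practice one would call an equivalent library routine; the point here is only the shape of the transform, which maps a dark, low-contrast face image onto the full intensity range.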
LITERATURE SURVEY
As per the survey, the various methods for face detection can be grouped into four categories: knowledge-based methods, feature-invariant approaches, template-matching methods and appearance-based methods [16]. Knowledge-based methods are rule-based: they try to capture human knowledge of faces and translate it into a set of rules. Feature-invariant approaches look for features that remain invariant for face recognition; the idea is to overcome the limits of our instinctive knowledge of faces. Template-matching methods compare input images with stored patterns of faces or facial features.
Appearance-based methods rely on techniques from statistical or probabilistic analysis and machine learning to find the relevant characteristics of face images, and in general they have shown superior performance to the others. Statistical methods provide a way of estimating missing or uncertain information. Statistics works on a large set of data, which we want to analyse in terms of the relationships between the individual points in the set. PCA is a way of identifying patterns in the data and expressing the data so as to highlight their similarities and differences. Another main advantage of PCA is data compression: it reduces the number of dimensions without much loss of information. Appearance-based face recognition methods include PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis) and ICA (Independent Component Analysis) [16].
VARIOUS FACIAL EXPRESSION RECOGNITION METHODS
Facial expression recognition consists of three main steps. In the first step, a face image is acquired, the face region is detected in the image, and the input is pre-processed to obtain an image with normalised size and intensity. Next, expression features are extracted from the observed facial image or image sequence. Finally, the extracted features are given to a classifier, which provides the recognized expression as output. The block diagram of a facial expression recognition system is given in Fig. 1.
The input image can be represented in various ways. If the face image is represented as a whole unit, the representation is called holistic; if it is represented as a set of features, it is called analytic. The face can also be represented as a combination of the two, which is called a hybrid approach.
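The three-stage pipeline described above can be sketched as follows; the pre-processing, feature and classifier choices here are deliberately toy stand-ins for illustration, not any specific published method:

```python
import numpy as np

def preprocess(image):
    """Stand-in for face detection + normalisation: crop to a square
    region and scale intensities to [0, 1]."""
    s = min(image.shape)
    face = image[:s, :s].astype(float)
    return (face - face.min()) / (np.ptp(face) + 1e-9)

def extract_features(face):
    """Toy appearance feature: concatenated row and column intensity profiles."""
    return np.concatenate([face.mean(axis=0), face.mean(axis=1)])

def classify(features, prototypes):
    """Nearest-prototype classifier over labelled mean feature vectors."""
    labels = list(prototypes)
    dists = [np.linalg.norm(features - prototypes[lab]) for lab in labels]
    return labels[int(np.argmin(dists))]
```

A real system would replace each stage with one of the methods surveyed below, but the data flow (image -> normalised face -> feature vector -> expression label) is the same.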
A. Face Detection
Face detection is the process of localizing and extracting the face region from the background. It involves segmentation, extraction and verification of faces, as well as of facial features, from an uncontrolled background. Two different settings are followed: emotion detection from still images and emotion detection from images acquired from a video [1].
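A minimal way to localise a face region, assuming a stored face template of known size, is exhaustive template matching (real detectors, e.g. cascade or neural approaches, are far more robust; this only illustrates the localisation idea):

```python
import numpy as np

def locate_face(image, template):
    """Slide a face template over the image and return the top-left corner
    of the best-matching window (minimum sum of squared differences)."""
    H, W = image.shape
    h, w = template.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            ssd = np.sum((image[y:y + h, x:x + w] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos
```

The returned corner, together with the template size, defines the face region handed to the feature-extraction stage.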
B. Facial Feature Extraction
Facial feature extraction is the process of translating the input data into a set of features. Feature extraction helps reduce a huge amount of data to a relatively small set, which is computationally faster. It is affected by many complications, such as differences between pictures of the same facial expression, the direction of the light during imaging, and variations in posture, size and angle. Even for the same person, images taken in different surroundings may be unlike each other [1]. Two types of features are usually used to describe facial expressions: geometric features and appearance features.
1) Geometric Features: These features measure the displacements of certain parts of the face, such as the eyebrows or mouth corners. The facial components or facial feature points are extracted to form a feature vector that represents the face geometry. The premise of geometry-based methods is that expressions affect the relative position and size of various features, so by measuring the movement of certain facial points the underlying facial expression can be determined. The task of geometric feature measurement is usually connected with face region analysis, especially finding and tracking crucial points in the face region.
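A sketch of geometric feature measurement, assuming landmark points have already been located (the landmark names here are hypothetical, chosen only for illustration):

```python
import numpy as np

def geometric_features(landmarks):
    """Given named (x, y) facial points, measure distances that change
    with expression. The landmark names are illustrative assumptions."""
    p = {k: np.asarray(v, float) for k, v in landmarks.items()}
    return {
        "mouth_width": np.linalg.norm(p["mouth_left"] - p["mouth_right"]),
        "mouth_open":  np.linalg.norm(p["lip_top"] - p["lip_bottom"]),
        "brow_raise":  np.linalg.norm(p["brow_left"] - p["eye_left"]),
    }
```

Tracking these distances over a sequence, relative to a neutral frame, gives the displacement measurements the text describes.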
2) Appearance Features: These features describe the changes in face texture when a particular action is performed, such as wrinkles and bulges on the forehead and in the regions surrounding the mouth and eyes. Image filters are applied either to the whole face or to specific regions of the face image to extract a feature vector. As per the study, a wide range of appearance-based algorithms exists, including Principal Component Analysis (PCA), Independent Component Analysis (ICA), Locality Preserving Projections (LPP), Linear Discriminant Analysis (LDA), Gabor wavelets and Local Binary Patterns (LBP).
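Among the appearance descriptors listed, the basic 3x3 Local Binary Pattern is simple enough to sketch in full: each pixel's eight neighbours are thresholded against the centre, the resulting bits form a code in 0..255, and the histogram of codes over a region serves as the feature vector:

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: threshold the 8 neighbours of each interior pixel
    against its centre and pack the comparison bits into a code 0..255."""
    img = img.astype(int)
    c = img[1:-1, 1:-1]
    # neighbours in clockwise order starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code += (n >= c).astype(int) << bit
    return code

def lbp_histogram(img, bins=256):
    """Feature vector: normalised histogram of LBP codes over the region."""
    h = np.bincount(lbp_image(img).ravel(), minlength=bins).astype(float)
    return h / h.sum()
```

In the surveyed LBP-based systems the face is typically divided into sub-regions and the per-region histograms are concatenated, so that the feature keeps some spatial information.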
a. Principal Component Analysis (PCA): The main idea of PCA is to find the vectors that best account for the distribution of face images within the entire image space. These vectors define the subspace of face images, which we call face space. In this approach the faces are represented as a linear combination of weighted eigenvectors called eigenfaces, which are nothing but the principal components of a distribution of faces. The main idea of using PCA for face recognition is to express the large 1-D vector of pixels constructed from a 2-D facial image in terms of the compact principal components of the feature space. This is called eigenspace projection. The eigenspace is calculated by identifying the eigenvectors of the covariance matrix derived from a set of facial images (vectors).
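The eigenspace projection described above can be sketched with a singular value decomposition, which yields the same eigenvectors as diagonalising the covariance matrix:

```python
import numpy as np

def fit_eigenfaces(faces, k):
    """faces: (n_samples, n_pixels) matrix of flattened face images.
    Returns the mean face and the top-k principal components (eigenfaces)."""
    mean = faces.mean(axis=0)
    centred = faces - mean
    # right singular vectors of the centred data are the eigenvectors
    # of its covariance matrix
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, eigenfaces):
    """Eigenspace projection: weights of a face in the face space."""
    return eigenfaces @ (face - mean)
```

A face is then compared or classified via its k weights rather than its raw pixels, which is the data-compression benefit mentioned earlier.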
b. Kernel PCA: This is also an eigenvector-based method, but it uses a nonlinear mapping: PCA is performed in a high-dimensional feature space induced by a kernel function, so that structure which is nonlinear in the original pixel space can be captured.
c. Linear Discriminant Analysis (LDA): LDA and the related Fisher's linear discriminant are methods used in statistics, pattern recognition and machine learning to find a linear combination of features that separates two or more classes of objects or events. The resulting combination may be used as a linear classifier. This is also an eigenvector-based method, and it is a supervised linear map.
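For two classes, Fisher's linear discriminant has a closed form: the projection direction is the inverse within-class scatter matrix applied to the difference of the class means. A sketch (the small ridge term is an assumption added only for numerical stability):

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher discriminant: w proportional to Sw^-1 (m1 - m0),
    where Sw is the pooled within-class scatter matrix."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(m0)), m1 - m0)
    return w / np.linalg.norm(w)
```

Projecting samples onto w maximises between-class separation relative to within-class spread, which is exactly the supervised property that distinguishes LDA from PCA.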
d. Independent Component Analysis (ICA): ICA is a computational method for separating a multivariate signal into additive subcomponents that are non-Gaussian and statistically independent of each other. ICA is a special case of blind source separation.
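A sketch of the idea behind ICA, using a one-unit FastICA-style fixed-point iteration on whitened data (a simplified version of the standard algorithm, shown here for two mixed sources):

```python
import numpy as np

def whiten(X):
    """Centre the data and transform it so its covariance is the identity."""
    Xc = X - X.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(Xc, rowvar=False))
    return Xc @ E @ np.diag(1.0 / np.sqrt(d)) @ E.T

def fastica_one_unit(Xw, iters=200, seed=0):
    """One-unit FastICA fixed-point iteration (tanh nonlinearity) on whitened
    data: finds a direction whose projection is maximally non-Gaussian."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=Xw.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        s = Xw @ w
        g, g_prime = np.tanh(s), 1.0 - np.tanh(s) ** 2
        w_new = (Xw * g[:, None]).mean(axis=0) - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < 1e-10:  # converged up to sign
            return w_new
        w = w_new
    return w
```

Deflation (repeating the iteration in the subspace orthogonal to already-found directions) recovers the remaining independent components.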
e. Self-Organizing Map (SOM): This is a type of artificial neural network (ANN) that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional) discretized representation of the input space of the training samples, called a map. SOMs differ from other ANNs in that they use a neighborhood function to preserve the topological properties of the input space.
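A minimal SOM sketch showing the two ingredients named above, a best-matching unit and a neighborhood function that shrinks over training (all hyperparameters here are arbitrary illustrative choices):

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=300, lr=0.5, sigma=1.5, seed=0):
    """Minimal Self-Organizing Map. Each grid node holds a weight vector;
    at every step the best-matching unit (BMU) and, via a Gaussian
    neighborhood function, its grid neighbours move toward a random sample."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        frac = t / epochs
        x = data[rng.integers(len(data))]
        dist = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(dist.argmin(), dist.shape)
        # neighborhood radius and learning rate both decay over training;
        # the neighborhood coupling is what preserves input-space topology
        rad = sigma * (1.0 - frac) + 0.1
        g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                   / (2 * rad ** 2))
        weights += lr * (1.0 - frac) * g[..., None] * (x - weights)
    return weights
```

After training, each input (e.g. an expression feature vector) is summarised by the grid position of its BMU, giving the discretized map representation.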
f. Gabor Wavelet Transform (GWT): The Gabor transform, named after Dennis Gabor, is a special case of the short-time Fourier transform and is a biologically motivated linear filter. The Gabor transform of a signal x(t) is defined by the formula

Gx(τ, ω) = ∫ x(t) e^(−π(t−τ)²) e^(−jωt) dt,

i.e. the Fourier transform of x(t) seen through a Gaussian window centred at time τ.
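A direct discretisation of the Gabor transform, Gx(tau, omega) = integral of x(t) exp(-pi (t - tau)^2) exp(-j omega t) dt, evaluated at a single (tau, omega) point:

```python
import numpy as np

def gabor_transform(x, t, tau, omega):
    """Discrete approximation of the Gabor transform: a short-time Fourier
    transform of the samples x over the grid t, using a Gaussian window
    exp(-pi (t - tau)^2) centred at time tau."""
    dt = t[1] - t[0]
    window = np.exp(-np.pi * (t - tau) ** 2)
    return np.sum(x * window * np.exp(-1j * omega * t)) * dt
```

In the 2-D image setting, the analogous Gabor filters (Gaussian-windowed sinusoids at several scales and orientations) are convolved with the face to produce the appearance features used by the GWT-based methods cited here.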
C. Expression Classification
The next functional block is the expression classification block, which uses the features extracted by the previous block and tries to classify them based on the similarities between the feature data. Classifiers such as artificial neural networks and linear classifiers are generally used for this [1]. The set of features extracted from the face region describes the facial expression and is used in the classification stage. Classification requires supervised training, so the training set should consist of labelled data. Once the classifier is trained, it can recognize input images by assigning them a particular class label.
Support Vector Machine (SVM): SVMs are linear classifiers that maximize the margin between the decision hyperplane and the examples in the training set; such an optimal hyperplane should minimize the classification error on unseen test patterns. This classifier was first applied to face detection.
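A sketch of margin maximisation for a linear SVM, using Pegasos-style stochastic subgradient descent on the regularised hinge loss (a simplified variant; the unregularised bias handling is an illustrative assumption):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.1, epochs=100, seed=0):
    """Pegasos-style stochastic subgradient descent on the regularised hinge
    loss; labels y must be in {-1, +1}. The shrinking step plus the hinge
    update drive the solution toward the maximum-margin hyperplane."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b, step = 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            step += 1
            eta = 1.0 / (lam * step)
            margin = y[i] * (X[i] @ w + b)
            w *= 1.0 - eta * lam          # regularisation shrink
            if margin < 1.0:              # hinge-loss subgradient step
                w += eta * y[i] * X[i]
                b += eta * y[i]           # unregularised bias (simplification)
    return w, b
```

Multi-class expression recognition is usually handled by combining several such binary machines (one-versus-rest or one-versus-one), and kernelised variants replace the dot products for nonlinear boundaries.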
Neural Networks: Many pattern recognition problems, such as object recognition and character recognition, have been tackled successfully by neural networks, and these systems can be used for face detection in different ways. Some early research used neural networks to learn the face and non-face patterns, defining detection as a two-class problem; the real challenge there was to represent the class of "images not containing faces". Another approach is to use neural networks to find a discriminant function that classifies patterns using distance measures. Some approaches have tried to find an optimal boundary between face and non-face pictures using a constrained generative model.
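The two-class formulation (face versus non-face, or one expression versus another) can be sketched as a tiny one-hidden-layer network trained by gradient descent; the architecture and data here are toy illustrations, not any surveyed system:

```python
import numpy as np

def train_mlp(X, y, hidden=6, lr=1.0, epochs=2000, seed=0):
    """One-hidden-layer network for a two-class problem, trained with
    full-batch gradient descent on the cross-entropy loss; y in {0, 1}.
    Returns a prediction function."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    t = y.reshape(-1, 1).astype(float)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)      # hidden activations
        p = sig(h @ W2 + b2)          # predicted probability of class 1
        d2 = (p - t) / len(X)         # cross-entropy gradient w.r.t. logits
        d1 = (d2 @ W2.T) * (1.0 - h ** 2)
        W2 -= lr * h.T @ d2;  b2 -= lr * d2.sum(axis=0)
        W1 -= lr * X.T @ d1;  b1 -= lr * d1.sum(axis=0)
    return lambda Xn: (sig(np.tanh(Xn @ W1 + b1) @ W2 + b2) > 0.5).ravel()
```

For full expression recognition the single output unit is replaced by one unit per expression class with a softmax output, but the training loop is otherwise the same.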
DISCUSSION AND CONCLUSION
Developing an automated system that accomplishes facial expression recognition is difficult. Various approaches have been made towards robust facial expression recognition, applying different image detection, feature extraction, analysis and classification methods. This paper has briefly overviewed these methods. Feature extraction is an important stage of an expression recognition system, because the extracted features are used by the classification stage. Feature extraction using geometric features is more difficult because it depends on the shapes and sizes of the features, so appearance-based features are easier to extract. A list of references is provided for a more detailed understanding of the approaches described.
Figures at a glance

Figure 1
References
- Alka Gupta, M. L. Garg, "A Human Emotion Recognition System Using Supervised Self-Organizing Maps", IEEE International Conference on Computing for Sustainable Global Development (INDIACom), 2014.
- Ajit P. Gosavi, S. R. Khot, "Emotion Recognition Using Principle Component Analysis with Singular Value Decomposition", International Conference on Electronics and Communication Systems (ICECS), 2014.
- Marryam Murtaza, Muhammad Sharif, Mudassar Raza, Jamal Hussain Shah, "Analysis of Face Recognition under Varying Facial Expression: A Survey", The International Arab Journal of Information Technology, Vol. 10, No. 4, July 2014.
- Caifeng Shan, Shaogang Gong, Peter W. McOwan, "Facial Expression Recognition Based on Local Binary Patterns: A Comprehensive Study", Image and Vision Computing 27, Elsevier, 2009.
- Muzammil Abdulrahman, Tajuddeen R. Gwadabe, Fahad J. Abdu, Alaa Eleyan, "Gabor Wavelet Transform Based Facial Expression Recognition Using PCA and LBP", IEEE 22nd Signal Processing and Communications Applications Conference (SIU), 2014.
- Caifeng Shan, Shaogang Gong, Peter W. McOwan, "Robust Facial Expression Recognition Using Local Binary Patterns", IEEE International Conference on Image Processing (ICIP), Vol. 2, 2005.
- Farhan Bashar, Asif Khan, Faisal Ahmad, Md Hasanul Kabir, "Robust Facial Expression Recognition Based on Median Ternary Patterns", International Conference on Electrical Information and Computer Technology (EICT), 2013.
- Sukanya Sagarika Meher, Pallavi Maben, "Face Recognition and Facial Expression Identification Using PCA", IEEE International Advance Computing Conference (IACC), 2014.
- Anima Majumdar, Laxmidhar Behera, Venkatesh K. Subramanian, "Local Binary Pattern Based Facial Expression Recognition Using Self-Organizing Map", International Joint Conference on Neural Networks (IJCNN), July 6-11, 2014, Beijing, China.
- Nidhi N. Khatri, Zankhana H. Shah, Samip A. Patel, "Facial Expression Recognition: A Survey", International Journal of Computer Science and Information Technologies (IJCSIT), Vol. 5(1), 2014, 149-152.
- Roja Ghasemi, Maryam Ahmadi, "Facial Expression Recognition Using Facial Effective Areas and Fuzzy Logic", Iranian Conference on Intelligent Systems (ICIS), 2014.
- Divyarajsinh N. Parmar, Brijesh B. Mehta, "Face Recognition Methods & Applications", International Journal of Computer Technology and Applications (IJCTA), Vol. 4(1), 84-86.
- Mohammad Javed, Bhaskar Gupta, "Performance Comparison of Various Face Detection Techniques", International Journal of Scientific Research Engineering and Technology (IJSRET), Volume 2, Issue 1, pp. 019-0027, April 2013.
- Kwok-Wai Wong, Kin-Man Lam, Wan-Chi Siu, "An Efficient Algorithm for Human Face Detection and Facial Feature Extraction under Different Conditions", The Journal of the Pattern Recognition Society, 34 (2001), 1993-2004.
- Senthil Ragavan Valayapalayam Kittusamy, Venkatesh Chakrapani, "Facial Expressions Recognition Using Eigenspaces", Journal of Computer Science 8, 2012.
- Shamna P, Paul Augustine, Tripti C, "An Exploratory Survey on Various Face Recognition Methods Using Component Analysis", International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), Vol. 2, Issue 5, May 2013.