Figure 3.1 below shows the overall workflow of the face emotion recognition system; OpenCV will be used throughout the whole process.
Figure 3.1 General flow of the system
This project will mainly focus on face detection and feature extraction. A single webcam mounted on a laptop will be used, so that image frames can be extracted from the video stream. After an image frame is obtained, the system proceeds to the face detection stage, which locates the human face within that frame.
Since faces in the image frame will vary in size, skin detection will be applied first in order to reduce the computation time needed to locate the face.
To detect the face and facial parts, four data sets were created: human faces, eyes, noses and mouths. First, Haar-like features will be computed over these data sets; the features will then be learned with the AdaBoost algorithm, and the resulting classifiers will be applied in a cascade structure to detect the human face and facial parts. Each database of face and facial parts will include both positive and negative images. For example, the face image database will contain 500 positive images (with a face) and 500 negative images (without a face), and the same applies to the other parts. All face and facial-part detection will be performed within the skin-pixel region.
To further improve face-finding accuracy, additional conditions will be imposed, because a region such as a human ear can otherwise be wrongly classified as the nose. The conditions are that the nose and mouth must be located below the eyes, and that the distance between the eyes must fall within an appropriate range. Figure 3.2 shows the process flow of this stage.
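These geometric conditions are simple to express as a plausibility check on the detected centre points. The distance bounds below are illustrative placeholders, since the text does not specify the "appropriate range".

```python
def plausible_layout(left_eye, right_eye, nose, mouth,
                     min_eye_dist=30, max_eye_dist=200):
    """Check the geometric conditions on detected facial parts.

    Each argument is an (x, y) centre point in image coordinates
    (y grows downwards). The eye-distance bounds are assumed values.
    """
    eye_y = max(left_eye[1], right_eye[1])
    eye_dist = abs(right_eye[0] - left_eye[0])
    below_eyes = nose[1] > eye_y and mouth[1] > eye_y
    return below_eyes and min_eye_dist <= eye_dist <= max_eye_dist
```

A candidate "nose" detected beside the eyes (for example, on an ear) fails the `below_eyes` test and is discarded.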
Figure 3.2 Diagram of pre-processing and face detection stage
Figure 3.3 shows the desired output, in which the face, eyes, nose and mouth are each drawn with a different-coloured region of interest.
Figure 3.3 ROI with different colour
After this output is obtained, the system proceeds to the next stage, feature extraction.
3.3 Feature Extraction
Once the face and facial parts have been detected successfully, facial features need to be identified from the previous output. Based on the Literature Review, the proposed system uses a combination of the geometric and appearance approaches to extract features, rather than a single approach. From the face detection stage in Section 3.2, the system is able to draw the Regions of Interest (ROI) of the eyebrows, eyes, nose, mouth and face; the features will then be extracted from these ROIs.
3.3.1 Geometric approach
Using the facial areas defined in the previous process, the mouth image, for example, is first converted to a binary image and then, in a separate step, processed with Canny edge detection. The two results are combined, and Figure 3.4c shows the output. The two lips can reliably be obtained for the angry, sad, surprised, neutral and disgusted emotions, while fear and happiness can be distinguished from the rest because the teeth often appear, so more Canny edge pixels are detected in the mouth area.
Figure 3.4 Process of detecting lips; a) binary output b) Canny edge detection output c) Combination of binary and canny edge
Besides the mouth, the eye and eyebrow regions will be identified using requirements such as that the eyes must lie above 60% of the face height, measured from the bottom border. The eye and eyebrow region will then be further separated into a left side and a right side. The next step we need to...