System ID U0026-0208201612010000
Title (Chinese) 實現基於九大痛苦因子之表情與動作理解系統
Title (English) Understanding System on Facial Expression and Action for SUFFERING Factors
University National Cheng Kung University
Department (Chinese) 電機工程學系
Department (English) Department of Electrical Engineering
Academic Year 104 (2015-2016)
Semester 2
Year of Publication 105 (2016)
Author (Chinese) 邱正平
Author (English) Jeng-Ping Chiu
Student ID N26031403
Degree Master's
Language English
Pages 51
Committee Advisor - 王駿發
Committee Member - 吳宗憲
Committee Member - 蔡安朝
Committee Member - 王家慶
Committee Member - 官大文
Keywords SUFFERING factors, Suffering Unit (SU), Facial Expression Recognition, Action Detection, HC-KNN, AMASW
Abstract (Chinese) In recent years, the application of and demand for intelligent human-machine interfaces have grown across many fields. Understanding human emotion is no longer confined to text, speech, or direct observation. With continual advances in image recognition, facial expression recognition and action recognition techniques have been widely applied in home robotics, surveillance equipment, behavior analysis, and related fields.
This thesis proposes an understanding system for the nine SUFFERING factors, attempting to interpret deeper negative emotions. Since human emotion cannot be expressed by facial expression or action alone, the system combines facial expression recognition with action detection and proposes the Suffering Unit (SU), distinct from the Action Unit (AU), composed of facial expression units and action units. After a Kinect v2 captures a whole-body image of the subject, the system performs facial expression recognition and action detection and classifies the behavior into one of the nine SUFFERING factor categories in real time. For facial expression recognition, this thesis proposes Hierarchy-Coherence K Nearest Neighbor (HC-KNN), an improved nearest-neighbor algorithm that computes the coherence of the training samples to raise recognition performance. For action detection, this thesis proposes the Average Moving Action Status Window (AMASW) architecture. With this understanding system, we can recognize 19 kinds of SUs covering facial expressions and actions and map them to the nine SUFFERING factors. Experimental results show that the system recognizes facial expressions and detects actions effectively, with recognition rates of 87.74% and 90.81%, respectively.
Abstract (English) In recent years, the application of and demand for intelligent human-machine interfaces have increased steadily. Techniques for understanding human emotion are no longer restricted to analyzing text, voice, observation, and so on. With the rapid improvement of pattern recognition, facial expression and activity recognition technologies have been widely used in home care robotics, monitoring equipment, and human behavior analysis.
This thesis proposes an understanding system for SUFFERING factors that interprets negative emotions, since human feelings cannot be represented by facial expressions or actions alone. The proposed system combines facial expression recognition and action detection. In contrast to the Action Unit (AU), this work proposes a novel Suffering Unit (SU), which consists of facial and posture action units. After capturing the whole body with a Kinect v2, the system recognizes both facial expression and action and outputs the results in real time. The proposed Hierarchy-Coherence K Nearest Neighbor (HC-KNN) calculates the coherence of the training data and improves on plain KNN for facial expression recognition. An Average Moving Action Status Window (AMASW) is also proposed to build the action detection system. With the proposed understanding system, we can identify SUFFERING factors through 19 kinds of SUs covering facial expressions and actions. The experimental results demonstrate the effectiveness of the proposed system: the recognition rates reach 87.74% for facial expression and 90.81% for action.
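
The two abstracts name HC-KNN and AMASW but do not spell them out. As a hedged illustration only, the Python sketch below shows one plausible reading of "calculating the coherence of the training data": each training sample is scored by how many of its own nearest training neighbours share its label, and those scores then weight the KNN votes. The function names, feature shapes, and the coherence definition itself are assumptions for illustration, not the thesis's actual algorithm.

    import numpy as np

    def coherence_scores(X_train, y_train, k=5):
        # Hypothetical "coherence": the fraction of each training sample's
        # k nearest training neighbours that share its label.
        d = np.linalg.norm(X_train[:, None, :] - X_train[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)           # a sample is never its own neighbour
        idx = np.argsort(d, axis=1)[:, :k]    # k nearest training neighbours per sample
        return (y_train[idx] == y_train[:, None]).mean(axis=1)

    def coherence_weighted_knn(x, X_train, y_train, coh, k=5):
        # Plain KNN, except each neighbour's vote is weighted by its coherence,
        # so samples that disagree with their own neighbourhood count for less.
        d = np.linalg.norm(X_train - x, axis=1)
        votes = {}
        for i in np.argsort(d)[:k]:
            votes[y_train[i]] = votes.get(y_train[i], 0.0) + coh[i]
        return max(votes, key=votes.get)

With PCA-reduced face features X (one row per training image) and seven expression labels y, the coherence scores would be precomputed once (coh = coherence_scores(X, y)) and coherence_weighted_knn(feature, X, y, coh) called per frame.

AMASW is likewise only named. A minimal sketch, assuming it smooths a per-frame binary action-status flag (e.g. a joint-position criterion computed from Kinect v2 skeleton data) with a fixed-length moving-average window, so that an action fires only after being held for most of the window rather than on single-frame jitter; the window size and threshold below are invented placeholders:

    from collections import deque

    class ActionStatusWindow:
        # Hypothetical moving-average window over per-frame action flags.
        def __init__(self, window_size=15, threshold=0.7):
            self.window = deque(maxlen=window_size)
            self.threshold = threshold

        def update(self, status):
            # status: 1 if this frame satisfies the action criterion, else 0.
            # Returns True once the window is full and the average of its
            # flags reaches the threshold, i.e. the action has persisted.
            self.window.append(1 if status else 0)
            return (len(self.window) == self.window.maxlen
                    and sum(self.window) / len(self.window) >= self.threshold)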
Table of Contents
Abstract (Chinese) I
Abstract II
Acknowledgements IV
Contents V
Table List VII
Figure List VIII
Chapter 1 Introduction 1
1.1 Background 1
1.2 Motivation 2
1.3 Thesis Objective 3
1.4 Thesis Organization 3
Chapter 2 Related Works 4
2.1 The Survey of Facial Expression Recognition 4
2.1.1 Approaches of Facial Expression Recognition 4
2.1.2 Color Space 6
2.2 The Survey of Action Detection 7
2.2.1 Approaches of Action Detection 7
2.2.2 Layer of Action Detection 7
2.2.3 Sensor of Action Detection 8
Chapter 3 Proposed Understanding System for SUFFERING Factors 10
3.1 SUFFERING Factors and Proposed Suffering Unit (SU) 10
3.2 Proposed Methods of Facial Expression Recognition 14
3.2.1 Detector for Salient Patches Registration 15
3.2.2 Gray Scale & Normalization 17
3.2.3 Adjusting Image Tilt and Limiting Image Rotation 18
3.2.4 Reducing Dimension Based on PCA 19
3.2.5 Proposed Hierarchy-Coherence K Nearest Neighbor 22
3.3 Proposed Methods of Action Detection 27
3.3.1 Related Position Feature for Action 28
3.3.2 Average Moving Action Status Window (AMASW) 29
3.3.3 Criteria of Action Status 30
3.3.4 Detectors for Actions 33
3.3.5 Fist Detection System 34
3.4 SUFFERING Voting Tree 36
Chapter 4 Building Database for SUFFERING Factors 38
4.1 SUFFERING Video Environment 41
4.2 Data Analysis 41
Chapter 5 Experimental Results and System Application 42
5.1 Experimental Results for Facial Expression (SU1~SU7) 42
5.2 Experimental Results for Action Detection (SU8~SU19) 45
Chapter 6 Conclusion and Future Work 47
Chapter 7 References 48

Full-Text Use Permission
  • On-campus browsing/printing of the electronic full text is authorized, to be made public from 2030-08-01.

