System ID	U0026-1408201821121300
Title (Chinese)	由表情辨識情緒之陪伴機器人
Title (English)	A Companion Robot for Emotion Recognition by Facial Expressions
University	National Cheng Kung University
Department (Chinese)	工程科學系
Department (English)	Department of Engineering Science
Academic Year	106
Semester	2
Year of Publication	107
Author (Chinese)	孫佾微
Author (English)	Yi-Wei Sun
Student ID	N96064426
Degree	Master's
Language	Chinese
Pages	79
Committee Members	王宗一, 侯廷偉, 王榮泰, 陳澤生, 吳村木
Advisor	周榮華
Keywords (Chinese)	臉部情緒辨識, 人臉偵測, 機器學習, 卷積神經網路, 陪伴機器人
Keywords (English)	facial expression recognition, face detection, machine learning, convolutional neural network, companion robot
Subject Classification
Abstract (Chinese)	This thesis presents the design and implementation of a companion robot that recognizes its user's emotions from facial-expression images and responds in different ways to comfort the user, thereby providing companionship. The companion robot was completed jointly with another student's thesis; this thesis is responsible for the robot's facial-image recognition. The robot uses a dsPIC30F4011 microcontroller to drive motors controlling ear and arm motions, plays different music or pre-recorded dialogue through a speaker, and displays animated eyes on a 4.3-inch LCD module, thereby interacting with and comforting the user.
A Haar classifier first locates the user's face, after which the robot greets the user and asks about his or her mood, enabling fast and highly accurate facial-expression recognition. Facial-expression recognition uses a convolutional neural network (CNN), a deep-learning technique, to build the facial-emotion prediction model.
This study first built a basic emotion-recognition model with the FER2013 basic-expression database to verify the recognition ability of the proposed method. The accuracy on the test set provided by FER2013 reaches 88%, showing that the robot has adequate predictive ability and can interact well with users.
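The Haar classifier mentioned in the abstract builds on rectangle features that an integral image lets us evaluate in constant time. As an illustrative sketch only (not the thesis code; `integral_image`, `rect_sum`, and `haar_two_rect` are hypothetical names), the core computation can be written in Python with NumPy:

```python
import numpy as np

def integral_image(img):
    # ii[y, x] holds the sum of img[:y, :x]; padded row/column of zeros
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    # sum of any h-by-w rectangle via four table lookups
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    # a two-rectangle Haar-like feature: left half minus right half
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

img = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 4, 4))  # 120.0, the sum of the whole image
```

A real detector, such as OpenCV's `cv2.CascadeClassifier`, evaluates thousands of such features in a boosted cascade over a sliding window.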
Abstract (English)	A companion robot is designed and implemented in this thesis. The robot can recognize its user's emotions through facial expressions. Thus, it can react to the user in different ways according to the emotions to comfort its user.
The controller of the robot is a dsPIC30F4011 microcontroller, which drives motors for arm, ear, and neck motions. It also controls a speaker that plays soothing music and pre-recorded dialogue, and a 4.3-inch LCD module that displays eye patterns expressing emotion.
First, the Haar classifier is used by the robot to detect the user's face. The robot then welcomes its user with a greeting, followed by emotion recognition through facial expressions. Afterwards, the robot comforts its user through motion and speech.
For facial-emotion recognition, a convolutional neural network (CNN), a deep-learning technique with excellent performance in image processing, was used to build the facial-emotion prediction model. The model was first trained and verified on the FER2013 basic-expression database, and its applicability was then examined on the facial expressions explored in this study. The accuracy on the test set provided by FER2013 reaches 88%, showing that the emotion-prediction model retains a certain predictive ability across individual differences.
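The CNN building blocks listed in Chapter 3 (convolutional layer, activation function, pooling layer) can be sketched in plain NumPy. This is a generic illustration of one forward-pass stage, not the model actually trained in the thesis:

```python
import numpy as np

def conv2d(x, k):
    # valid-mode 2-D cross-correlation, as used in CNN convolutional layers
    h = x.shape[0] - k.shape[0] + 1
    w = x.shape[1] - k.shape[1] + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def relu(x):
    # element-wise rectified linear activation
    return np.maximum(x, 0)

def max_pool(x, size=2):
    # non-overlapping max pooling with a size-by-size window
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((3, 3))
feat = max_pool(relu(conv2d(x, k)))
print(feat)  # [[90.]]
```

A full FER2013 model stacks several such convolution/ReLU/pooling stages, flattens the result, and feeds it through fully connected layers with dropout before a 7-way softmax over the emotion classes.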
Table of Contents	Abstract I
Extended Abstract II
Acknowledgments IX
Contents X
List of Figures XIII
List of Tables XVII
Chapter 1 Introduction 1
1.1 Preface 1
1.2 Research Motivation 3
1.3 Literature Review 3
1.3.1 Literature Review of Companion Robots 3
1.3.2 Literature Review of CNN-Based Emotion Recognition 8
1.4 Thesis Organization 19
Chapter 2 System Architecture and Hardware/Software Overview 20
2.1 Overall System Architecture 20
2.2 System Hardware 22
2.2.1 dsPIC30F4011 Microcontroller 22
2.2.2 DC Step-Down Module 24
2.2.3 DC Motors 25
2.2.4 TA7291P Motor Driver IC 27
2.2.5 PC817 Optocoupler IC 29
2.2.6 Speaker, Microphone, and Webcam 30
2.2.7 4.3-Inch HMI Touch LCD Module 32
2.3 Mechanical Design of the Robot 36
2.3.1 Internal Design of the Robot 36
2.4 Software Specifications 39
Chapter 3 Facial Emotion Recognition and Response 40
3.1 ID Recognition 40
3.1.1 Haar Classifier 41
3.2 FER2013 Database 43
3.3 Convolutional Neural Networks 44
3.3.1 Convolutional Layer 45
3.3.2 Activation Function 46
3.3.3 Pooling Layer 47
3.3.4 Fully Connected Layer 48
3.3.5 Dropout 49
3.3.6 Model Training 49
3.3.7 Emotion Prediction Results and Analysis 50
3.4 Overall Process Architecture 50
3.5 Face Recognition Program Design 53
3.5.1 Face Recognition 53
3.6 Facial Emotion Recognition Program Design 54
3.6.1 FER2013 Database Preprocessing 55
3.6.2 Building the CNN Architecture 57
3.6.3 Model Training 58
Chapter 4 Experimental Results and Discussion 60
4.1 Experimental Methods 60
4.1.1 FER2013 Test-Set Prediction 60
4.2 Results and Discussion 60
4.3 Robot Motions and On-Screen Results 66
Chapter 5 Conclusions and Suggestions 73
5.1 Conclusions 73
5.2 Suggestions 74
References 75
References	[1] https://www.101newsmedia.com/m/news/46233, April, 2018
[2] R. Aminuddin, A. Sharkey and L. Levita, “Interaction with the Paro Robot May Reduce Psychophysiological Stress Responses,” 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 593-594, 2016
[3] W.-L. Chang, S. Šabanović and L. Huber, “Use of Seal-Like Robot PARO in Sensory Group Therapy for Older Adults with Dementia,” IEEE 13th International Conference on Rehabilitation Robotics (ICORR), pp. 101-102, 2013
[4] http://tpcjournal.taipower.com.tw/article/index/id/181, November, 2017
[5] http://technews.tw/2017/01/05/bosch-startup-mayfield-robotics-announced-home-robot-kuri-at-2017-ces, January, 2017
[6] H. Ahn and M. Lee, "Is Entertainment Services of a Healthcare Service Robot for Older People Useful to Young People?", 2017 First IEEE International Conference on Robotic Computing (IRC), 2017.
[7] M. Vincze, W. Zagler and L. Lammer, "Towards a Robot for Supporting Older People to Stay Longer Independent at Home", ISR/Robotik 2014; 41st International Symposium on Robotics, 2014.
[8] V. Vishal, S. Gangopadhyay and D. Vivek, "CareBot: The automated caretaker system", 2017 International Conference On Smart Technologies For Smart Nation (SmartTechCon), 2017.
[9] 高于涵, "友善的家庭陪伴型機器人" (A Friendly Home Companion Robot), Master's thesis, National Central University, 2015.
[10] Y. Yu, Y. Ting and N. Mayer, "A new paradigm of ubiquitous home care robot using Nao and Choregraphe", 2016 International Conference on Advanced Robotics and Intelligent Systems (ARIS), 2016.
[11] A. Pandey and R. Gelin, "A Mass-Produced Sociable Humanoid Robot: Pepper: The First Machine of Its Kind", IEEE Robotics & Automation Magazine ( Volume: PP, Issue: 99 ), 2018.
[12] Y. Tang, Deep learning using linear support vector machines, 2013.
[13] C. Huang, "Combining convolutional neural networks for emotion recognition", 2017 IEEE MIT Undergraduate Research Technology Conference (URTC), 2017.
[14] I. J. Goodfellow, D. Erhan, P. L. Carrier, A. Courville, M. Mirza, B. Hamner, W. Cukierski, Y. Tang, D. Thaler, D.-H. Lee et al., "Challenges in representation learning: A report on three machine learning contests" in Neural information processing, Springer, pp. 117-124, 2013.
[15] K. He, X. Zhang and S. Ren, "Deep Residual Learning for Image Recognition", 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[16] R. Prasad, V. Rozgic, S. Vitaladevuni, "Robust eeg emotion classification using segment level decision fusion", IEEE International Conference on Acoustics Speech and Signal Processing, pp. 1286-1290, 2013
[17] S. Salari, A. Ansarian and H. Atrianfar, "Robust emotion classification using neural network models", 2018 6th Iranian Joint Congress on Fuzzy and Intelligent Systems (CFIS), 2018.
[18] G. Hinton, T. Tieleman, "Lecture 6.5-rmsprop: divide the gradient by a running average of its recent magnitude", COURSERA: Neural Networks for Machine Learning, vol. 4, 2012.
[19] V. Tümen, Ö. Faruk Söylemez and B. Ergen, "Facial emotion recognition on a dataset using convolutional neural network", 2017 International Artificial Intelligence and Data Processing Symposium (IDAP), 2017.
[20] M. Liu, S. Li, S. Shan, R. Wang, X. Chen, "Deeply learning deformable facial action parts model for dynamic expression analysis", Computer Vision-ACCV 2014, pp. 143-157, 2014.
[21] P. Lucey, J. Cohn and T. Kanade, "The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression", 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, 2010.
[22] M.F. Valstar, M. Pantic, "Induced Disgust Happiness and Surprise: An Addition to the MMI Facial Expression Database", Proc. Int'l Conf. Language Resources and Evaluation Workshop EMOTION, pp. 65-70, 2010-May.
[23] M. Valstar, M. Mehu and B. Jiang, "Meta-Analysis of the First Facial Expression Recognition Challenge", IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) ( Volume: 42, Issue: 4, Aug. 2012 ), 2012.
[24] B. Fasel, "Head-pose invariant facial expression recognition using convolutional neural networks", Multimodal Interfaces 2002. Proceedings. Fourth IEEE International Conference on, pp. 529-534, 2002.
[25] Z. Yu, C. Zhang, "Image based static facial expression recognition with multiple deep network learning", Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pp. 435-442, 2015, November.
[26] S. E. Kahou, C. Pal, X. Bouthillier, P. Froumenty, Glehre, R. Memisevic, M. Mirza, "Combining modality specific deep neural networks for emotion recognition in video", Proceedings of the 15th ACM on International conference on multimodal interaction, pp. 543-550, 2013, December.
[27] A. Krizhevsky, I. Sutskever, G.E. Hinton, “Imagenet classification with deep convolutional neural networks,” In Advances in Neural Information Processing Systems 25, pages 1106–1114, 2012.
[28] K. Simonyan, A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” CoRR, abs/1409.1556, 2014.
[29] C. Szegedy, W. Liu and Y. Jia, "Going deeper with convolutions", 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[30] http://ww1.microchip.com/downloads/en/devicedoc/70135C.pdf, November, 2017
[31] https://hobbytronics.com.pk/product/lm2596-adjustable-dc-dc-step-down-power-supply-module/, November, 2017
[32] https://www.taiwaniot.com.tw/product/ga12-n20-微型金屬減速馬達/, 2018
[33] https://www.pololu.com/product/1595, 2018
[34] http://akizukidenshi.com/catalog/g/gI-02001, November, 2017
[35] https://uge-one.com/pc817-optocoupler-optoisolator-dip-ic.html, November, 2017
[36] https://www.mi.com/tw/littleaudio, 2018
[37] https://www.logitech.com/zh-tw/product/hd-webcam-c525, 2018
[38] https://www.icshop.com.tw/product_info.php/products_id/23844, 2018
[39] C. Yuvaraj, M. Srikanth and V. Kumar, "An approach to maintain attendance using image processing techniques", 2017 Tenth International Conference on Contemporary Computing (IC3), 2017.
[40] https://www.pws.stu.edu.tw/shchen/Handout/Ch3%20Object%20Recognition.pdf, 2016
[41] P. Viola, M. Jones, "Rapid object detection using a boosted cascade of simple features", Computer Vision and Pattern Recognition 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on, vol. 1, pp. I-I, 2001.
[42] https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data, 2013
[43] Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, "Gradient-based learning applied to document recognition", Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
[44] Michael A Nielsen, Neural networks and deep learning, 2015
[45] https://dotblogs.com.tw/greengem/2017/12/17/094150, December, 2017
[46] Jiuxiang Gu, Zhenhua Wang, Jason Kuen, Lianyang Ma, Amir Shahroudy, Bing Shuai, Ting Liu, Xingxing Wang, Gang Wang, "Recent advances in convolutional neural networks", arXiv preprint, 2015.
[47] http://www.dayexie.com/detail1598911.html, April, 2018
[48] M. Wang, B. Liu and H. Foroosh, "Look-Up Table Unit Activation Function for Deep Convolutional Neural Networks", 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), 2018.
[49] 高健恩, "以FPGA實現卷積神經網路應用於影像除霧系統" (FPGA Implementation of a Convolutional Neural Network for an Image Dehazing System), Master's thesis, National Cheng Kung University, 2017.
[50] 鄭侑廷, "使用全卷積神經網路應用於肝臟及其病變圖像分割" (Fully Convolutional Neural Networks for Liver and Lesion Image Segmentation), Master's thesis, National University of Kaohsiung, 2017.
[51] O. Arriaga, P. Ploger and M. Valdenegro, "Real-time Convolutional Neural Networks for Emotion and Gender Classification", Arxiv.org, 2017. [Online]. Available: https://arxiv.org/pdf/1710.07557.pdf. [Accessed: 06- Jul- 2018].
Full-Text Availability
  • On-campus browsing/printing of the electronic full text authorized, available from 2021-01-27.
  • Off-campus browsing/printing of the electronic full text authorized, available from 2021-01-27.