System ID: U0026-1902202000034200
Title (Chinese): 眼動資料提取方法之差異及機器學習模型差異分析-應用於泛自閉症兒童偵測
Title (English): Differences in eye movement data extraction methods and machine learning models-Applied to the detection of children with autism
University: National Cheng Kung University (成功大學)
Department (Chinese): 心理學系
Department (English): Department of Psychology
Academic year: 108
Semester: 1
Year of publication: 109 (2020)
Author (Chinese): 黃得恩
Author (English): Te-En Huang
E-mail: qaz910048@gmail.com
Student ID: U76074059
Degree: Master's
Language: Chinese
Pages: 74
Committee: Advisor - 蕭富仁
Advisor - 胡中凡
Committee member - 張道行
Committee member - 楊立行
Keywords (Chinese): 自閉症, 人工智慧, 機器學習, 眼球追蹤, 深度神經網絡, 支援向量機, K-means
Keywords (English): Autism, Machine Learning, Artificial Intelligence, Eye-tracking, Deep Learning, Support Vector Machine, K-means
Subject classification:
Abstract (Chinese)
Previous research has found that children with autism show eye-tracking patterns that differ from those of typically developing children (Sasson & Elison, 2012). By examining these patterns, we can judge whether a child is likely to show autism-related symptoms, and earlier detection of such symptoms can substantially improve outcomes for children with autism. This study therefore used an eye tracker to capture children's eye-movement response patterns and to develop an automatic classification system that detects autistic tendencies.
When viewing unfamiliar faces, individuals with autism tend to avoid the major facial features, such as the eyes, mouth, and nose. To distinguish typically developing children from children with autism more effectively, most previous autism studies used unfamiliar native faces as stimuli. However, foreign faces are even less familiar to children than native unfamiliar faces and may therefore be more effective stimuli for discriminating between the two groups. This study examined differences in scan patterns between children with autism and typically developing children when viewing unfamiliar native and foreign faces, with the aim of improving the stimuli currently used for autism identification.
Several machine learning prediction models have been proposed for autism detection, but there is still no benchmark that allows the models to be compared in a relatively objective way. In this study we compared face stimuli as well as different machine learning techniques and model structures, aiming to develop a system that automatically distinguishes children with autism from typically developing children based on their eye movements.
This study used three machine learning algorithms: K-means clustering, support vector machines (SVM), and deep neural networks (DNN). The first was used to partition the regions scanned by the eyes, and the latter two were used to build the prediction models. The model inputs were the total fixation durations in areas of interest (AOIs), obtained either automatically with K-means or from manually drawn AOIs, and SVM and DNN models were then built for each of the two kinds of indices. We found that the K-means AOIs distinguished children with autism from typically developing children better than the manually drawn AOIs, and that the DNN classified more accurately than the SVM. Comparing the stimuli, models built on manually drawn AOIs performed better with native faces than with foreign faces, whereas with the automatic AOIs the two face types performed comparably. A post-hoc analysis further showed that combining the native-face and foreign-face inputs into a single model identified both children with autism and typically developing children better than using either stimulus type alone, indicating that combining stimuli genuinely benefits autism identification. The study therefore provides a reference for choosing the best machine learning model for detecting and diagnosing mental disorders.
For future research, we hope this work can serve as a basis for prediction models in clinical diagnosis, and that machine learning techniques can be used to build systems for assessing the severity of patients' symptoms, so as to achieve early detection and early treatment.
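
The post-hoc analysis described above combines the inputs obtained with native-face and foreign-face stimuli before model building. A minimal sketch of that kind of feature concatenation is shown below; the array shapes, values, and variable names are illustrative assumptions, not the thesis' actual data.

    # Sketch only: concatenate per-child AOI fixation-duration features obtained with
    # native-face and foreign-face stimuli into a single feature vector per child.
    import numpy as np

    rng = np.random.default_rng(0)
    n_children = 40
    native_features  = rng.uniform(0, 3000, size=(n_children, 5))   # e.g. 5 AOI durations (ms)
    foreign_features = rng.uniform(0, 3000, size=(n_children, 5))

    combined = np.hstack([native_features, foreign_features])        # one 10-dim vector per child
    print(combined.shape)                                            # (40, 10)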
Abstract (English)
The detection of children with autism spectrum disorders (ASD) has always been a difficult problem. Children with ASD find it hard to pay attention to others and to interact with people in society (Myles & Simpson, 2001). If these children are not attended to, they will have difficulty forming social connections throughout their lives. To reduce the behavioral differences in children with autism, we first need ways to detect ASD more easily.

To distinguish between children with ASD and typically developing children, we used an eye tracker to record the eye movements of the child participants. An eye tracker is a device that follows the center of a person's visual field, so researchers can tell where a participant is looking; it also provides analysis measures such as the average fixation duration. Many previous ASD studies have used eye trackers because they make it relatively easy to record and analyze data from children with ASD, and eye tracking is therefore widely used in ASD experiments. It is a good way to measure a person's attention.
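
As an illustration of the kind of measure the eye tracker provides, the sketch below totals fixation duration inside hand-drawn rectangular AOIs (eyes, nose, mouth). The record format (x, y, duration) and the AOI coordinates are assumptions made for the example, not the thesis' actual settings.

    # Sketch only: total fixation duration per hand-drawn (human) AOI.
    # Fixations are assumed to be (x, y, duration_ms) tuples exported from the eye tracker;
    # the AOI rectangles below are illustrative, not the coordinates used in the thesis.
    HUMAN_AOIS = {
        "eyes":  (120, 80, 280, 140),   # (x_min, y_min, x_max, y_max) in pixels
        "nose":  (170, 140, 230, 200),
        "mouth": (150, 200, 250, 250),
    }

    def total_fixation_per_aoi(fixations, aois=HUMAN_AOIS):
        totals = {name: 0.0 for name in aois}
        for x, y, duration in fixations:
            for name, (x0, y0, x1, y1) in aois.items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    totals[name] += duration
                    break  # count each fixation toward at most one AOI
        return totals

    # One participant's fixations on a single face stimulus (toy data)
    print(total_fixation_per_aoi([(200, 100, 310.0), (205, 220, 150.0), (50, 50, 90.0)]))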

Eye-tracking data can help detect ASD because the visual attention patterns of children with ASD differ from those of typically developing children. Most children with ASD exhibit atypical eye-tracking patterns when looking at people and objects (Sasson & Elison, 2012), especially when looking at strangers: when viewing a less familiar face, they tend to avoid its important parts, such as the eyes, nose, and mouth. Previous studies that used unfamiliar faces as stimuli reported good results, so we also used unfamiliar faces as stimuli in our experiment.

We aim to develop a tool that detects children with ASD more easily and quickly, so that we can help them earlier, for example by assigning exercises that improve their behavior. To further improve detection, we used pictures of both native and foreign faces. Previous research indicates that when recognizing native and foreign faces, the eye movements of children with ASD differ from those of typically developing children (Wilson, Palermo, Burton & Brock, 2011). This implies that using foreign faces as stimuli might enlarge the eye-movement difference between children with ASD and typically developing children, and thereby reduce ambiguity and improve detection accuracy. In this study we therefore compared two conditions, using either native or foreign faces as stimuli.

For autism detection, previous research has proposed several prediction models with different characteristics (Liu, Li & Yi, 2016; Tyagi, Mishra & Bajpai, 2018), most of them based on SVM or DNN models. However, there is no benchmark that allows a fair comparison among these models. Although Tyagi, Mishra & Bajpai (2018) compared different prediction models for adult autism, they did not separate the eye-tracker variables from the questionnaire variables, so it is hard to tell which variables are most important; moreover, they recruited only adult participants, so their results cannot be applied to early detection and early treatment. In this research we therefore also compared the methods and structures of different machine learning models.
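
To make the point about separating variable groups concrete, the sketch below evaluates the same classifier on eye-tracking features alone, questionnaire features alone, and both combined. The thesis itself uses only eye-tracking features, and all data here are synthetic placeholders.

    # Sketch only: compare the contribution of different feature groups with cross-validation.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    eye_feats   = rng.normal(size=(60, 8))      # stand-in eye-tracking features
    quest_feats = rng.normal(size=(60, 4))      # stand-in questionnaire scores
    y           = rng.integers(0, 2, size=60)   # toy group labels

    for name, X in [("eye-tracking only", eye_feats),
                    ("questionnaire only", quest_feats),
                    ("combined", np.hstack([eye_feats, quest_feats]))]:
        acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
        print(f"{name:>18s}: mean CV accuracy = {acc:.2f}")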

Before analyzing the eye movements recorded by the eye tracker, researchers must select areas of interest (AOIs) on the stimuli. Past eye-tracking research used manually drawn AOIs (human AOIs) to mark the important regions of the visual stimuli. Liu, Li & Yi (2016) instead used the k-means algorithm to select the important regions automatically (auto AOIs) and obtained more than 80% accuracy. However, they did not compare human AOIs with auto AOIs, so it is unclear which AOI selection method is better for detecting children with ASD.
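
A minimal sketch of the auto-AOI idea follows, using the KMeans implementation from scikit-learn (which appears in the reference list). The number of clusters, the pooled-fixation data layout, and the feature definition are assumptions for illustration, not the settings used by Liu, Li & Yi (2016) or in this thesis.

    # Sketch only: cluster pooled fixation coordinates with k-means so that the clusters,
    # rather than hand-drawn rectangles, serve as the AOIs (auto AOIs).
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    fixation_xy = rng.uniform(0, 400, size=(500, 2))   # stand-in pooled (x, y) fixation positions
    durations   = rng.uniform(50, 600, size=500)       # stand-in fixation durations (ms)

    kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(fixation_xy)
    labels = kmeans.labels_

    # Feature per stimulus/participant: total fixation duration falling into each auto AOI
    auto_aoi_durations = np.array([durations[labels == k].sum() for k in range(5)])
    print(auto_aoi_durations)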

We hope our research can be used to develop a system that automatically distinguishes children with ASD from typically developing children based on their eye-movement patterns. To achieve this goal, we divided the research into three parts. First, we compared the ASD-detection results of different machine learning models, namely the support vector machine (SVM) and the deep neural network (DNN). Second, we examined whether face stimuli from different countries help the models detect autism more accurately. Third, we compared human AOIs with auto AOIs generated by the k-means algorithm.
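
The sketch below shows what the SVM/DNN comparison could look like with the libraries cited in the reference list (scikit-learn for the SVM, Keras for the DNN). The synthetic data, network size, and hyperparameters are illustrative assumptions rather than the models actually trained in the thesis.

    # Sketch only: train an SVM and a small DNN on the same AOI fixation-duration features.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from tensorflow import keras

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 3000, size=(60, 10))   # per-child AOI fixation-duration features (synthetic)
    y = rng.integers(0, 2, size=60)           # toy labels: 1 = ASD, 0 = typically developing
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
    print("SVM test accuracy:", svm.score(X_te, y_te))

    dnn = keras.Sequential([
        keras.Input(shape=(X.shape[1],)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(16, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    dnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    dnn.fit(X_tr, y_tr, epochs=30, batch_size=8, verbose=0)
    print("DNN test accuracy:", dnn.evaluate(X_te, y_te, verbose=0)[1])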
The results provide not only a system that detects ASD more easily and accurately, but also a step forward for the diagnosis of mental disorders. For future research, we hope this work can serve as a basis for the diagnosis of clinical conditions, and that it can connect machine learning technology with clinical practice so as to achieve early detection and early treatment.
Table of Contents
Chapter 1  Introduction  1
Section 1  Introduction to Autism  1
Behavioral Differences between Individuals with Autism and Typically Developing Individuals  2
Section 2  Research on the Use of Eye Trackers in ASD Detection  3
1. Overview of Eye Trackers  3
2. Early Methods for Analyzing the Relationship between Eye-Tracker Data and ASD  4
Section 3  The Relationship among Machine Learning, Eye Trackers, and Autism  5
1. Overview of Machine Learning  5
2. Machine Learning Models for Autism Detection  9
Section 4  Research Questions and Hypotheses  10
Chapter 2  Study 1: Predicting Autism with Human AOIs and a DNN  12
Section 1  Aims, Rationale, and Hypotheses  12
Section 2  Method  12
1. Participants  12
2. Materials  13
3. Apparatus  13
4. Design  14
5. Eye-Movement Measures  14
6. Procedure  15
Section 3  Results  16
1. Comparison of Different Stimuli  16
2. Comparison of Combined Stimuli  19
Section 4  Discussion  23
Chapter 3  Study 2: Predicting Autism with Human AOIs and an SVM  25
Section 1  Aims, Rationale, and Hypotheses  25
Section 2  Method  25
1. Participants, Materials, Apparatus, Design, and Eye-Movement Measures as in Study 1  25
2. Procedure  25
Section 3  Results  26
1. Comparison of Different Stimuli  26
2. Comparison of Combined Stimuli  28
Section 4  Discussion  31
Chapter 4  Study 3: Predicting Autism with K-means Auto AOIs and a DNN  32
Section 1  Aims, Rationale, and Hypotheses  32
Section 2  Method  32
1. Participants, Materials, Apparatus, and Design as in Study 1  32
2. Eye-Movement Measures  32
3. Procedure  33
Section 3  Results  34
1. Comparison of Different Stimuli  34
2. Comparison of Combined Stimuli  37
Section 4  Discussion  41
Chapter 5  Study 4: Predicting Autism with K-means Auto AOIs and an SVM  42
Section 1  Aims, Rationale, and Hypotheses  42
Section 2  Method  42
1. Participants, Materials, Apparatus, and Design as in Study 1; Eye-Movement Measures as in Study 3  42
2. Procedure  42
Section 3  Results  43
1. Comparison of Different Stimuli  43
2. Comparison of Combined Stimuli  45
Section 4  Discussion  47
Chapter 6  General Discussion  49
Section 1  Main Findings  49
Section 2  Conclusions  56
Section 3  Suggestions for Future Research  58
References  60
Chinese References  60
English References  60
Appendix  66
References
Chinese references:
王南凱、吳岱穎、鄒國蘇、黃宜靜、郭冠良、吳逸帆、陳建志 (2013). 淺談自閉症類群障礙. 北市醫學雜誌, 10(3), 173–181. https://doi.org/10.6200/TCMJ.2013.10.3.01
唐大崙、張文瑜 (2007). 利用眼動追蹤法探索傳播研究. 中華傳播學刊, (12), 165–211. https://doi.org/10.6195/cjcr.2007.12.05
鳳華 (2001). 中部地區自閉症者心智理論之發展現況及教學成效結案報告.
鳳華、周婉琪、孫文菊、蔡馨惠 (2014). 自閉症兒童社會情緒教育實務工作手冊. 台北: 心理出版社.
余勝皓、陳學志、林慧麗 (2018). 以眼動儀探討罹患自閉症類群障礙症之兒童對自然情境圖片中社會訊息之凝視型態: ASD自然情境圖片眼動研究. 特殊教育研究學刊, 43(2), 65–92. https://doi.org/10.6172/BSE.201807_43(2).0003


English references:
Ashwin, C., Baron-Cohen, S., Wheelwright, S., O'Riordan, M., & Bullmore, E. T. (2007). Differential activation of the amygdala and the 'social brain' during fearful face-processing in Asperger Syndrome. Neuropsychologia, 45(1), 2–14. https://doi.org/10.1016/j.neuropsychologia.2006.04.014
Ashwin, C., Chapman, E., Colle, L., & Baron-Cohen, S. (2006). Impaired recognition of negative basic emotions in autism: a test of the amygdala theory. Social Neuroscience, 1(3–4), 349–363. https://doi.org/10.1080/17470910601040772
Baron-Cohen, S., O'Riordan, M., Stone, V., Jones, R., & Plaisted, K. (1999). A new test of social sensitivity: Detection of faux pas in normal children and children with Asperger syndrome. Journal of Autism and Developmental Disorders, 29, 407–418.
Batty, M., & Taylor, M. J. (2002). Visual categorization during childhood: An ERP study. Psychophysiology, 39(4). https://doi.org/10.1017/S0048577202010764
Batty, M., & Taylor, M. J. (2006). The development of emotional face processing during childhood. Developmental Science, 9(2), 207–220. https://doi.org/10.1111/j.1467-7687.2006.00480.x
Boyle, C. L., & Lutzker, J. R. (2005). Teaching young children to discriminate abusive from nonabusive situations using multiple exemplars in a modified discrete trial teaching format. Journal of Family Violence, 20(2), 55–69. https://doi.org/10.1007/s10896-005-3169-4
Casanova, M. F. (2007). The neuropathology of autism. Brain Pathology, 17(4), 422–433. https://doi.org/10.1111/j.1750-3639.2007.00100.x
Celani, G., Battacchi, M. W., & Arcidiacono, L. (1999). The Understanding of the Emotional Meaning of Facial Expressions in People with Autism. Journal of Autism and Developmental Disorders, 29(1), 57–66. https://doi.org/10.1023/A:1025970600181
Chang, C. C., & Lin, C. J. (2011). LIBSVM: A Library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(3). https://doi.org/10.1145/1961189.1961199
Chua, H. F., Boland, J. E., & Nisbett, R. E. (2005). Cultural variation in eye movements during scene perception. Proceedings of the National Academy of Sciences of the United States of America, 102(35), 12629–12633. https://doi.org/10.1073/pnas.0506162102
Dawson, G., Meltzoff, A. N., Osterling, J., Rinaldi, J., & Brown, E. (1998). Children with Autism Fail to Orient to Naturally Occurring Social Stimuli. Journal of Autism and Developmental Disorders, 28(6), 479–485. https://doi.org/10.1023/A:1026043926488
DiLollo, V., Kawahara, J. I., Zuvic, S. M., & Visser, T. A. W. (2001). The preattentive emperor has no clothes: A dynamic redressing. Journal of Experimental Psychology: General, 130(3), 479–492. https://doi.org/10.1037/0096-3445.130.3.479
Downs, A., & Downs, R. C. (2013). Training new instructors to implement discrete trial teaching strategies with children with autism in a community-based intervention program. Focus on Autism and Other Developmental Disabilities, 28(4), 212–221. https://doi.org/10.1177/1088357612465120
Feng, H., Lo, Y., Tsai, S., & Cartledge, G. (2008). The Effects of Theory-of-Mind and Social Skill Training on the Social Competence of a Sixth-Grade Student With Autism. Journal of Positive Behavior Interventions, 10(4), 228–242. https://doi.org/10.1177/1098300708319906
Fletcher-Watson, S., Leekam, S. R., Benson, V., Frank, M. C., & Findlay, J. M. (2009). Eye-movements reveal attention to social information in autism spectrum disorder. Neuropsychologia, 47(1), 248–257. https://doi.org/10.1016/J.NEUROPSYCHOLOGIA.2008.07.016
Gillberg, C. (1998). Asperger syndrome and high-functioning autism. British Journal of Psychiatry, 172(3), 200–209. https://doi.org/10.1192/bjp.172.3.200
Golan, O., Baron-Cohen, S., & Golan, Y. (2008). The ‘Reading the Mind in Films’ Task [Child Version]: Complex Emotion and Mental State Recognition in Children with and without Autism Spectrum Conditions. Journal of Autism and Developmental Disorders, 38(8), 1534–1541. https://doi.org/10.1007/s10803-007-0533-7
Gosselin, F., & Schyns, P. G. (2001). Bubbles: A technique to reveal the use of information in recognition tasks. Vision Research, 41(17), 2261–2271. https://doi.org/10.1016/S0042-6989(01)00097-9
Hall, S. S., Hustyi, K. M., Hammond, J. L., Hirt, M., & Reiss, A. L. (2014). Using Discrete Trial Training to Identify Specific Learning Impairments in Boys with Fragile X Syndrome. Journal of Autism and Developmental Disorders, 44(7), 1659–1670. https://doi.org/10.1007/s10803-014-2037-6
Harms, M. B., Martin, A., & Wallace, G. L. (2010). Facial Emotion Recognition in Autism Spectrum Disorders: A Review of Behavioral and Neuroimaging Studies. Neuropsychology Review, 20(3), 290–322. https://doi.org/10.1007/s11065-010-9138-6
Hsu, H.-Y., & Chien, S. H.-L. (2011). Exploring the other-race effect in Taiwanese infants and adults. Chinese Journal of Psychology, 53(1), 35–57.
Huang, J., Shao, X., & Wechsler, H. (2002). Face pose discrimination using support vector machines (SVM). 154–156. https://doi.org/10.1109/icpr.1998.711102
Jiang, M., & Zhao, Q. (n.d.). Learning Visual Attention to Identify People with Autism Spectrum Disorder. 3267–3276.
Ketkar, N. (2017). Introduction to Keras. In Deep Learning with Python. https://doi.org/10.1007/978-1-4842-2766-4_7
Klin, A., Jones, W., Schultz, R., Volkmar, F., & Cohen, D. (2002). Visual fixation patterns during viewing of naturalistic social situations as predictors of social competence in individuals with autism. Archives of General Psychiatry, 59(9), 809–816. https://doi.org/10.1001/archpsyc.59.9.809
Kocsis, R. N. (2013). Book Review: Diagnostic and Statistical Manual of Mental Disorders: Fifth Edition (DSM-5). International Journal of Offender Therapy and Comparative Criminology, 57(12), 1546–1548. https://doi.org/10.1177/0306624X13511040
Kodinariya, T. M., & Makwana, P. R. (2013). Review on determining number of Cluster in K-Means Clustering. International Journal of Advance Research in Computer Science and Management Studies, 1(6), 2321–7782.
Li, M.-A., & Tsai, C.-H. (2016). Text categorization for Chinese news: A comparative study.
Lindner, J. L., & Rosén, L. A. (2006). Decoding of Emotion through Facial Expression, Prosody and Verbal Content in Children and Adolescents with Asperger’s Syndrome. Journal of Autism and Developmental Disorders, 36(6), 769–777. https://doi.org/10.1007/s10803-006-0105-2
Liu, W., Li, M., & Yi, L. (2016). Identifying children with autism spectrum disorder based on their face processing abnormality: A machine learning framework. Autism Research, 9(8), 888–898. https://doi.org/10.1002/aur.1615
Liu, W., Yi, L., Yu, Z., Zou, X., Raj, B., & Li, M. (2015). Efficient autism spectrum disorder prediction with eye movement: A machine learning framework. 2015 International Conference on Affective Computing and Intelligent Interaction, ACII 2015, 649–655. https://doi.org/10.1109/ACII.2015.7344638
Loughland, C. M., Williams, L. M., & Gordon, E. (2002). Schizophrenia and affective disorder show different visual scanning behavior for faces: a trait versus state-based distinction? Biological Psychiatry, 52(4), 338–348. https://doi.org/10.1016/S0006-3223(02)01356-2
Mundy, P., & Rebecca Neal, A. (2000). Neural plasticity, joint attention, and a transactional social-orienting model of autism. International Review of Research in Mental Retardation, 23, 139–168. https://doi.org/10.1016/S0074-7750(00)80009-9
Myles, B. S., & Simpson, R. (2001). Effective practices for students with Asperger syndrome. Focus on Exceptional Children.
Njiokiktjien, C., Verschoor, A., de Sonneville, L., Huyser, C., Op het Veld, V., & Toorenaar, N. (2001). Disordered recognition of facial identity and emotions in three Asperger type autists. European Child & Adolescent Psychiatry, 10(1), 79–90. https://doi.org/10.1007/s007870170050
Noton, D., & Stark, L. (1971). Scanpaths in eye movements during pattern perception. Science, 171(3968), 308–311. https://doi.org/10.1126/science.171.3968.308
Pelphrey, K. A., Sasson, N. J., Reznick, J. S., Paul, G., Goldman, B. D., & Piven, J. (2002). Visual Scanning of Faces in Autism. Journal of Autism and Developmental Disorders, 32(4), 249–261. https://doi.org/10.1023/A:1016374617369
Rutherford, M. D., & McIntosh, D. N. (2007). Rules versus Prototype Matching: Strategies of Perception of Emotional Facial Expressions in the Autism Spectrum. Journal of Autism and Developmental Disorders, 37(2), 187–196. https://doi.org/10.1007/s10803-006-0151-9
Sasson, N. J., & Elison, J. T. (2012). Eye tracking young children with autism. Journal of Visualized Experiments : JoVE, (61), 1–5. https://doi.org/10.3791/3675
Simpson, R. L. (2005). Evidence-Based Practices and Students With Autism Spectrum Disorders. Focus on Autism and Other Developmental Disabilities, 20(3), 140–149. https://doi.org/10.1177/10883576050200030201
Smith, T. (2001). Discrete Trial Training in the Treatment of Autism. Focus on Autism and Other Developmental Disabilities, 16(2), 86–92. https://doi.org/10.1177/108835760101600204
Spezio, M. L., Adolphs, R., Hurley, R. S. E., & Piven, J. (2007). Abnormal Use of Facial Information in High-Functioning Autism. Journal of Autism and Developmental Disorders, 37(5), 929–939. https://doi.org/10.1007/s10803-006-0232-9
Suykens, J. A. K., & Vandewalle, J. (1999). Least Squares Support Vector Machine Classifiers. Neural Processing Letters, 9(3), 293–300. https://doi.org/10.1023/A:1018628609742
Thomas, L. A., DeBellis, M. D., Graham, R., & LaBar, K. S. (2007). Development of emotional facial recognition in late childhood and adolescence: REPORT. Developmental Science, 10(5), 547–558. https://doi.org/10.1111/j.1467-7687.2007.00614.x
Tyagi, B., Mishra, R., & Bajpai, N. (2018). Machine Learning Techniques to Predict Autism Spectrum Disorder. 1st International Conference on Data Science and Analytics, PuneCon 2018 - Proceedings. https://doi.org/10.1109/PUNECON.2018.8745405
Varoquaux, G., Buitinck, L., Louppe, G., Grisel, O., Pedregosa, F., & Mueller, A. (2015). Scikit-learn. GetMobile: Mobile Computing and Communications, 19(1), 29–33. https://doi.org/10.1145/2786984.2786995
Villalba, J., Miguel, A., Ortega, A., & Lleida, E. (2015). Spoofing detection with DNN and one-class SVM for the ASVspoof 2015 challenge. Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH 2015), 2067–2071.
Wilson, C. E., Palermo, R., & Brock, J. (2012). Visual scan paths and recognition of facial identity in autism spectrum disorder and typical development. PLoS ONE, 7(5), 1–9. https://doi.org/10.1371/journal.pone.0037681
Wilson, C. E., Palermo, R., Burton, A. M., & Brock, J. (2011). Recognition of own- and other-race faces in autism spectrum disorders. Quarterly Journal of Experimental Psychology, 64(10), 1939–1954. https://doi.org/10.1080/17470218.2011.603052
Yi, L., Quinn, P. C., Fan, Y., Huang, D., Feng, C., Joseph, L. & Lee, K. (2016). Children with Autism Spectrum Disorder scan own-race faces differently from other-race faces. Journal of Experimental Child Psychology, 141, 177–186. https://doi.org/10.1016/j.jecp.2015.09.011
Zihl, J., von Cramon, D., & Mai, N. (1983). Selective disturbance of movement vision after bilateral brain damage. Brain, 106(2), 313–340. https://doi.org/10.1093/brain/106.2.313
Full-text availability
  • On-campus viewing/printing of the electronic full text authorized, open from 2025-01-01.
  • Off-campus viewing/printing of the electronic full text authorized, open from 2025-01-01.

