System ID: U0026-1508201916324300
Title (Chinese): 基於多字節共現注意力機制之反諷偵測
Title (English): Co-Attention Mechanism Based on Multi-gram for Sarcasm Detection
University: National Cheng Kung University
Department (Chinese): 資訊工程學系
Department (English): Institute of Computer Science and Information Engineering
Academic year: 107
Semester: 2
Year of publication: 108 (2019)
Author (Chinese): 吳昊
Author (English): Hao Wu
E-mail: wh345736500@gmail.com
Student ID: P76063029
Degree: Master's
Language: English
Pages: 38
Committee: Advisor - 高宏宇
Committee member - 李政德
Committee member - 莊坤達
Committee member - 黃俊龍
Keywords (Chinese): 情緒分析, 注意力網路, 卷積神經網路
Keywords (English): Emotion analysis, attention network, convolutional neural network
Subject classification:
Abstract (Chinese): Sarcasm is a pervasive phenomenon on social media: a high-level use of language that describes one's current situation or mood in an "incongruous" way. Sarcasm detection is a fairly new task, a sub-task that emerged as sentiment analysis and opinion mining matured, because sentences containing sarcasm are frequently misclassified in those two tasks; this makes sarcasm detection a necessary and challenging problem. Traditional machine-learning approaches model it inefficiently, and the development of deep learning over the past two years has brought considerable progress on the task.
In this thesis we propose MCCO, a model aimed specifically at the incongruity between emotions or situations within a sentence. We use a convolutional neural network to extract complete situational or emotional information from the sentence and a cross-filter max-pooling scheme to reduce the number of parameters, while an attention network makes the model focus on the incongruous parts of the sentence. Combining this attention network, which does not forget contextual information, with the convolutional neural network lets the model learn all of the complete yet incongruous emotions or situations in the sentence; when the incongruity is pronounced enough, the model judges the sentence to be sarcastic.
We run experiments on seven datasets drawn from Reddit, the Internet Argument Corpus, and Twitter. Our method achieves state-of-the-art performance on four of the sub-datasets. Finally, we use attention visualization to give a human-interpretable account of why our model detects sarcasm more accurately and more sensitively.
Abstract (English): Sarcasm is a pervasive phenomenon on social media: a high-level linguistic expression that describes the current situation or mood in an "incongruous" manner. Sarcasm detection is a relatively new task, a sub-task derived from sentiment analysis and opinion mining, because sarcastic sentences are often misclassified in those two tasks; this makes sarcasm detection a necessary and challenging problem. Traditional machine-learning methods are inefficient at modeling it, and with the development of deep learning over the past two years the task has seen significant breakthroughs.
In this paper we propose MCCO, a method aimed specifically at the incongruity between emotions or situations in a sentence. We use a convolutional neural network to extract complete situational or emotional information from the sentence, and max pooling across different filters to reduce the number of parameters. An attention network lets the model focus on the incongruous parts of the sentence. Combining this attention network, which does not forget contextual information, with the convolutional neural network allows the model to learn, in greater depth, all of the complete but incongruous emotions or situations in the sentence.
We experimented with seven datasets from Reddit, the Internet Argument Corpus (IAC), and Twitter. Our approach achieves state-of-the-art performance on four of the sub-datasets. At the end of the experiments, we use attention visualization to show that our model is more accurate and more sensitive in detecting sarcasm.
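The record itself contains no code; purely as an illustration, the following PyTorch sketch shows the general shape of the architecture the abstract describes (word embeddings, multi-gram convolutions, element-wise max pooling across filter widths, and an attention layer that weights incongruous positions before a binary sarcasm classifier). The single additive attention used here is a simplified stand-in for the thesis's co-attention layer, and every module name, dimension, and hyper-parameter below is an assumption rather than the author's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiGramAttentionClassifier(nn.Module):
    """Illustrative sketch only; not the thesis's MCCO implementation."""

    def __init__(self, vocab_size, emb_dim=300, n_filters=100,
                 kernel_sizes=(1, 2, 3), num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # One Conv1d per n-gram width ("multi-gram" feature extractors).
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k, padding=k // 2) for k in kernel_sizes]
        )
        # Additive attention over token positions (a simplified stand-in
        # for the co-attention layer described in the abstract).
        self.attn = nn.Linear(n_filters, 1)
        self.out = nn.Linear(n_filters, num_classes)

    def forward(self, token_ids):                       # (batch, seq_len)
        seq_len = token_ids.size(1)
        x = self.embed(token_ids).transpose(1, 2)       # (batch, emb_dim, seq_len)
        # Convolve with each n-gram width, then take the element-wise max
        # across the different filter widths ("max over different filters").
        grams = [F.relu(conv(x))[..., :seq_len] for conv in self.convs]
        feats = torch.stack(grams, dim=0).max(dim=0).values
        feats = feats.transpose(1, 2)                   # (batch, seq_len, n_filters)
        # Attention weights highlight the incongruous positions in the sentence.
        weights = torch.softmax(self.attn(feats).squeeze(-1), dim=-1)
        sentence = torch.bmm(weights.unsqueeze(1), feats).squeeze(1)
        return self.out(sentence)                       # logits: sarcastic vs. not

For instance, MultiGramAttentionClassifier(vocab_size=20000)(torch.randint(1, 20000, (8, 40))) returns an (8, 2) tensor of logits, and the intermediate softmax weights are what an attention-visualization step like the one mentioned in the abstract would plot.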
Table of Contents: CHINESE ABSTRACT III
ABSTRACT IV
FIGURE LISTING VII
TABLE LISTING VIII
1. INTRODUCTION 1
1.1 Background 1
1.2 Motivation 4
1.3 Approach 6
1.4 Paper structure 6
2. RELATED WORK 8
2.1 Co-attention neural network 8
2.2 Convolution neural network 12
3. METHOD 15
3.1 Pre-processing 15
3.2 Input layer 16
3.3 Convolution layer 17
3.4 Co-attention layer 18
4. EXPERIMENTS AND RESULTS 21
4.1 Dataset Description 21
4.2 Benchmark 23
4.3 Result 25
4.4 Visual Analysis 29
4.5 Time Analysis 33
5. CONCLUSIONS 35
REFERENCES 36
Full-Text Usage Rights
  • The author has agreed to authorize on-campus browsing/printing of the electronic full text, available from 2020-05-23.

