System ID U0026-2807201923330200
Title (Chinese) 應用Faster R-CNN於行車紀錄器實現車禍辨識系統
Title (English) Implement Car Accident Detection System on Dashboard Camera by Faster R-CNN
Institution National Cheng Kung University
Department Institute of Information Management
Academic Year 107 (2018-19)
Semester 2
Publication Year 108 (2019)
Author (Chinese) 許晏綸
Author (English) Yen-Lun Hsu
Student ID R76064051
Degree Master's
Language Chinese
Pages 57
Committee Advisor: 劉任修
Keywords Faster R-CNN, deep learning, object detection, car accident
Abstract (Chinese) The invention of motor vehicles has shortened distances around the world, but it has also brought considerable danger: car accidents impose enormous social costs and threaten people's lives and property, so research on car accidents has grown in recent years. Taiwan is small and densely populated, and its car-accident casualty rate is among the highest in the world, making it imperative to address accident-related problems. Most related work analyzes data from in-vehicle sensors to decide whether an accident has occurred, but such methods cannot determine responsibility for the accident. We therefore apply the deep-learning object detection method Faster Region-based Convolutional Neural Network (Faster R-CNN) to recognize car accidents in real time, and we implement it on the Android system.
We first train the Faster R-CNN model on a computer and convert the trained model to the tflite format so that Android can use it. We then build an application in Android Studio and load the converted model into it, so that the application can detect car accidents while recording video.
Abstract (English) The invention of the vehicle shortens the distance between people, but it brings considerable danger. Car accidents cause huge social costs and harm people's lives and property, so research on car accidents has grown in recent years. Taiwan is a densely populated country where most people use motorcycles as their means of transportation; its car-accident casualty rate is therefore one of the highest in the world, and it is important to deal with accident-related problems in Taiwan. Most of the relevant literature analyzes in-vehicle sensor data to determine whether a car accident has occurred, but such data cannot preserve the circumstances of the accident, so responsibility for it cannot be judged. We use the deep-learning object detection method Faster Region-based Convolutional Neural Network (Faster R-CNN) to identify car accidents in real time and implement it on the Android system, building the application with Android Studio so that it can judge car accidents while recording video.
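Detection systems like the one described above are typically scored by the intersection over union (IoU) between predicted and ground-truth bounding boxes, the kind of evaluation metric the thesis covers in Chapter 4. The following is a minimal illustrative sketch of IoU in plain Python, not code from the thesis:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle (may be empty).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A proposal overlapping half of a 10x10 ground-truth box:
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # → 0.3333...
```

A detection is usually counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5; precision and recall (and hence mAP) are then computed over those matches.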
Table of Contents
Abstract
Acknowledgements
Table of Contents
List of Tables
List of Figures
1 Introduction
1.1 Background and Motivation
1.2 Research Objectives
1.3 Contributions
1.4 Thesis Organization
2 Literature Review
2.1 Car Accident Detection Methods
2.2 Object Detection Methods
2.2.1 One-Stage Networks
2.2.2 Two-Stage Networks
2.3 Related Datasets
2.4 Summary
3 Methodology
3.1 Feature Extraction Network
3.1.1 Convolutional Layers
3.1.2 Pooling Layers
3.2 Region Proposal Network
3.2.1 Anchors
3.2.2 RPN Classification Layer
3.2.3 RPN Regression Layer
3.2.4 Training the Region Proposal Network
3.3 RoI Pooling and Object Detection
3.3.1 RoI Pooling
3.3.2 Classification Layer
3.3.3 Regression Layer
3.3.4 Training the Convolutional Neural Network
3.3.5 Training Faster R-CNN
3.4 Car Accident Recognition System
3.4.1 Use Case Diagram
3.4.2 State Diagram
3.4.3 Class Diagram
3.4.4 Activity Diagram
3.4.5 Sequence Diagram
3.4.6 Application
4 Experiments and Analysis
4.1 Dataset
4.2 Experimental Results and Analysis
4.2.1 Evaluation Metrics
4.2.2 Experimental Environment and Parameter Settings
4.2.3 Results and Analysis
5 Conclusions and Future Work
References
  • On-campus browsing/printing of the electronic full text is authorized, publicly available from 2019-08-06.
  • Off-campus browsing/printing of the electronic full text is authorized, publicly available from 2019-08-06.
