   The electronic full text has not yet been authorized for public access; for the print copy, please check the library catalog.
(Note: if no record is found, or the holdings status shows "closed stacks, not available to the public," the thesis is not in the stacks and cannot be retrieved.)
System ID U0026-0208202023514600
Title (Chinese) 基於結合CNN與LSTM神經網路之車輛碰撞風險預測
Title (English) Risk Prediction of Vehicle Collision Based on A Combined Neural Network of CNN and LSTM
University National Cheng Kung University
Department (Chinese) 交通管理科學系
Department (English) Department of Transportation & Communication Management Science
Academic Year 108
Semester 2
Publication Year 109
Author (Chinese) 謝辰陽
Author (English) Chen-Yang Hsieh
Student ID R56071088
Degree Master's
Language English
Pages 87
Committee Advisor - 胡大瀛
Committee member - 林佐鼎
Committee member - 朱致遠
Committee member - 董啟崇
Committee member - 陳麗雯
Keywords (Chinese) 車輛碰撞 (vehicle collision), 自駕車 (autonomous vehicles), 長短期記憶網路 (long short-term memory network), 卷積神經網路 (convolutional neural network), 圖像序列 (image sequence)
Keywords (English) Vehicle Collision, Autonomous Vehicles, LSTM, CNN, Image Sequence
Discipline Classification
Abstract (Chinese) According to statistics from the National Police Agency, Ministry of the Interior, more than 320,000 traffic accidents occurred in Taiwan (R.O.C.) in 2018, resulting in nearly 1,500 deaths. With the development of autonomous driving technology, vehicles can assess road safety risks and take necessary precautions at the appropriate time by analyzing the data collected by on-board sensors such as LiDAR, radar, and cameras. In recent years, more and more people have installed dashcams in their cars; these recorders not only help clarify liability after a traffic accident but also monitor changes in the surrounding environment at all times while driving, thereby improving road safety.
This study collected vehicle collision videos provided by the Tainan City Vehicle Accident Investigation Committee, including footage recorded by dashcams and roadside surveillance cameras, to simulate the sensors of autonomous vehicles and to train vehicle collision risk prediction models. A pre-trained convolutional neural network (CNN), ResNet-50, extracts the image features of each frame of the videos, while a long short-term memory (LSTM) network, which excels at processing time-series data, extracts their temporal features. Five models with different structures and input data were built on this CNN-LSTM basis, and the F1-score was used to evaluate their performance. The results show that Model 5, which uses both vehicle dynamic feature data and video data, achieves the best performance with an F1-score of 0.94, and detects collision risk exceeding the 0.5 threshold 2.5 to 3.0 seconds before the collision occurs. Among the models that use only video data, Model 3 achieves an F1-score of 0.83 and detects collision risk exceeding the 0.5 threshold 3.0 seconds before the collision.
Abstract (English) According to statistics from the National Police Agency, Ministry of the Interior, there were 320,315 traffic accidents in Taiwan in 2018, resulting in 1,493 deaths. With the development of autonomous vehicles (AV), vehicles can analyze the data captured by on-board sensors such as LiDAR, radar, and cameras to assess road safety risks and take the necessary precautions. Currently, more and more people are installing dashboard cameras (dashcams) in their cars. A dashcam can not only help clarify responsibility after a traffic accident but also monitor the surrounding conditions at any time while driving, which contributes to road safety.
This study collected vehicle collision videos provided by the Tainan City Traffic Accident Investigation Committee, including footage recorded by dashcams and closed-circuit television (CCTV) cameras, to simulate the sensors of autonomous vehicles and to train vehicle collision risk prediction models. ResNet-50, a pre-trained convolutional neural network (CNN), is used to extract the image features of each frame of the videos, and a long short-term memory (LSTM) network, which is well suited to processing time-series data, is used to capture their temporal features. Five models based on CNN and LSTM with different structures and input data are built, and the F1-score is used to evaluate their performance. The results show that Model 5, which uses both vehicle dynamic feature data and video clip data, achieves the best performance with an F1-score of 0.94, and the predicted collision risk exceeds the 0.5 threshold 2.5 to 3.0 seconds before the collision occurs. Among the models that use only video data, Model 3 achieves an F1-score of 0.83, with the predicted collision risk exceeding the 0.5 threshold 3.0 seconds before the collision.
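As a rough illustration of the evaluation described above, the following Python sketch computes the F1-score for binary collision classification and the lead time at which a per-frame risk series first exceeds the 0.5 threshold. The function names, sampling interval, and risk values are hypothetical examples, not taken from the thesis.

```python
def f1_score(y_true, y_pred):
    """F1 = 2 * precision * recall / (precision + recall) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0


def lead_time_before_collision(risks, frame_interval, collision_time, threshold=0.5):
    """Seconds before the collision at which the predicted risk first
    exceeds the threshold; risks[i] is the risk at time i * frame_interval."""
    for i, r in enumerate(risks):
        if r > threshold:
            return collision_time - i * frame_interval
    return None  # risk never exceeded the threshold
```

For example, for a 5-second clip ending at the collision and risks sampled every 0.5 s, a series that first crosses 0.5 at the 1.5 s mark would give a lead time of 3.5 s, analogous to the 2.5-3.0 s detection lead times reported for Model 5.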
Table of Contents ABSTRACT I
摘要 II
誌謝 III
CONTENTS IV
LIST OF TABLES VI
LIST OF FIGURES VIII
CHAPTER 1 INTRODUCTION 1
1.1 Research Background and Motivation 1
1.2 Research Objectives 2
1.3 Research Flow Chart 3
CHAPTER 2 LITERATURE REVIEW 6
2.1 Autonomous Vehicles 6
2.1.1 The Development of Autonomous Vehicles 7
2.1.2 Advanced Driver Assistance System (ADAS) 9
2.2 The Applications of Deep Learning in Traffic Accident Prevention 10
2.3 Deep Learning Approaches for Image Sequence Prediction 14
2.3.1 Long Short-Term Memory (LSTM) 14
2.3.2 CNN Long Short-Term Memory (CNN-LSTM) 15
2.4 Definition of the Vehicle Dynamic Features 16
2.5 Summary 18
CHAPTER 3 RESEARCH METHODOLOGY 19
3.1 Research Framework 19
3.2 Long Short-Term Memory (LSTM) 22
3.3 Convolutional Neural Network (CNN) 25
3.4 Selection of the Vehicle Dynamic Features 28
3.5 The Architecture of the Prediction Model 29
3.6 Evaluation Criteria 35
3.7 Software and Package 37
CHAPTER 4 EXPERIMENT SETUP 38
4.1 Data Collection 38
4.1.1 Vehicle Dynamic Features Data 40
4.1.2 Video Clips Data and Preprocessing 42
4.2 Model Building 46
4.2.1 The Hyperparameters of Models 47
4.2.2 Model 1 50
4.2.3 Model 2 52
4.2.4 Model 3 54
4.2.5 Model 4 56
4.2.6 Model 5 58
CHAPTER 5 EXPERIMENT RESULTS AND ANALYSIS 60
5.1 Classification Results and Analysis 60
5.2 Collision Risk Prediction 67
5.2.1 Model 3 69
5.2.2 Model 4 73
5.2.3 Model 5 77
5.3 Summary and Future Applications 80
CHAPTER 6 CONCLUSIONS AND SUGGESTION 81
6.1 Conclusions 81
6.2 Suggestions 82
REFERENCES 83
References 1. Aarts, L., & van Schagen, I. (2006). Driving speed and the risk of road crashes: A review. Accident Analysis & Prevention, 38(2), 215-224. doi:10.1016/j.aap.2005.07.004
2. Al Hajj, H., Lamard, M., Conze, P. H., Cochener, B., & Quellec, G. (2018). Monitoring tool usage in surgery videos using boosted convolutional and recurrent neural networks. Med Image Anal, 47, 203-218. doi:10.1016/j.media.2018.05.001
3. Balderas, D., Ponce, P., & Molina, A. (2019). Convolutional long short term memory deep neural networks for image sequence prediction. Expert Systems with Applications, 122, 152-162. doi:10.1016/j.eswa.2018.12.055
4. Batista, G. E. A. P. A., Prati, R. C., & Monard, M. C. (2005). Balancing Strategies and Class Overlapping. Paper presented at the Advances in Intelligent Data Analysis VI, Berlin, Heidelberg.
5. Bengio, Y., Simard, P., & Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2), 157-166. doi:10.1109/72.279181
6. Bhunia, A. K., Konwer, A., Bhunia, A. K., Bhowmick, A., Roy, P. P., & Pal, U. (2019). Script identification in natural scene image and video frames using an attention based Convolutional-LSTM network. Pattern Recognition, 85, 172-184. doi:10.1016/j.patcog.2018.07.034
7. Byeon, W., Liwicki, M., & Breuel, T. M. (2015). Scene analysis by mid-level attribute learning using 2D LSTM networks and an application to web-image tagging. Pattern Recognition Letters, 63, 23-29. doi:10.1016/j.patrec.2015.06.003
8. Campbell, M., Egerstedt, M., How, J. P., & Murray, R. M. (2010). Autonomous driving in urban environments: approaches, lessons and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 368(1928), 4649-4672. doi:10.1098/rsta.2010.0110
9. Corcoran, G., & Clark, J. (2019). Traffic Risk Assessment: A Two-Stream Approach Using Dynamic-Attention. Paper presented at the 2019 16th Conference on Computer and Robot Vision (CRV).
10. Cui, C. (2017). Convolutional Polynomial Neural Network for Improved Face Recognition.
11. Donahue, J., Hendricks, L. A., Rohrbach, M., Venugopalan, S., Guadarrama, S., Saenko, K., & Darrell, T. (2017). Long-Term Recurrent Convolutional Networks for Visual Recognition and Description. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 39(4), 677-691. doi:10.1109/TPAMI.2016.2599174
12. Donges, N. (2019). GRADIENT DESCENT IN A NUTSHELL: A SIMPLE INTRO TO ONE OF THE MOST POPULAR ALGORITHMS AROUND. Retrieved from https://builtin.com/data-science/gradient-descent
13. Elvik, R., Vadeby, A., Hels, T., & van Schagen, I. (2019). Updated estimates of the relationship between speed and road safety at the aggregate and individual levels. Accid Anal Prev, 123, 114-122. doi:10.1016/j.aap.2018.11.014
14. Fagnant, D. J., & Kockelman, K. (2015). Preparing a nation for autonomous vehicles: opportunities, barriers and policy recommendations. Transportation Research Part A: Policy and Practice, 77, 167-181. doi:10.1016/j.tra.2015.04.003
15. Fraile, R., & Maybank, S. J. (1998). Vehicle Trajectory Approximation and Classification. Proceedings of the British Machine Vision Conference, 832-840.
16. Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4), 193-202. doi:10.1007/BF00344251
17. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning: MIT Press.
18. Goutte, C., & Gaussier, E. (2005). A Probabilistic Interpretation of Precision, Recall and F-Score, with Implication for Evaluation. Paper presented at the Advances in Information Retrieval, Berlin, Heidelberg.
19. Graves, A. (2012). Long Short-Term Memory. In A. Graves (Ed.), Supervised Sequence Labelling with Recurrent Neural Networks (pp. 37-45). Berlin, Heidelberg: Springer Berlin Heidelberg.
20. Graves, A., Liwicki, M., Fernandez, S., Bertolami, R., Bunke, H., & Schmidhuber, J. (2009). A novel connectionist system for unconstrained handwriting recognition. IEEE Trans Pattern Anal Mach Intell, 31(5), 855-868. doi:10.1109/TPAMI.2008.137
21. Gupta, A. Long Short Term Memory Networks Explanation. Retrieved from https://www.geeksforgeeks.org/long-short-term-memory-networks-explanation/
22. Gupta, B. B., & Sheng, Q. Z. (2019). Machine Learning for Computer and Cyber Security: Principle, Algorithms, and Practices: CRC Press.
23. Hankey, J. M., Perez, M. A., & McClafferty, J. A. (2016). Description of the SHRP 2 Naturalistic Database and the Crash, Near-Crash, and Baseline Data Sets.
24. He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep Residual Learning for Image Recognition. Paper presented at the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
25. Hochreiter, S., & Schmidhuber, J. (1997). Long Short-Term Memory. Neural Computation, 9(8), 1735-1780. doi:10.1162/neco.1997.9.8.1735
26. Huang, P. T. (2019). Accident Identification and Collision Probability Estimation for Roadway Traffic: Applications of SVM and Random Forests. National Cheng Kung University, Retrieved from http://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi?o=dnclcdr&s=id=%22107NCKU5119025%22.&searchmode=basic
27. Institute of Transportation. (2014). Annual losses from traffic accidents of approximately NT$475 billion [一年車禍損失約4,750億]. Retrieved from https://www.motc.gov.tw
28. Jain, L. C., & Ogiela, M. R. (2012). Computational intelligence paradigms in advanced pattern classification. [electronic resource]: Springer Berlin Heidelberg.
29. Jo, K., Lee, M., Kim, D., Kim, J., Jang, C., Kim, E., Kim, S., Lee, D., Kim, C., Kim, S., Huh, K., & Sunwoo, M. (2013). Overall Reviews of Autonomous Vehicle A1 - System Architecture and Algorithms. IFAC Proceedings Volumes, 46(10), 114-119. doi:10.3182/20130626-3-au-2035.00052
30. Keras: The Python Deep Learning library. Retrieved from https://keras.io/
31. Khandelwal, R. (2019). Overview of different Optimizers for neural networks. Retrieved from https://medium.com/datadriveninvestor/overview-of-different-optimizers-for-neural-networks-e0ed119440c3
32. Kingma, D., & Ba, J. (2014). Adam: A Method for Stochastic Optimization. International Conference on Learning Representations.
33. Lawrence, S., Giles, C. L., Ah Chung, T., & Back, A. D. (1997). Face recognition: a convolutional neural-network approach. IEEE Transactions on Neural Networks, 8(1), 98-113. doi:10.1109/72.554195
34. Ministry of Transportation and Communications. (2019). Number of registered motor vehicles [機動車輛登記數]. Retrieved from https://stat.motc.gov.tw/mocdb/stmain.jsp?sys=100
35. Moody, J., Bailey, N., & Zhao, J. (2019). Public perceptions of autonomous vehicle safety: An international comparison. Safety Science. doi:10.1016/j.ssci.2019.07.022
36. National Police Agency. (2019). Overview of traffic accidents [事故概況]. Retrieved from https://ba.npa.gov.tw/npa/stmain.jsp?sys=100
37. Nielsen, M. A. (2018). Neural Networks and Deep Learning. In: Determination Press.
38. Pan, S. J., & Yang, Q. (2010). A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering, 22(10), 1345-1359. doi:10.1109/TKDE.2009.191
39. Park, S., Seonwoo, Y., Kim, J., Kim, J., & Oh, A. (2019). Denoising Recurrent Neural Networks for Classifying Crash-Related Events. IEEE Transactions on Intelligent Transportation Systems, 1-12. doi:10.1109/tits.2019.2921722
40. Paul, A., Chauhan, R., Srivastava, R., & Baruah, M. (2016). Advanced Driver Assistance Systems. Paper presented at the SAE Technical Paper Series.
41. Rosebrock, A. (2019). Keras: Multiple Inputs and Mixed Data. Retrieved from https://www.pyimagesearch.com/2019/02/04/keras-multiple-inputs-and-mixed-data/
42. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., & Fei-Fei, L. (2015). ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 115(3), 211-252. doi:10.1007/s11263-015-0816-y
43. SAE. (2014). Automated Driving Levels of Driving Automation are Defined in New SAE International Standard J3016. Society of Automotive Engineers International.
44. Sak, H., Senior, A., & Beaufays, F. (2014). Long Short-Term Memory Based Recurrent Neural Network Architectures for Large Vocabulary Speech Recognition.
45. Shi, W., Alawieh, M. B., Li, X., & Yu, H. (2017). Algorithm and hardware implementation for visual perception system in autonomous vehicle: A survey. Integration, 59, 148-156. doi:10.1016/j.vlsi.2017.07.007
46. Shi, X., Wong, Y. D., Li, M. Z. F., & Chai, C. (2018). Key risk indicators for accident assessment conditioned on pre-crash vehicle trajectory. Accid Anal Prev, 117, 346-356. doi:10.1016/j.aap.2018.05.007
47. Singh, S. (2015). Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey. Washington, DC: National Highway Traffic Safety Administration., 1-2.
48. Sparrow, R., & Howard, M. (2017). When human beings are like drunk robots: Driverless vehicles, ethics, and the future of transport. Transportation Research Part C: Emerging Technologies, 80, 206-215. doi:10.1016/j.trc.2017.04.014
49. Strickland, M., Fainekos, G., & Amor, H. B. (2018). Deep Predictive Models for Collision Risk Assessment in Autonomous Driving. Paper presented at the 2018 IEEE International Conference on Robotics and Automation (ICRA).
50. Tak, S., Woo, S., & Yeo, H. (2016). Study on the framework of hybrid collision warning system using loop detectors and vehicle information. Transportation Research Part C: Emerging Technologies, 73, 202-218. doi:10.1016/j.trc.2016.10.014
51. Thorpe, C., Hebert, M. H., Kanade, T., & Shafer, S. A. (1988). Vision and Navigation for the Carnegie-Mellon Navlab. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 10, no. 3, 362-373.
52. Tiezzi, M., Melacci, S., Maggini, M., & Frosini, A. (2018). Video Surveillance of Highway Traffic Events by Deep Learning Architectures. Paper presented at the Artificial Neural Networks and Machine Learning – ICANN 2018, Cham.
53. Tiwari, S. Activation functions in Neural Networks. Retrieved from https://www.geeksforgeeks.org/activation-functions-neural-networks/
54. Trimble, T. E., Bishop, R., Morgan, J. F., & Blanco, M. (2014). Human factors evaluation of level 2 and level 3 automated driving concepts: Past research, state of automation technology, and emerging system concepts. Washington, DC: National Highway Traffic Safety Administration.
55. Tseng, C.-M. (2012). Social-demographics, driving experience and yearly driving distance in relation to a tour bus driver’s at-fault accident risk. Tourism Management, 33(4), 910-915. doi:10.1016/j.tourman.2011.09.011
56. Wang, Y., & Kato, J. (2017). Collision Risk Rating of Traffic Scene from Dashboard Cameras. Paper presented at the 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA).
Full-Text Use Authorization
  • Authorized for on-campus browsing/printing of the electronic full text, available to the public from 2025-08-03.
  • Authorized for off-campus browsing/printing of the electronic full text, available to the public from 2025-08-03.

