System ID: U0026-2805201910440800
Title (Chinese): 深度學習及近景攝影測量於挖土機監測之應用
Title (English): Using Deep Learning and Close-range Photogrammetry for Excavator Monitoring
University: National Cheng Kung University (成功大學)
Department (Chinese): 測量及空間資訊學系
Department (English): Department of Geomatics
Academic Year: 107
Semester: 2
Publication Year: 108 (ROC calendar)
Graduate Student (Chinese): 陳品旭
Graduate Student (English): Pin-Xu Chen
Student ID: P66064081
Degree: Master's
Language: Chinese
Number of Pages: 77
Oral Defense Committee: Advisor - 饒見有
Committee Member - 林昭宏
Committee Member - 趙鍵哲
Keywords (Chinese): 疏濬監測, 物件偵測, 物件追蹤
Keywords (English): Dredging monitoring, Object detection, Object tracking
Subject Classification:
Abstract (Chinese):
Influenced by terrain and geological conditions, rivers in Taiwan are characterized by steep gradients and rapid flows, short channels, intense erosion in the upper reaches, and rapid deposition in the lower reaches. To keep river courses stable and to protect lives and property along the channels, river dredging is one of the key engineering tasks of river management, and the sand and gravel extracted during dredging can also serve many purposes, such as raw material for construction. However, even though the dredging management mechanisms established by government agencies in recent years have effectively reduced the likelihood of illegal activities, some unscrupulous contractors still seek improper profit by over-excavating and backfilling with substitute material, or by excavating beyond the permitted boundary to obtain better earth and gravel; moreover, once tipped off, the offending contractors can immediately restore the site to its original appearance, which makes enforcement and the collection of evidence difficult.
Therefore, this study proposes using low-cost cameras combined with a dual-antenna GNSS-RTK receiver for real-time monitoring of excavator dredging, covering both the geographic location of the excavation and the excavation depth relative to the ground. In the workflow, images captured by the pre-calibrated cameras are processed by the convolutional-neural-network-based You Only Look Once (YOLO) detector to locate the bounding box of the bucket in the image, and the Kernelized Correlation Filter (KCF) then continuously tracks the position of the bucket pivot. Through photogrammetric techniques, the 3D coordinates of the bucket pivot are computed and transformed into absolute geographic coordinates using pre-calibrated coordinate transformation parameters together with the dual-antenna GNSS-RTK measurements; finally, the geographic coordinates of the excavation point are obtained and the excavation depth relative to the ground is derived.
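The detection-and-tracking step described above can be illustrated with a short sketch that is not the thesis implementation: a YOLO network loaded through OpenCV's DNN module proposes a bucket bounding box, a KCF tracker then follows it frame by frame, and detection is re-run whenever tracking fails. The model files (bucket_yolov3.cfg/.weights), the single-class setup, the video path, and the thresholds are all illustrative assumptions.

```python
# Minimal sketch (not the thesis code): YOLO detection of the bucket via
# OpenCV's DNN module, followed by KCF tracking with re-detection on failure.
import cv2

net = cv2.dnn.readNetFromDarknet("bucket_yolov3.cfg", "bucket_yolov3.weights")
out_names = net.getUnconnectedOutLayersNames()

def detect_bucket(frame, conf_thresh=0.5):
    """Return the highest-confidence bucket box as (x, y, w, h), or None."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    best, best_score = None, conf_thresh
    for output in net.forward(out_names):
        for det in output:                      # det = [cx, cy, bw, bh, objectness, class scores...]
            score = det[4] * det[5:].max()      # a single "bucket" class is assumed here
            if score > best_score:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                best, best_score = (int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)), score
    return best

cap = cv2.VideoCapture("excavator.mp4")          # hypothetical input video
tracker = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if tracker is None:                          # no locked target: run detection
        box = detect_bucket(frame)
        if box is not None:
            tracker = cv2.TrackerKCF_create()    # requires opencv-contrib-python
            tracker.init(frame, box)
    else:
        ok, box = tracker.update(frame)          # KCF tracking on subsequent frames
        if not ok:
            tracker = None                       # lost target: trigger re-detection
    # "box" (when available) marks the bucket region; in the actual pipeline the
    # pivot position would be derived from this region before the 3D computation.
```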
In the accuracy assessment, for the excavation-depth differences between epochs, the discrepancies between the proposed method and high-precision GNSS-RTK measurements were all within the 50 cm tolerance. In the continuous excavation-depth estimation test, the proposed method maintained stable long-term tracking and real-time depth computation. In the comparison with the commercial software Trimble HYDROpro DredgePack, the maximum discrepancy in the estimated maximum excavation depth was only 12 cm and the minimum discrepancy was 2 cm, demonstrating the high accuracy achieved with low-cost equipment and verifying the feasibility of the method for actual excavator monitoring operations.
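The depth-difference check just described can be mimicked with a minimal sketch: depth changes between consecutive epochs estimated by such a pipeline are compared against reference GNSS-RTK values and tested against the 50 cm tolerance. All numerical values below are invented for demonstration and are not the thesis data.

```python
# Hypothetical depth-difference evaluation: compare estimated depth changes
# between epochs with reference GNSS-RTK values against a 50 cm tolerance.
import numpy as np

TOLERANCE_M = 0.50                     # allowed error on depth differences (50 cm)

# Depths (m, positive downward) at successive epochs -- illustrative values only.
est_depth = np.array([0.00, 0.42, 0.95, 1.38, 1.80])   # from the camera/GNSS pipeline
ref_depth = np.array([0.00, 0.45, 0.90, 1.42, 1.75])   # from a high-precision RTK survey

est_diff = np.diff(est_depth)          # depth change between consecutive epochs
ref_diff = np.diff(ref_depth)
error = np.abs(est_diff - ref_diff)    # discrepancy between the two methods per interval

for i, e in enumerate(error, start=1):
    status = "OK" if e <= TOLERANCE_M else "EXCEEDS TOLERANCE"
    print(f"epoch {i-1} -> {i}: discrepancy {e * 100:.1f} cm ({status})")
```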
Abstract (English):
In Taiwan, dredging is one of the major activities in river management. Earth, gravel, and sand obtained during dredging are valuable materials for construction and other purposes. However, even though the government has defined regulations for river dredging, some unscrupulous companies still try to excavate beyond the allowed depth or area to obtain better materials for profit. In this study, in order to monitor the depth and geographic location of excavation during dredging in real time, calibrated low-cost cameras and a dual-antenna GNSS-RTK receiver are installed on the excavator. During excavation, the image coordinates of the excavator bucket are detected and tracked using You Only Look Once (YOLO) and the Kernelized Correlation Filter (KCF) on the video recorded by the camera. Based on photogrammetric techniques and GNSS positioning, the geographic coordinates and depth of the excavation can be computed. In the performance evaluation, the accuracy of the depth difference computed between two epochs is better than the required accuracy. Furthermore, the method also shows the capability of long-term monitoring and real-time computation in the continuous excavation-depth estimation experiment. Compared with a commercial product, Trimble HYDROpro DredgePack, the maximum depths estimated during continuous excavation are similar between the two methods: the maximum difference is 12 cm and the minimum difference is only 2 cm, which demonstrates not only the high performance of the system but also its feasibility for real applications.
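The geometric step from an image-based bucket-pivot position to map coordinates and a depth value can be sketched under strong simplifications: the dual-antenna GNSS baseline provides a heading, the RTK position of one antenna provides the translation, and the platform is assumed level so only the heading rotation is applied. The lever arm, axis alignment, and all coordinates below are hypothetical; the thesis instead uses fully calibrated camera-to-body and body-to-map transformation parameters.

```python
# Simplified sketch of turning a camera-frame bucket-pivot position into map
# coordinates and an excavation depth. Assumes a level platform so only the
# GNSS-derived heading is applied; all numeric values are illustrative.
import numpy as np

def heading_from_baseline(ant_front, ant_rear):
    """Heading (rad) of the body x-axis from the two GNSS antenna positions (E, N, U)."""
    de, dn = ant_front[0] - ant_rear[0], ant_front[1] - ant_rear[1]
    return np.arctan2(de, dn)          # azimuth measured from north toward east

def body_to_map(p_body, heading, origin_map):
    """Rotate a body-frame point (x fwd, y right, z up) by heading, translate to map (E, N, U)."""
    c, s = np.cos(heading), np.sin(heading)
    R = np.array([[  s,   c, 0.0],     # body x/y -> map E
                  [  c,  -s, 0.0],     # body x/y -> map N
                  [0.0, 0.0, 1.0]])    # up stays up (level-platform assumption)
    return R @ p_body + origin_map

# Illustrative inputs.
ant_rear  = np.array([169000.00, 2543000.00, 12.00])    # reference antenna (E, N, U), m
ant_front = np.array([169001.20, 2543001.60, 12.00])
lever_arm = np.array([0.80, 0.00, -1.50])                # camera origin in the body frame, m
p_cam     = np.array([5.20, 0.30, -3.10])                # bucket pivot in the camera frame, m
p_body    = p_cam + lever_arm                            # assuming aligned camera/body axes

heading = heading_from_baseline(ant_front, ant_rear)
p_map = body_to_map(p_body, heading, ant_rear)

ground_elev = 11.20                                      # local ground elevation, m
depth = ground_elev - p_map[2]                           # excavation depth below ground, m
print(f"excavation point E/N: {p_map[0]:.2f}, {p_map[1]:.2f}  depth: {depth:.2f} m")
```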
Table of Contents:
Chapter 1. Introduction 1
1.1. Research Background 1
1.2. Research Motivation and Objectives 1
Chapter 2. Literature Review 4
2.1. History of the River Dredging System 4
2.2. River Dredging Monitoring and Management 6
2.3. Automated Excavator Monitoring Techniques 7
2.4. Deep Learning and Close-range Photogrammetry Techniques 10
Chapter 3. Experimental Equipment and Test Area 12
3.1. Test Cameras 12
3.2. Dual-antenna GNSS Receiver 13
3.3. Test Area and Data 14
Chapter 4. Methodology 17
4.1. Camera Calibration 18
4.1.1. Interior Orientation Calibration of the Stereolabs ZED Stereo Camera 18
4.1.2. Interior Orientation Calibration of the VACRON AVM-S231B Vehicle Camera 19
4.1.3. Relative Orientation Calibration of the Stereolabs ZED Stereo Camera 20
4.2. Coordinate Transformation 20
4.2.1. Transformation between the Body Frame and the Map Frame 21
4.2.2. Transformation between the Camera Frame and the Body Frame 22
4.3. Object Detection 24
4.3.1. You Only Look Once (YOLO) 24
4.3.2. Model Architecture 26
4.3.3. Model Training 28
4.3.4. Model Evaluation 29
4.4. Object Tracking 31
4.4.1. Kernelized Correlation Filter (KCF) 32
4.4.2. Selection of the Tracking Target 32
4.4.3. Re-detection Strategy 33
4.5. 3D Coordinate Computation 34
4.5.1. Ray-Plane Intersection 34
4.5.2. Space Intersection 37
4.6. Geographic Coordinates and Depth of the Excavation Point 39
4.6.1. Geographic Coordinates of the Excavation Point 39
4.6.2. Excavation Depth 40
4.7. Evaluation Methods 41
4.7.1. Accuracy and Stability Analysis of Excavation-Depth Difference Measurement 41
4.7.2. Continuous Excavation-Depth Measurement Analysis 42
4.7.3. Computational Performance Evaluation 42
Chapter 5. Results and Analysis 43
5.1. Camera Calibration Results 43
5.1.1. Interior Orientation Calibration Results of the Stereolabs ZED Stereo Camera 43
5.1.2. Interior Orientation Calibration Results of the VACRON AVM-S231B Vehicle Camera 46
5.1.3. Relative Orientation Calibration of the Stereolabs ZED Stereo Camera 48
5.2. Coordinate Transformation Results 48
5.3. Object Detection Results 51
5.3.1. Object Detection Results for the Stereolabs ZED 51
5.3.2. Object Detection Results for the AVM-S231B 54
5.4. Computation Results for the Plane Equation of the Excavator Arm 57
5.5. Accuracy and Stability Analysis of Excavation-Depth Difference Measurement 58
5.6. Continuous Excavation-Depth Measurement Analysis 61
5.6.1. Continuous Excavation-Depth Measurement with the ZED 62
5.6.2. Continuous Depth Measurement with the AVM-S231B 66
5.6.3. Comparison with Trimble HYDROpro DredgePack 68
5.7. Computational Performance Evaluation 70
Chapter 6. Conclusions and Future Work 71
6.1. Camera Calibration 71
6.2. Coordinate Transformation 71
6.3. Object Detection 72
6.4. Plane Equation of the Excavator Arm 72
6.5. Accuracy and Stability Analysis of Excavation-Depth Difference Measurement 72
6.6. Continuous Depth Measurement Analysis 72
6.7. Computational Performance Evaluation 73
6.8. Future Work 73
References 74
References:
Baker, S., Matthews, I., 2004. Lucas-Kanade 20 years on: A unifying framework. International Journal of Computer Vision 56, 221-255.
Brunelli, R., 2009. Template matching techniques in computer vision: theory and practice. John Wiley & Sons.
Chi, S., Caldas, C.H., 2011. Automated Object Identification Using Optical Video Cameras on Construction Sites. Computer-Aided Civil and Infrastructure Engineering 26, 368-380.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L., 2009. Imagenet: A large-scale hierarchical image database, 2009 IEEE conference on computer vision and pattern recognition. IEEE, pp. 248-255.
Deng, L., Yu, D., 2013. Deep learning: Methods and applications. Foundations and Trends in Signal Processing 7, 197-387.
Everingham, M., Eslami, S.M.A., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A., 2014. The Pascal Visual Object Classes Challenge: A Retrospective. International Journal of Computer Vision 111, 98-136.
Felzenszwalb, P.F., Girshick, R.B., McAllester, D., Ramanan, D., 2010. Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence 32, 1627-1645.
Feng, C., Dong, S., Lundeen, K.M., Xiao, Y., Kamat, V.R., 2015. Vision-based articulated machine pose estimation for excavation monitoring and guidance, 32nd International Symposium on Automation and Robotics in Construction and Mining: Connected to the Future, ISARC 2015, June 15, 2015 - June 18, 2015. International Association for Automation and Robotics in Construction (IAARC), Oulu, Finland.
Feng, C., Kamat, V.R., Cai, H., 2018. Camera marker networks for articulated machine pose estimation. Automation in Construction 96, 148-160.
Fraser, C.S., 1997. Digital camera self-calibration. ISPRS Journal of Photogrammetry and Remote Sensing 52, 149-159.
Goodfellow, I., Bengio, Y., Courville, A., 2016. Deep learning. MIT press.
He, K., Girshick, R.B., Dollár, P., 2018. Rethinking ImageNet Pre-training. arXiv preprint arXiv:1811.08883.
He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition, 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, June 26, 2016 - July 1, 2016. IEEE Computer Society, Las Vegas, NV, United states, pp. 770-778.
Henriques, J.F., Caseiro, R., Martins, P., Batista, J., 2015. High-speed tracking with kernelized correlation filters. IEEE Transactions on Pattern Analysis and Machine Intelligence 37, 583-596.
Krizhevsky, A., Sutskever, I., Hinton, G.E., 2012. ImageNet classification with deep convolutional neural networks, 26th Annual Conference on Neural Information Processing Systems 2012, NIPS 2012, December 3, 2012 - December 6, 2012. Neural information processing systems foundation, Lake Tahoe, NV, United States, pp. 1097-1105.
LeCun, Y., Bengio, Y., Hinton, G., 2015. Deep learning. Nature 521, 436-444.
Leica, 2019. Excavator, Machine Control Systems. https://leica-geosystems.com/products/machine-control-systems/excavator
Lin, T.-Y., Dollar, P., Girshick, R., He, K., Hariharan, B., Belongie, S., 2017. Feature pyramid networks for object detection, 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, July 21, 2017 - July 26, 2017. Institute of Electrical and Electronics Engineers Inc., Honolulu, HI, United States, pp. 936-944.
Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., Zitnick, C.L., 2014. Microsoft COCO: Common objects in context, 13th European Conference on Computer Vision, ECCV 2014, September 6, 2014 - September 12, 2014, PART 5 ed. Springer Verlag, Zurich, Switzerland, pp. 740-755.
Lucas, B.D., Kanade, T., 1981. An iterative image registration technique with an application to stereo vision, Proceedings of the 7th International Joint Conference on Artificial Intelligence, Vancouver, BC, Canada, pp. 674-679.
Luhmann, T., Robson, S., Kyle, S., Harley, I., 2007. Close range photogrammetry. Wiley.
Lundeen, K.M., Dong, S., Fredricks, N., Akula, M., Seo, J., Kamat, V.R., 2016. Optical marker-based end effector pose estimation for articulated excavators. Automation in Construction 65, 51-64.
Martinez-Sanchez, H., Arias, P., Caamano, J.C., 2016. Close range photogrammetry: Fundamentals, principles and applications in structures. CRC Press, pp. 35-57.
Pratt, L.Y., 1992. Discriminability-Based Transfer between Neural Networks, NIPS.
Rau, J.-Y., Yeh, P.-C., 2012. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration. Sensors 12, 11271-11293.
Redmon, J., Divvala, S., Girshick, R., Farhadi, A., 2016. You only look once: Unified, real-time object detection, 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, June 26, 2016 - July 1, 2016. IEEE Computer Society, Las Vegas, NV, United States, pp. 779-788.
Redmon, J., Farhadi, A., 2017. YOLO9000: Better, faster, stronger, 30th IEEE Conference on Computer Vision and Pattern Recognition, July 21, 2017 - July 26, 2017. Institute of Electrical and Electronics Engineers Inc., Honolulu, HI, United States, pp. 6517-6525.
Redmon, J., Farhadi, A., 2018. YOLOv3: An Incremental Improvement. arXiv preprint arXiv:1804.02767.
Rezazadeh Azar, E., Feng, C., Kamat, V.R., 2015. Feasibility of in-plane articulation monitoring of excavator arm using planar marker tracking. Journal of Information Technology in Construction 20, 213-229.
Rezazadeh Azar, E., Kamat, V.R., 2017. Earthmoving equipment automation: A review of technical advances and future outlook. Journal of Information Technology in Construction 22, 247-265.
Rezazadeh Azar, E., McCabe, B., 2012. Part based model and spatial-temporal reasoning to recognize hydraulic excavators in construction images and videos. Automation in Construction 24, 194-202.
Schmidhuber, J., 2015. Deep Learning in neural networks: An overview. Neural Networks 61, 85-117.
Soltani, M.M., Zhu, Z., Hammad, A., 2016. Towards Part-Based Construction Equipment Pose Estimation Using Synthetic Images, Construction Research Congress 2016: Old and New Construction Technologies Converge in Historic San Juan, CRC 2016, May 31, 2016 - June 2, 2016. American Society of Civil Engineers, San Juan, Puerto rico, pp. 980-989.
Soltani, M.M., Zhu, Z., Hammad, A., 2017. Skeleton estimation of excavator by detecting its parts. Automation in Construction 82, 1-15.
Soltani, M.M., Zhu, Z., Hammad, A., 2018. Framework for Location Data Fusion and Pose Estimation of Excavators Using Stereo Vision. Journal of Computing in Civil Engineering 32.
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A., 2015. Going deeper with convolutions, IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, June 7, 2015 - June 12, 2015. IEEE Computer Society, Boston, MA, United States, pp. 1-9.
Trimble, 2019. Weighing and Monitoring. https://construction.trimble.com/products-and-solutions/weighing-and-monitoring
van de Weijer, J., Schmid, C., Verbeek, J., Larlus, D., 2009. Learning color names for real-world applications. IEEE Transactions on Image Processing 18, 1512-1523.
Wolf, P.R., Dewitt, B.A., 2000. Elements of photogrammetry: with applications in GIS. McGraw-Hill New York.
Xu, J., Yoon, H.-S., 2019. Vision-based estimation of excavator manipulator pose for automated grading control. Automation in Construction 98, 122-131.
Yang, H.-C., Deng, K.-Z., Guo, G.-L., 2006. Monitoring technique for deformation measurement of similar material model with digital close-range photogrammetry. Meitan Xuebao/Journal of the China Coal Society 31, 292-295.
Yang, Y., Ramanan, D., 2011. Articulated pose estimation with flexible mixtures-of-parts, 2011 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2011. IEEE Computer Society, pp. 1385-1392.
Yuan, C., Li, S., Cai, H., 2016. Vision-based excavator detection and tracking using hybrid kinematic shapes and key nodes. Journal of Computing in Civil Engineering 31.
Zhao, Z.-Q., Zheng, P., Xu, S.-T., Wu, X., 2019. Object Detection With Deep Learning: A Review.
Zhuang, H., 1995. Self-calibration approach to extrinsic parameter estimation of stereo cameras. Robotics and Autonomous Systems 15, 189-197.
Water Resources Agency, Ministry of Economic Affairs, 2009. Strengthening and implementing satellite remote sensing techniques for monitoring changes in the river areas of central-government-administered rivers (including the Tamsui River and Huang Creek systems). (in Chinese)
Water Resources Agency, Ministry of Economic Affairs, 2011. Integrated platform for river monitoring and management. (in Chinese)
Water Resources Agency, Ministry of Economic Affairs, 2019a. Introduction to remote river monitoring. http://iriver.wra.gov.tw/monitor_map.aspx (in Chinese)
Water Resources Agency, Ministry of Economic Affairs, 2019b. History of the dredging system. http://iriver.wra.gov.tw/dredge_mileage.aspx (in Chinese)
Full-Text Access Rights
  • On-campus browsing/printing of the electronic full text is authorized, effective 2022-06-30.
  • Off-campus browsing/printing of the electronic full text is authorized, effective 2022-06-30.

