   The electronic thesis has not yet been authorized for public release; for the print copy, please check the library catalog.
(Note: if the thesis cannot be found, or its holdings status shows "closed stacks, not public," it is not in the stacks and cannot be accessed.)
System ID U0026-2608202014090000
Title (Chinese) 使用深度學習結構光於3D物體檢測系統與其系統穩定性及加速
Title (English) 3D Object Inspection System Using Deep Learning-Based Structured Light, and System Stability and Speedup
Institution National Cheng Kung University
Department (Chinese) 資訊工程學系
Department (English) Institute of Computer Science and Information Engineering
Academic year 108 (ROC calendar, i.e. 2019-2020)
Semester 2
Year of publication 109 (ROC calendar, i.e. 2020)
Author (Chinese) 葉家瑋
Author (English) Chia-Wei Yeh
Student ID P76074737
Degree Master's
Language English
Pages 116
Committee Advisor - 連震杰
Advisor - 郭淑美
Committee member - 張大緯
Committee member - 陳洳瑾
Committee member - 凃瀞珽
Keywords (Chinese) 三角測量, 相位移, 結構光, 立體視覺, 深度學習, 3D物體檢測
Keywords (English) Triangulation, Phase-shifting, Structured light, Stereo Vision, Deep Learning, 3D Object Inspection
Subject classification (not specified)
Abstract (Chinese) At present, structured light systems are widely used in fields such as robot vision, industrial measurement, and 3D face recognition. This study uses a structured light stereo vision system to measure objects and generate a point cloud carrying three-dimensional X, Y, Z information, which is then used for 3D object inspection. Before the point cloud can be generated, the intrinsic calibration parameters of each of the two cameras must be established, and the extrinsic parameters between the two cameras found through stereo calibration. The phase-shifting fringe patterns encoded by the structured light establish the correspondence between the two camera image planes; the calibration parameters together with triangulation then recover the X, Y, Z coordinates of the measured object's surface in the world coordinate system. Because the structured light algorithms are very time-consuming, this study also accelerates the whole system and improves the stability of the point cloud reconstruction. Finally, a deep learning model is introduced into the system to replace the one-to-one disparity search. Experimental results show that the proposed method achieves an XY-axis accuracy of 0.028 mm and a Z-axis accuracy of 0.01 mm, that the overall execution time is reduced from 33 s to 7 s, and that with the deep learning approach the Z-axis accuracy reaches 1 mm.
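The triangulation step described in the abstract can be sketched as follows for a rectified stereo pair, where depth follows directly from the disparity between matched pixels, Z = f·B/d. This is a minimal illustration under common conventions, not the thesis's implementation; the focal length, baseline, and disparity values below are hypothetical.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Depth (Z) from disparity for a rectified stereo pair: Z = f * B / d.

    disparity_px: disparity map in pixels (non-positive values treated as invalid)
    focal_px:     focal length in pixels (from intrinsic calibration)
    baseline_mm:  distance between the two camera centers in mm
    """
    d = np.asarray(disparity_px, dtype=np.float64)
    z = np.full_like(d, np.nan)   # invalid pixels stay NaN
    valid = d > 0
    z[valid] = focal_px * baseline_mm / d[valid]
    return z

# Hypothetical numbers: f = 2400 px, baseline = 120 mm, disparities 24 px and 0 (invalid)
z = depth_from_disparity(np.array([[24.0, 0.0]]), 2400.0, 120.0)
# First pixel: 2400 * 120 / 24 = 12000 mm; second pixel stays NaN
```

The same relation explains why sub-pixel disparity estimation (Section 4.3 of the thesis) matters: at large depths, a fraction of a pixel of disparity error translates into a large depth error.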
Abstract (English) At present, structured light systems are widely used in robot vision, industrial measurement, and 3D face recognition. This research uses a structured light stereo vision system to measure objects and generate a point cloud with three-dimensional X, Y, Z information, which is then used for 3D object inspection. Before the point cloud can be generated, the calibration parameters of the two cameras must be established: each camera's intrinsic parameters are found first, and the extrinsic parameters between the two cameras are then obtained through stereo calibration. The correspondence between the two camera image planes is found through the phase-shifting patterns projected by the structured light, after which the calibration parameters and triangulation yield the three-dimensional coordinates of the object's surface in the world coordinate system. Since the structured light algorithms are very time-consuming, this research also accelerates the whole system and improves the stability of the point cloud results. Finally, a deep learning model is integrated into the system to replace the one-to-one disparity search. Experimental results show that the proposed method achieves an XY-axis accuracy of 0.028 mm and a Z-axis accuracy of 0.01 mm, and that the overall execution time is reduced from 33 s to 7 s; with the deep learning approach, the Z-axis accuracy reaches 1 mm.
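The phase-shifting demodulation at the core of this pipeline can be sketched as below. It assumes N fringe patterns of the form I_n = A + B·cos(φ − 2πn/N) projected in sequence, a common convention (the thesis's exact pattern set and multi-frequency heterodyne unwrapping are not reproduced here); the wrapped relative phase is then recovered per pixel with an arctangent.

```python
import numpy as np

def relative_phase(images):
    """Recover the wrapped (relative) phase from N phase-shifted fringe images.

    images: array of shape (N, H, W), the n-th image modeled as
            I_n = A + B * cos(phi - 2*pi*n/N), with N >= 3
    Returns phi wrapped to (-pi, pi].
    """
    imgs = np.asarray(images, dtype=np.float64)
    n = imgs.shape[0]
    shifts = 2.0 * np.pi * np.arange(n) / n           # the N phase shifts delta_n
    num = np.tensordot(np.sin(shifts), imgs, axes=1)  # sum_n I_n * sin(delta_n)
    den = np.tensordot(np.cos(shifts), imgs, axes=1)  # sum_n I_n * cos(delta_n)
    # The ambient term A and modulation B cancel out of the ratio num/den.
    return np.arctan2(num, den)

# Synthetic check with a known phase field and 4-step patterns
phi_true = np.linspace(-3.0, 3.0, 64).reshape(1, 64)
patterns = np.stack(
    [5.0 + 2.0 * np.cos(phi_true - 2.0 * np.pi * k / 4) for k in range(4)]
)
phi_est = relative_phase(patterns)  # recovers phi_true up to float precision
```

For N = 4 this reduces to the classic four-step formula φ = atan2(I₁ − I₃, I₀ − I₂). Because the arctangent only gives phase modulo 2π, a measurement needs the unwrapping stage (e.g. the heterodyne/beat maps of Chapter 3) to obtain an absolute phase before matching.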
Table of contents ABSTRACT (CHINESE) I
ABSTRACT II
ACKNOWLEDGEMENTS III
TABLE OF CONTENTS V
LIST OF TABLES VII
LIST OF FIGURES VIII
CHAPTER 1 INTRODUCTION 1
1.1 MOTIVATION 1
1.2 RELATED WORKS 3
1.3 ORGANIZATION OF THESIS 10
1.4 CONTRIBUTIONS OF THESIS 14
CHAPTER 2 SYSTEM SETUP AND FUNCTION SPECIFICATION 15
2.1 SYSTEM SETUP 15
2.2 FUNCTION AND HARDWARE SPECIFICATION 16
CHAPTER 3 3D OBJECT INSPECTION USING STEREO-BASED STRUCTURED LIGHT BY MULTI-FREQUENCY PHASE-SHIFTING PATTERNS 21
3.1 PREPROCESSING: STEREO CAMERA CALIBRATION AND RECTIFICATION, AND PROJECTION PATTERN CREATION 25
3.2 RELATIVE PHASE MAP CREATION BY PHASE DEMODULATION AND BEAT MAP CREATION BY HETERODYNE PRINCIPLE 34
3.3 ABSOLUTE PHASE MAP CREATION BY PHASE UNWRAPPING PROCESS 41
3.4 DISPARITY MAP CREATION BY PHASE MATCHING 47
3.5 3D OBJECT RECONSTRUCTION AND INSPECTION 60
CHAPTER 4 SYSTEM STABILITY AND SPEEDUP 63
4.1 PATTERN PROJECT AND MULTI-FREQUENCY DECODING PARALLEL BY OPENMP AND SCHEDULING 63
4.2 RELATIVE PHASE MAP SPEEDUP USING LUT 65
4.3 POINT CLOUD SMOOTHING USING SUB-PIXEL ESTIMATION 67
4.4 OBLIQUE POINT CLOUD CORRECTION 68
4.5 EXPERIMENTAL RESULTS 71
CHAPTER 5 PANEL ALIGNMENT SUBSYSTEM 77
5.1 Z-AXIS WARPAGE INSPECTION 79
5.2 X-Y AXIS CORRECTION AND OFFSET ANGLE 81
5.3 EXPERIMENTAL RESULTS 88
CHAPTER 6 DISPARITY MAP CREATION USING GWCNET 91
6.1 EXTRACT LEFT AND RIGHT IMAGE FEATURE 95
6.2 4D COST VOLUME CONSTRUCTION 97
6.3 3D CONVOLUTION AGGREGATION AND DISPARITY REGRESSION 103
6.4 EXPERIMENTAL RESULTS 106
CHAPTER 7 CONCLUSION AND FUTURE WORKS 111
REFERENCE 113
References [1] G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library, O'Reilly Media, 2008.
[2] A. Fusiello, E. Trucco, and A. Verri, "A Compact Algorithm for Rectification of Stereo Pairs," Machine Vision and Applications, Vol. 12, No. 1, pp. 16-22, 2000.
[3] M. Gupta, A. Agrawal, A. Veeraraghavan, and S.G. Narasimhan, "Structured Light 3D Scanning in the Presence of Global Illumination," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
[4] K. Herakleous and C. Poullis, "3DUNDERWORLD-SLS: An Open-Source Structured-Light Scanning System for Rapid Geometry Acquisition," arXiv preprint arXiv:1406.6595, 2014.
[5] D. Lanman and G. Taubin, "Build Your Own 3D Scanner: 3D Photography for Beginners," ACM SIGGRAPH 2009 Courses, 2009.
[6] D. Li, H. Zhao, and H. Jiang, "Fast Phase-Based Stereo Matching Method for 3D Shape Measurement," International Symposium on Optomechatronic Technologies (ISOT), 2010.
[7] P.F. Luo, Y.J. Chao, and M.A. Sutton, "Application of Stereo Vision to Three-Dimensional Deformation Analyses in Fracture Experiments," Optical Engineering, Vol. 33, No. 3, pp. 981-991, 1994.
[8] J.S. Massa, G.S. Buller, A.C. Walker, S. Cova, M. Umasuthan, and A.M. Wallace, "Time-of-Flight Optical Ranging System Based on Time-Correlated Single-Photon Counting," Applied Optics, Vol. 37, No. 31, pp. 7298-7304, 1998.
[9] D. Moreno, K. Son, and G. Taubin, "Embedded Phase Shifting: Robust Phase Shifting with Embedded Signals," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[10] C. Reich, R. Ritter, and J. Thesing, "White Light Heterodyne Principle for 3D-Measurement," Sensors, Sensor Systems, and Sensor Data Processing, Proc. SPIE, Vol. 3100, 1997.
[11] J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, "A State of the Art in Structured Light Patterns for Surface Profilometry," Pattern Recognition, Vol. 43, No. 8, pp. 2666-2680, 2010.
[12] J. Salvi, J. Pages, and J. Batlle, "Pattern Codification Strategies in Structured Light Systems," Pattern Recognition, Vol. 37, No. 4, pp. 827-849, 2004.
[13] S.M. Seitz and C.R. Dyer, "View Morphing," Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), 1996.
[14] S. Zhang, "Digital Multiple Wavelength Phase Shifting Algorithm," Proc. SPIE, 2009.
[15] Z. Zhang, "A Flexible New Technique for Camera Calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 11, pp. 1330-1334, 2000.
[16] F. Li et al., "Depth Acquisition with the Combination of Structured Light and Deep Learning Stereo Matching," Signal Processing: Image Communication, Vol. 75, pp. 111-117, 2019.
[17] V.V. Kniaz, "FringeMatchNet: Effective Stereo Matching Onboard of Mobile Structured Light 3D Scanner," Optics for Arts, Architecture, and Archaeology VII, Proc. SPIE, Vol. 11058, pp. 152-160, 2019.
[18] Q. Du, R. Liu, B. Guan, Y. Pan, and S. Sun, "Stereo-Matching Network for Structured Light," IEEE Signal Processing Letters, Vol. 26, No. 1, pp. 164-168, 2019.
[19] G. Yang, J. Manela, M. Happold, and D. Ramanan, "Hierarchical Deep Stereo Matching on High-Resolution Images," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5515-5524, 2019.
[20] F. Zhang, V. Prisacariu, R. Yang, and P.H. Torr, "GA-Net: Guided Aggregation Net for End-to-End Stereo Matching," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 185-194, 2019.
[21] K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[22] A. Geiger, P. Lenz, and R. Urtasun, "Are We Ready for Autonomous Driving? The KITTI Vision Benchmark Suite," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
[23] J. Zbontar and Y. LeCun, "Stereo Matching by Training a Convolutional Neural Network to Compare Image Patches," Journal of Machine Learning Research, Vol. 17, pp. 1-32, 2016.
[24] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation," International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 234-241, Springer, 2015.
[25] J. Pang, W. Sun, J.S. Ren, C. Yang, and Q. Yan, "Cascade Residual Learning: A Two-Stage Convolutional Neural Network for Stereo Matching," IEEE International Conference on Computer Vision (ICCV) Workshops, 2017.
[26] A. Newell, K. Yang, and J. Deng, "Stacked Hourglass Networks for Human Pose Estimation," European Conference on Computer Vision (ECCV), pp. 483-499, Springer, 2016.
[27] N. Mayer, E. Ilg, P. Hausser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox, "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[28] J.-R. Chang and Y.-S. Chen, "Pyramid Stereo Matching Network," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5410-5418, 2018.
[29] A. Kendall, H. Martirosyan, S. Dasgupta, P. Henry, R. Kennedy, A. Bachrach, and A. Bry, "End-to-End Learning of Geometry and Context for Deep Stereo Regression," IEEE International Conference on Computer Vision (ICCV), pp. 66-75, 2017.
[30] Y. Wu and K. He, "Group Normalization," European Conference on Computer Vision (ECCV), pp. 3-19, 2018.
[31] C. Godard, O. Mac Aodha, and G.J. Brostow, "Unsupervised Monocular Depth Estimation with Left-Right Consistency," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[32] X. Guo, K. Yang, W. Yang, X. Wang, and H. Li, "Group-Wise Correlation Stereo Network," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
Full-text usage rights
  • On-campus browsing/printing of the electronic full text authorized, publicly available from 2025-07-24.
  • Off-campus browsing/printing of the electronic full text authorized, publicly available from 2025-07-24.

