

   The electronic thesis has not yet been released for public access; for the print copy, please check the library catalog.
(Note: if no record is found, or the holding status shows "closed stacks, not open to the public," the thesis is not in the stacks and cannot be accessed.)
System ID: U0026-2608202013381000
Title (Chinese): 使用深度學習結構光於面板3D對位系統及系統穩定性和加速
Title (English): 3D Panel Alignment System Using Deep Learning-Based Structured Light, and System Stability and Speedup
University: National Cheng Kung University (成功大學)
Department (Chinese): 資訊工程學系
Department (English): Institute of Computer Science and Information Engineering
Academic year: 108 (2019-2020)
Semester: 2
Year of publication: 109 (2020)
Author (Chinese): 葉勁毅
Author (English): Jin-Yi Ye
Student ID: P76074305
Degree: Master
Language: English
Pages: 107
Committee: Advisor - 連震杰
Advisor - 郭淑美
Committee member - 梁勝富
Committee member - 陳洳瑾
Committee member - 凃瀞珽
Keywords (Chinese): 三維測量, 多頻相位移, 結構光, 立體視覺, 深度學習
Keywords (English): 3D measurement, multi-frequency phase shift, structured light, stereo vision, deep learning
Subject classification:
Abstract (Chinese): Three-dimensional reconstruction is widely used in industrial inspection to automate production lines. In this study, a stereo-vision architecture combined with structured light is used to reconstruct the depth information of an object's surface for industrial-inspection computations. Stereo vision requires two cameras; after calibration, the left and right images are rectified into the same image coordinates. A projector projects multi-frequency phase-shift fringe patterns for encoding; through the heterodyne principle and phase unwrapping, a left-right correspondence table is decoded and used as the reference for matching points between the left and right images. Once the correspondences are confirmed, depth information is recovered by triangulation. With the 3D information, image-processing algorithms then perform depth inspection, angle measurement, and offset-correction computations on the object under test. The two cameras are connected by a synchronization line for synchronized capture, and the computation is accelerated with multithreading and pre-built look-up tables to meet production-line timing requirements. Experiments show that the reconstructed point cloud reaches an accuracy of 0.09 mm on the X-Y axes and 0.01 mm on the Z axis, with a reconstruction time of about 7 seconds. In the second half of the study, correspondence search and 3D reconstruction are performed with a deep-learning method; compared with the multi-frequency phase-shift method, the reconstruction time improves to about 4.5 seconds, but the Z-axis accuracy only reaches 1 mm.
Abstract (English): Three-dimensional reconstruction technology is often used in industrial inspection to automate production lines. In this study, stereo vision combined with a structured-light architecture is used to reconstruct the depth information of the object surface for industrial-inspection calculations.
Stereo vision requires two cameras. After calibration, the left and right images are rectified into the same image coordinates. The projector projects multi-frequency phase-shift patterns for encoding; through the heterodyne principle and phase unwrapping, a correspondence table between the left and right images is decoded from the projected patterns and serves as the reference for searching corresponding points. Once the correspondence is confirmed, the depth information is recovered by triangulation. After the three-dimensional information is obtained, image processing and other algorithms perform depth inspection, angle measurement, and offset-correction calculations on the object under test. The two cameras are connected by a synchronization line to achieve synchronized acquisition, and the computation is accelerated with multithreading and pre-built look-up tables to meet the time requirements of the production line. In experimental tests, the reconstructed point cloud achieves an accuracy of 0.09 mm on the X-Y axes and 0.01 mm on the Z axis, with a reconstruction time of about 7 seconds. In the second half of the study, corresponding-point search and 3D reconstruction are performed with a deep-learning method; compared with the multi-frequency phase-shift method, the reconstruction time improves to about 4.5 seconds, but the Z-axis accuracy only reaches 1 mm.
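The phase-shift decoding and heterodyne unwrapping summarized in the abstract (detailed in Chapter 3 of the thesis) can be sketched as follows. This is a minimal NumPy illustration, not the thesis's implementation: it assumes a four-step phase shift and only two fringe wavelengths whose beat wavelength covers the full pattern width; the function names and parameters are illustrative.

```python
import numpy as np

def relative_phase(frames):
    """Wrapped (relative) phase from four pi/2-shifted fringe images.

    frames: stack of four images, I_k = A + B*cos(phi + k*pi/2), k = 0..3.
    Returns phi wrapped to [0, 2*pi].
    """
    i0, i1, i2, i3 = frames
    # I3 - I1 = 2B*sin(phi), I0 - I2 = 2B*cos(phi), so atan2 recovers phi.
    return np.mod(np.arctan2(i3 - i1, i0 - i2), 2 * np.pi)

def heterodyne_unwrap(phi1, phi2, lam1, lam2):
    """Absolute phase of the lam1 fringe set via the heterodyne principle.

    phi1, phi2: wrapped phases of two fringe sets with wavelengths
    lam1 < lam2 (in projector pixels). The beat (equivalent) wavelength
    lam1*lam2/(lam2 - lam1) is assumed to cover the whole pattern, so the
    beat phase is already absolute.
    """
    beat = np.mod(phi1 - phi2, 2 * np.pi)        # beat (equivalent) phase
    lam_eq = lam1 * lam2 / (lam2 - lam1)          # equivalent wavelength
    # Scale the absolute beat phase to lam1 units, then round off the
    # integer fringe order of the finer pattern.
    order = np.round((lam_eq / lam1 * beat - phi1) / (2 * np.pi))
    return phi1 + 2 * np.pi * order               # absolute phase
```

In the two-wavelength case the beat phase is already absolute, so the fringe order of the finer pattern falls out of a single rounding step; the multi-frequency variant described in the thesis cascades this unwrapping across additional wavelengths, then matches equal absolute-phase columns between the rectified left and right images and triangulates the matched coordinates into depth.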
Table of Contents ABSTRACT (IN CHINESE) I
ABSTRACT III
ACKNOWLEDGEMENTS V
TABLE OF CONTENTS VII
LIST OF TABLES IX
LIST OF FIGURES X
CHAPTER 1 INTRODUCTION 1
1.1 MOTIVATION 1
1.2 RELATED WORKS 4
1.3 ORGANIZATION OF THESIS 11
1.4 CONTRIBUTION 15
CHAPTER 2 SYSTEM SETUP AND SPECIFICATION 16
2.1 SYSTEM SETUP 16
2.2 FUNCTIONS AND EQUIPMENT SPECIFICATION 17
CHAPTER 3 3D OBJECT RECONSTRUCTION BASED ON MULTI-FREQUENCY PHASE-SHIFTING STRUCTURED LIGHT AND STEREO 21
3.1 STEREO CAMERA CALIBRATION AND RECTIFICATION 23
3.2 PROJECTED PATTERN CREATION 25
3.3 RELATIVE PHASE MAP, BEAT MAP, ABSOLUTE PHASE MAPS CREATIONS BY DECODING PROCESS 29
3.4 3D OBJECT RECONSTRUCTION 37
CHAPTER 4 SYSTEM STABILITY AND SPEEDUP 53
4.1 MULTIPLE THREAD USING OPENMP 53
4.2 BUILDING LOOK-UP-TABLE FOR RELATIVE PHASE MAP AND QUADRATIC INTERPOLATION 56
4.3 SYSTEM STABILITY 60
4.4 CAMERA SYNCHRONIZATION 62
4.5 EXPERIMENTAL RESULTS 63
CHAPTER 5 APPLICATION: 3D PANEL ALIGNMENT SYSTEM 67
5.1 Z-AXIS WARPAGE INSPECTION 68
5.2 X-Y AXIS CORRECTION 70
5.3 EXPERIMENTAL RESULTS 80
CHAPTER 6 3D RECONSTRUCTION USING GWCNET 83
6.1 FEATURE EXTRACTION 86
6.2 4D COST VOLUME CONSTRUCTION 88
6.3 3D CONVOLUTION AGGREGATION AND DISPARITY REGRESSION 94
6.4 EXPERIMENTAL RESULTS 97
CHAPTER 7 CONCLUSION AND FUTURE WORKS 102
REFERENCES 104
References [1] G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library, O'Reilly Media, 2008.
[2] A. Fusiello, E. Trucco, and A. Verri, "A Compact Algorithm for Rectification of Stereo Pairs," Machine Vision and Applications, Vol. 12, No. 1, pp. 16-22, 2000.
[3] M. Gupta, A. Agrawal, A. Veeraraghavan and S.G. Narasimhan, "Structured Light 3D Scanning in the Presence of Global Illumination," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
[4] K. Herakleous and C. Poullis, "3DUNDERWORLD-SLS: An Open-Source Structured-Light Scanning System for Rapid Geometry Acquisition," arXiv preprint arXiv:1406.6595, 2014.
[5] D. Lanman, and G. Taubin, "Build Your Own 3D Scanner: 3D Photography for Beginners," ACM SIGGRAPH 2009 Courses, 2009.
[6] D. Li, H. Zhao, and H. Jiang, “Fast Phase-based Stereo Matching Method for 3D Shape Measurement,” International Symposium on Optomechatronic Technologies (ISOT), 2010.
[7] P.F. Luo, Y.J. Chao, and M.A. Sutton, "Application of Stereo Vision to Three-dimensional Deformation Analyses in Fracture Experiments," Optical Engineering, Vol. 33, No. 3, pp. 981-991, 1994.
[8] J.S. Massa, G.S. Buller, A.C. Walker, S. Cova, M. Umasuthan, and A.M. Wallace, "Time-of-flight Optical Ranging System Based on Time-correlated Single-photon Counting," Applied Optics, Vol. 37, No. 31, pp. 7298-7304, 1998.
[9] D. Moreno, K. Son, and G. Taubin, "Embedded Phase Shifting: Robust Phase Shifting with Embedded Signals," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[10] C. Reich, R. Ritter, and J. Thesing, "White Light Heterodyne Principle for 3D-Measurement," Sensors, Sensor Systems, and Sensor Data Processing, Proc. SPIE, Vol. 3100, 1997.
[11] J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, "A State of the Art in Structured Light Patterns for Surface Profilometry," Pattern Recognition, Vol. 43, No. 8, pp. 2666-2680, 2010.
[12] J. Salvi, J. Pages, and J. Batlle, "Pattern Codification Strategies in Structured Light Systems," Pattern Recognition, Vol. 37, No. 4, pp. 827-849, 2004.
[13] S.M. Seitz, and C.R. Dyer, "View Morphing," Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, 1996.
[14] S. Zhang, "Digital Multiple Wavelength Phase Shifting Algorithm," International Society for Optics and Photonics, 2009.
[15] Z. Zhang, "A Flexible New Technique for Camera Calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 11, pp. 1330-1334, 2000.
[17] F. Li, et al., "Depth Acquisition with the Combination of Structured Light and Deep Learning Stereo Matching," Signal Processing: Image Communication, Vol. 75, pp. 111-117, 2019.
[18] V.V. Kniaz, "FringeMatchNet: Effective Stereo Matching Onboard of Mobile Structured Light 3D Scanner," Optics for Arts, Architecture, and Archaeology VII, Proc. SPIE, Vol. 11058, pp. 152-160, 2019.
[19] Q. Du, R. Liu, B. Guan, Y. Pan, and S. Sun, "Stereo-Matching Network for Structured Light," IEEE Signal Processing Letters, Vol. 26, No. 1, pp. 164-168, 2019.
[20] G. Yang, J. Manela, M. Happold, and D. Ramanan, "Hierarchical Deep Stereo Matching on High Resolution Images," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5515-5524, 2019.
[21] F. Zhang, V. Prisacariu, R. Yang, and P.H. Torr, "GA-Net: Guided Aggregation Net for End-to-End Stereo Matching," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 185-194, 2019.
[22] K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[23] A. Geiger, P. Lenz, and R. Urtasun, "Are We Ready for Autonomous Driving? The KITTI Vision Benchmark Suite," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
[24] J. Zbontar and Y. LeCun, "Stereo Matching by Training a Convolutional Neural Network to Compare Image Patches," Journal of Machine Learning Research, Vol. 17, pp. 1-32, 2016.
[25] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation," International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 234-241, 2015.
[26] J. Pang, W. Sun, J.S. Ren, C. Yang, and Q. Yan, "Cascade Residual Learning: A Two-Stage Convolutional Neural Network for Stereo Matching," IEEE International Conference on Computer Vision (ICCV), 2017.
[27] A. Newell, K. Yang, and J. Deng, "Stacked Hourglass Networks for Human Pose Estimation," European Conference on Computer Vision (ECCV), pp. 483-499, 2016.
[28] N. Mayer, E. Ilg, P. Hausser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox, "A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[29] J.-R. Chang and Y.-S. Chen, "Pyramid Stereo Matching Network," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5410-5418, 2018.
[30] A. Kendall, H. Martirosyan, S. Dasgupta, P. Henry, R. Kennedy, A. Bachrach, and A. Bry, "End-to-End Learning of Geometry and Context for Deep Stereo Regression," IEEE International Conference on Computer Vision (ICCV), pp. 66-75, 2017.
[31] Y. Wu and K. He, "Group Normalization," European Conference on Computer Vision (ECCV), pp. 3-19, 2018.
[32] C. Godard, O. Mac Aodha, and G.J. Brostow, "Unsupervised Monocular Depth Estimation with Left-Right Consistency," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[33] X. Guo, K. Yang, W. Yang, X. Wang, and H. Li, "Group-Wise Correlation Stereo Network," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
Full-Text Availability
  • On-campus browsing/printing of the electronic full text is authorized, open to the public from 2025-07-24.
  • Off-campus browsing/printing of the electronic full text is authorized, open to the public from 2025-07-24.

