System ID U0026-2708202001553200
Title (Chinese) 使用三維視覺導引控制機器手臂於自動倉儲系統
Title (English) 3D Visual-Guided Robot Arm Control for Warehouse Automation System
University National Cheng Kung University
Department (Chinese) 資訊工程學系
Department (English) Institute of Computer Science and Information Engineering
Academic Year 108
Semester 2
Publication Year 109 (2020)
Author (Chinese) 張淳勉
Author (English) Soonmyun Jang
E-mail jsm890803.3@gmail.com
Student ID P76077133
Degree Master
Language English
Pages 75
Committee Advisor-連震杰
Co-advisor-郭淑美
Committee member-劉彥辰
Committee member-吳進義
Committee member-方志偉
Keywords (Chinese) none
Keywords (English) Robot Arm, Hand-eye Calibration, Storing and Retrieving, Marker Detection, Contour Segmentation, Deep Learning
Subject Classification
Abstract (Chinese) none
Abstract (English) Warehouse automation greatly benefits a wide variety of industries. However, prevalent automation methods target industrial settings in which the system is difficult to initialize and its status is hard to recognize. In this work, a 3D visual-guided robot arm system with marker detection and object detection is proposed.
This study has two main parts: system initialization and validation using marker detection, and storage and retrieval using magazine detection. The system combines two cameras forming a stereo pair, a robot arm, and computer vision algorithms, so that objects can be detected, classified, and picked by the robot arm. In addition, it uses magazines that store items such as nuts and bolts, and a frame whose grids hold the magazines. First, the system is initialized by a marker detection method that detects the marker positions on the frame and saves the frame and grid positions that the robot arm can approach to store or retrieve magazines. Then, using a deep-learning contour detection method [12] and the Hough line transform [17], the correct magazine center position within a grid can be estimated. If an impact such as an earthquake occurs, the warehouse system must verify whether it can still run correctly; this study introduces solutions that address this problem.
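As a rough illustration of how the Hough line transform [17] locates the straight magazine edges used for center alignment, here is a minimal pure-Python sketch; the thesis does not publish code, so the function name and the synthetic edge map below are illustrative assumptions, not the actual implementation:

```python
import math

def hough_lines(points, n_theta=180):
    """Vote in (theta, rho) space for each edge point.

    A line is parameterized as rho = x*cos(theta) + y*sin(theta);
    every edge pixel votes for all (theta, rho) bins of lines that
    could pass through it, and peaks in the accumulator correspond
    to dominant straight edges in the image.
    """
    acc = {}  # (theta_index, rho) -> vote count
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return acc

# Synthetic edge map: a magazine edge along the line x = 5.
edge_points = [(5, y) for y in range(20)]
acc = hough_lines(edge_points)
# The bin at theta = 0, rho = 5 collects a vote from every edge pixel.
print(acc[(0, 5)])  # 20
```

In practice, the dominant accumulator peaks would give the magazine's four edge lines, and intersecting them yields the 2D center to align with the grid.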
Table of Contents
Abstract IV
Acknowledgments V
Content VII
Content of Figure IX
Content of Table XII
Chapter 1. Introduction 1
1.1 Motivation 1
1.2 Related Works 8
1.3 Contribution 10
Chapter 2. System setup, Specification and Function 12
2.1 System Setup 12
2.2 Hardware Specification 21
2.3 ArUco Marker Library 27
Chapter 3. 3D Transformation Estimation from Robot Arm Base B to Grid Centers via ArUco Process and Stereo Camera 32
3.1 Eye-In-Hand Robot Arm Calibration 33
3.2 3D Transformation Estimation from Robot Arm Base to Grid Centers via ArUco Process and Stereo Camera 36
Chapter 4. 2D Center Position Alignment between Magazine and Grid using DFF-Net for Storage and Retrieval 44
4.1 DFF-Net: Training and Inference Frameworks 45
4.2 DFF-Net: Feature Extraction and Classification 48
4.3 2D Center Alignment between Magazine and Grid 53
Chapter 5. Experimental Results 59
5.1 Experimental Result of Marker Detection Accuracy 59
5.2 Experimental Result of Marker Detection Repeatability 63
5.3 Experimental Result of Contour Detection using DFF-Net 65
Chapter 6. Conclusion and Future Work 72
Reference 74
References
[1] Qiming Huang, “Automatically Visual-Based Robot Arm Calibration and Pick and Place for Motion Target”, Master's Thesis, National Cheng Kung University, 2017.
[2] Hyungwon Sung, Sukhan Lee, Daesik Kim, “A Robot-Camera Hand/Eye Self-Calibration System Using a Planar Target”, IEEE International Symposium on Robotics, 2013.
[3] R.Y. Tsai, R.K. Lenz, “A New Technique for Fully Autonomous and Efficient 3D Robotics Hand-Eye Calibration”, IEEE Transactions on Robotics and Automation, 1989.
[4] F.C. Park, B.J. Martin, “Robot Sensor Calibration: Solving AX = XB on the Euclidean Group”, IEEE Transactions on Robotics and Automation, 1994.
[5] Radu Horaud, Fadi Dornaika, “Hand-eye Calibration”, The International Journal of Robotics Research, 1995.
[6] N. Andreff, R. Horaud, B. Espiau, “On-line Hand-Eye Calibration”, IEEE Second International Conference on 3-D Digital Imaging and Modeling, 1999.
[7] K. Daniilidis, E. Bayro-Corrochano, “Hand-Eye Calibration Using Dual Quaternions”, IEEE Proceedings of 13th International Conference on Pattern Recognition, 1998.
[8] Min Y. Kim, Jae H. Kim, Hyungsuck Cho, “Hand-eye calibration of a robot arm with a 3D visual sensor”, Proceedings of SPIE - The International Society for Optical Engineering, 2001.
[9] Farah Hanani Mohammad Khasasi, Zulkhairi Mohd Yusof, Mohd Aswadi Alias, Ismail Adam, “Development of Automated Storage and Retrieval System (ASRS) for Flexible Manufacturing System (FMS)”, Journal of Engineering Technology, 2016.
[10] Xue Yang, Hao Sun, Xian Sun, Menglong Yan, Zhi Guo, Kun Fu, “Position Detection and Direction Prediction for Arbitrary-Oriented Ships via Multiscale Rotation Region Convolutional Neural Network”, IEEE Conference on Computer Vision and Pattern Recognition, 2018.
[11] Saining Xie, Zhuowen Tu, “Holistically Nested Edge Detection”, IEEE International Conference on Computer Vision, 2015.
[12] Yuan Hu, Yunpeng Chen, Xiang Li, Jiashi Feng, “Dynamic Feature Fusion for Semantic Edge Detection”, International Joint Conferences on Artificial Intelligence, 2019.
[13] Zhiding Yu, Chen Feng, Ming-Yu Liu, Srikumar Ramalingam, “CASENet: Deep Category-Aware Semantic Edge Detection”, IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[14] David Acuna, Amlan Kar, Sanja Fidler, “Devil is in the Edges: Learning Semantic Boundaries from Noisy Annotations”, IEEE Conference on Computer Vision and Pattern Recognition, 2019.
[15] Xiao-Shan Gao, Xiao-Rong Hou, Jianliang Tang, Hang-Fei Cheng, “Complete Solution Classification for the Perspective-Three-Point Problem”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003.
[16] Joel A. Hesch, Stergios I. Roumeliotis, “A Direct Least-Squares (DLS) Method for PnP”, IEEE International Conference on Computer Vision, 2011.
[17] Richard O. Duda, Peter E. Hart, “Use of the Hough Transformation to Detect Lines and Curves in Pictures”, Artificial Intelligence Center, 1971.
[18] Adrian Kaehler, Gary R. Bradski, “Learning OpenCV 3: Computer Vision in C++ with the OpenCV Library”, 2016.
[19] Rodrigo S. Xavier, Bruno M. F. da Silva, Luiz M. G. Goncalves, “Accuracy Analysis of Augmented Reality Markers for Visual Mapping and Localization”, IEEE Workshop of Computer Vision, 2017.
[20] Jia-Ren Chang, Yong-Sheng Chen, “Pyramid Stereo Matching Network (PSMNet)”, IEEE Conference on Computer Vision and Pattern Recognition, 2018.
[21] Xiaoyang Guo, Kai Yang, Wukui Yang, Xiaogang Wang, Hongsheng Li, “Group-wise Correlation Stereo Network”, IEEE Conference on Computer Vision and Pattern Recognition, 2019.
[22] Z. Zhang, “A flexible new technique for camera calibration”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000.
Full-Text Availability
  • Authorized for on-campus browsing/printing of the electronic full text, available from 2020-09-01.
  • Authorized for off-campus browsing/printing of the electronic full text, available from 2020-09-01.

