System No.: U0026-0406201723260200
Title (Chinese): 用於影像室內定位之行動裝置單機商標識別技術
Title (English): Logo Recognition on Mobile End Devices for Image-Based Indoor Positioning in Shopping Malls
University: National Cheng Kung University (成功大學)
Department (Chinese): 製造資訊與系統研究所
Department (English): Institute of Manufacturing Information and Systems
Academic year: 105
Semester: 2
Year of publication: 106
Author (Chinese): 王暄翔
Author (English): Shiuan-Shiang Wang
Student ID: P96024061
Degree: Master's
Language: English
Pages: 54
Committee: Advisor - 蔡佩璇
Committee member - 謝孫源
Committee member - 蔡孟勳
Keywords (Chinese): 室內定位, 影像比對, 商標識別, 行動裝置
Keywords (English): Indoor positioning, Image Matching, Logo Recognition, Mobile Device
Discipline classification:
Abstract (Chinese): Indoor positioning techniques are widely used in location-based services inside shopping malls. Current approaches include: 1) multilateration based on the positions of wireless communication devices deployed in the field and their distances to the customer's mobile device; 2) predicting the customer's relative movement path from sensor data on the mobile device; and 3) collecting various kinds of environmental information as features and building a learning model that maps features to positions. These methods, however, respectively suffer from high hardware cost, cumulative error, and the difficulty of feature collection and model construction.
In recent years, with advances in image recognition, image-based indoor positioning techniques have been proposed. The idea is to capture visual features in the field and match them against geo-tagged photos in a database in order to predict the customer's position. Since accuracy depends on the number of photos of the field, and image recognition has been thoroughly studied for decades, feature collection is easy and the construction cost is lower than that of the methods above. Because image recognition is computationally expensive on resource-limited mobile devices, most existing image-based positioning methods adopt a client-server architecture; data transmission, however, raises communication, energy-consumption, and security concerns. This thesis therefore proposes an architecture that performs image recognition entirely on the mobile device, with image-based positioning in a shopping mall as the target scenario.
This thesis tested many combinations of algorithms and parameters to measure the performance of image recognition on a phone, and finally adopted the ORB feature detector paired with the BRIEF feature descriptor to extract features from training photos; a hierarchical clustering tree converts all feature points of a photo into a 256-dimensional bag-of-words (BoW) feature. By storing the structure of the hierarchical clustering tree and the BoW features of all training photos on the mobile device, image recognition is achieved standalone on the device.
Using a large shopping mall in Hsinchu as the real-world test environment and the logos of its stores as recognition targets, the proposed method achieves 96.3% recognition accuracy, recognizing a photo in only 0.31 seconds with 0.04 joules of energy. Moreover, the required training files occupy only 364 KB for 166 photos, and the implemented application uses only 55 MB of memory on a smartphone; this lightweight footprint makes the architecture practical for image-based indoor positioning.
Abstract (English): Indoor positioning techniques are widely applied in marketing scenarios to provide Location-Based Services. The commonly used ones are 1) radio-based techniques, which use the positions of wireless infrastructure devices in the field and their distances to the customer to perform multilateration; 2) inertial navigation, which relies on data from sensors on the mobile device to predict the customer's relative moving path; and 3) collecting environmental information as features and using learning models to capture the relationship between position and features. However, these techniques face the problems of high hardware cost, cumulative error, and difficulty in collecting the features and constructing the model.
In recent years, several camera-based positioning techniques have been proposed, which use the visual features in the field to match images in a database and then take the geo-tag of the matched image as the customer's position. Although the precision depends on the number of images, the cost is lower than that of the three techniques mentioned above. Due to the limited capability and resources of mobile devices, state-of-the-art image matching with mobile devices relies on a client-server architecture, which may lead to communication, power-consumption, and privacy problems. Hence, in this thesis we propose an image matching architecture that runs on mobile end devices without network access.
We use ORB and BRIEF as the feature detector-descriptor pair to extract features from an image; all extracted features of an image are passed through a hierarchical k-means tree and converted into a BoW feature. The structure of the tree and all training BoW features are stored on the mobile device so that matching requires no network.
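The extraction-and-quantization step described above can be sketched as follows. This is a minimal illustration rather than the thesis implementation: it assumes descriptors have already been extracted (e.g., with OpenCV's ORB detector and BRIEF extractor), stands in random float vectors for the 32-byte binary BRIEF descriptors, and uses L2 distance where a binary pipeline would use Hamming distance. A two-level tree with branching factor 16 yields 16 × 16 = 256 leaves, i.e., a 256-dimensional BoW histogram:

```python
import numpy as np

def kmeans(data, k, iters=10, seed=0):
    """Tiny k-means returning (k, dim) centroids. When a node holds fewer
    than k descriptors, centroids are sampled with replacement so the
    tree shape stays fixed at k children per node."""
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), k, replace=len(data) < k)].astype(float)
    for _ in range(iters):
        # assign each descriptor to its nearest centroid (L2 here;
        # a real binary-descriptor pipeline would use Hamming distance)
        dist = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for c in range(k):
            members = data[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return centroids

def build_vocab_tree(descriptors, branch=16):
    """Two-level hierarchical k-means tree: branch**2 leaves (16*16 = 256)."""
    root = kmeans(descriptors, branch)
    dist = np.linalg.norm(descriptors[:, None, :] - root[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    children = []
    for c in range(branch):
        members = descriptors[labels == c]
        if len(members) == 0:          # degenerate node: reuse all descriptors
            members = descriptors
        children.append(kmeans(members, branch, seed=c + 1))
    return root, children

def bow_feature(descriptors, tree):
    """Quantize one image's descriptors into a 256-dim BoW histogram by
    descending the tree: nearest root child, then nearest leaf under it."""
    root, children = tree
    branch = len(root)
    hist = np.zeros(branch * branch)
    for desc in descriptors.astype(float):
        i = np.linalg.norm(root - desc, axis=1).argmin()
        j = np.linalg.norm(children[i] - desc, axis=1).argmin()
        hist[i * branch + j] += 1
    return hist / max(hist.sum(), 1)   # L1-normalize

rng = np.random.default_rng(42)
train = rng.random((4000, 32))          # stand-in for 32-byte BRIEF descriptors
tree = build_vocab_tree(train)
bow = bow_feature(rng.random((300, 32)), tree)
print(bow.shape)                        # → (256,)
```

Serializing `tree` and the per-image `bow` vectors would correspond to the HKT_structure and Training_BoW_Features files that the proposed architecture ships to the device.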
We evaluate our work using the logos of the stores in a shopping mall in Hsinchu, Taiwan as recognition targets. In simulation, the proposed work achieves 96.3% matching precision, queries an image in 308 ms, and consumes 0.04 joules per query on an ASUS ZenFone 5 LTE mobile phone. Furthermore, the memory used and the storage required for 166 training images are only 55.2 MB and 364 KB, respectively.
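The on-device matching step, comparing a query image's BoW feature against the stored training BoW features and voting for a store label, might look like the following sketch. The function name `match_logo`, the cosine-similarity metric, and the k-nearest-neighbor vote are illustrative assumptions, not the thesis's exact matching procedure:

```python
import numpy as np

def match_logo(query_bow, train_bows, train_labels, k=5):
    """Return the majority label among the k most similar stored BoW
    features (cosine similarity) -- an illustrative stand-in for the
    on-device matching-and-voting step."""
    q = query_bow / (np.linalg.norm(query_bow) + 1e-12)
    t = train_bows / (np.linalg.norm(train_bows, axis=1, keepdims=True) + 1e-12)
    sims = t @ q                           # cosine similarity to every image
    top = np.argsort(sims)[::-1][:k]       # indices of the k best matches
    votes = [train_labels[i] for i in top]
    return max(set(votes), key=votes.count)

rng = np.random.default_rng(0)
# toy database: 3 stores x 4 training images; each store's BoW mass
# sits on a different block of the 256 bins
train_bows = np.zeros((12, 256))
for img in range(12):
    store = img // 4
    train_bows[img, store * 80:(store + 1) * 80] = rng.random(80)
labels = [f"store_{img // 4}" for img in range(12)]

query = np.zeros(256)
query[80:160] = rng.random(80)             # query resembling store_1
print(match_logo(query, train_bows, labels))  # → store_1
```

Because only normalized 256-dimensional vectors are compared, the whole database for 166 images stays small (the 364 KB reported above), which is what makes network-free matching feasible.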
Table of contents: Abstract (Chinese) i
Abstract ii
Acknowledgements iii
Table of Contents iv
List of Tables vi
List of Figures vii
Chapter 1. Introduction 1
1.1. Motivation and Background Knowledge . . . . . . . . . . . . . . . . . . . 1
1.2. Research Objective, Difficulties and Contribution . . . . . . . . . . . . . . 5
1.2.1. Research Objective . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.2. Difficulties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.3. Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3. Organization of the Thesis . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Chapter 2. Related Work 8
2.1. Image Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.1.1. Image Matching Architecture . . . . . . . . . . . . . . . . . . . . . 8
2.1.2. Feature Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1.3. Feature Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1.4. Voting for Label . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2. Client-and-Server Architecture for Image Matching with Mobile Devices . 18
2.3. CaPSuLe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Chapter 3. Logo Recognition on Mobile End Devices 22
3.1. Proposed Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.1.1. The HKT_structure file . . . . . . . . . . . . . . . . . . . . . . . . 23
3.1.2. The Training_BoW_Features file . . . . . . . . . . . . . . . . . . . 23
3.2. Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Chapter 4. Experimental Design and Results 27
4.1. Dataset and Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.2. Evaluation Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.2.1. Metrics for Image Matching . . . . . . . . . . . . . . . . . . . . . 29
4.2.2. Metrics for Running on Mobile Devices . . . . . . . . . . . . . . . 30
4.3. Target Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.4. Experimental Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.4.1. Design of Experiment A: The cropping level of training images . . . 33
4.4.2. Design of Experiment B: The best combinations of feature detector and descriptor pair . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.4.3. Design of Experiment C: Feature length of BoW/binary BoW . . . . 35
4.5. Experimental Results and Analysis . . . . . . . . . . . . . . . . . . . . . . 35
4.5.1. Experiment A: The cropping level of training images . . . . . . . . 36
4.5.2. Experiment B: The best combinations of feature detector and descriptor pair . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.5.3. Experiment C: Feature length of BoW/binary BoW . . . . . . . . . 44
4.6. Comparison between CaPSuLe and Our Work . . . . . . . . . . . . . . . . 48
Chapter 5. Conclusion 51
References 52
References: [1] OpenCV Adventure. Star Feature Detector. http://experienceopencv.blogspot.co.nz/2011/01/star-feature-detector.html, March 2016.
[2] Motilal Agrawal, Kurt Konolige, and Morten Rufus Blas. CenSurE: Center surround extremas for realtime feature detection and matching. In European Conference on Computer Vision, pages 102–115. Springer, 2008.
[3] Amazon. Amazon Go. https://www.amazon.com/b?node=16008589011.
[4] Paramvir Bahl and Venkata N. Padmanabhan. RADAR: An in-building RF-based user location and tracking system. In INFOCOM 2000. Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings, volume 2, pages 775–784. IEEE, 2000.
[5] Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool. Speeded-up robust features (SURF). Computer Vision and Image Understanding, 110(3):346–359, 2008.
[6] Michael Calonder, Vincent Lepetit, Christoph Strecha, and Pascal Fua. BRIEF: Binary robust independent elementary features. In European Conference on Computer Vision, pages 778–792. Springer, 2010.
[7] H. Stewart Cobb. GPS Pseudolites: Theory, Design, and Applications. PhD thesis, Stanford University, 1997.
[8] Gabriella Csurka, Christopher Dance, Lixin Fan, Jutta Willamowski, and Cédric Bray. Visual categorization with bags of keypoints. In Workshop on Statistical Learning in Computer Vision, ECCV, volume 1, pages 1–2. Prague, 2004.
[9] OpenCV dev team. Feature Detection and Description. http://docs.opencv.org/2.4.13/modules/features2d/doc/feature_detection_and_description.html, August 2011.
[10] OpenCV dev team. OpenCV Documentation. http://docs.opencv.org/2.4.13/, August 2011.
[11] Frédéric Evennou and François Marx. Advanced integration of WiFi and inertial navigation systems for indoor mobile positioning. EURASIP Journal on Applied Signal Processing, 2006:164–164, 2006.
[12] Haosheng Huang and Georg Gartner. A survey of mobile indoor navigation systems. In Cartography in Central and Eastern Europe, pages 305–319. Springer, 2009.
[13] Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing, pages 604–613. ACM, 1998.
[14] Jianqiu Ji, Jianmin Li, Shuicheng Yan, Bo Zhang, and Qi Tian. Super-bit locality-sensitive hashing. In Advances in Neural Information Processing Systems, pages 108–116, 2012.
[15] Axel Küpper. Location-Based Services: Fundamentals and Operation. John Wiley & Sons, 2005.
[16] Stefan Leutenegger, Margarita Chli, and Roland Y. Siegwart. BRISK: Binary robust invariant scalable keypoints. In 2011 IEEE International Conference on Computer Vision (ICCV), pages 2548–2555. IEEE, 2011.
[17] Jason Zhi Liang, Nicholas Corso, Eric Turner, and Avideh Zakhor. Image based localization in indoor environments. In 2013 Fourth International Conference on Computing for Geospatial Research and Application (COM.Geo), pages 70–75. IEEE, 2013.
[18] Jo Agila Bitsch Link, Paul Smith, Nicolai Viol, and Klaus Wehrle. FootPath: Accurate map-based indoor navigation using smartphones. In IPIN, pages 1–8. Citeseer, 2011.
[19] David G. Lowe. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, volume 2, pages 1150–1157. IEEE, 1999.
[20] Jiri Matas, Ondrej Chum, Martin Urban, and Tomás Pajdla. Robust wide-baseline stereo from maximally stable extremal regions. Image and Vision Computing, 22(10):761–767, 2004.
[21] Yongshik Moon, Chen Luo, Anshumali Shrivastava, and Krishna Palem. CaPSuLe: A camera-based positioning system using learning. In Proceedings of the International IEEE System-on-Chip Conference, 2016.
[22] Qualcomm Developer Network. Trepn Power Profiler. https://developer.qualcomm.com/software/trepn-power-profiler.
[23] Qualcomm. Trepn Power Profiler – Google Play Android Applications. https://play.google.com/store/apps/details?id=com.quicinc.trepn, March 2016.
[24] Nishkam Ravi, Pravin Shankar, Andrew Frankel, Ahmed Elgammal, and Liviu Iftode. Indoor localization using camera phones. In Seventh IEEE Workshop on Mobile Computing Systems & Applications (WMCSA'06 Supplement), pages 49–49. IEEE, 2006.
[25] Edward Rosten and Tom Drummond. Fusing points and lines for high performance tracking. In Tenth IEEE International Conference on Computer Vision (ICCV 2005), volume 2, pages 1508–1515. IEEE, 2005.
[26] Edward Rosten and Tom Drummond. Machine learning for high-speed corner detection. In European Conference on Computer Vision, pages 430–443. Springer, 2006.
[27] Edward Rosten, Reid Porter, and Tom Drummond. Faster and better: A machine learning approach to corner detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(1):105–119, 2010.
[28] Ethan Rublee, Vincent Rabaud, Kurt Konolige, and Gary Bradski. ORB: An efficient alternative to SIFT or SURF. In 2011 IEEE International Conference on Computer Vision (ICCV), pages 2564–2571. IEEE, 2011.
[29] William Storms, Jeremiah Shockley, and John Raquet. Magnetic field navigation in an indoor environment. In 2010 Ubiquitous Positioning Indoor Navigation and Location Based Service (UPINLBS), pages 1–10. IEEE, 2010.
[30] Shiuan-Shiang Wang, Pei-Hsuan Tsai, and Wei-Shuo Li. Logo recognition for image-based indoor positioning systems on mobile devices. In Proceedings of the ASE BigData & SocialInformatics 2015, page 59. ACM, 2015.
[31] Martin Werner. Basic positioning techniques. In Indoor Location-Based Services, pages 73–99. Springer, 2014.
[32] Oliver Woodman and Robert Harle. Pedestrian localisation for indoor environments. In Proceedings of the 10th International Conference on Ubiquitous Computing, pages 114–123. ACM, 2008.
[33] Wendong Xiao, Wei Ni, and Yue Khing Toh. Integrated Wi-Fi fingerprinting and inertial sensing for indoor positioning. In IPIN, pages 1–6, 2011.
Full-text use authorization:
  • On-campus browsing/printing of the electronic full text is authorized, publicly available from 2020-09-01.
  • Off-campus browsing/printing of the electronic full text is authorized, publicly available from 2020-09-01.

