The electronic full text has not yet been authorized for public access; for the print copy, please check the library catalog.
(Note: if the thesis cannot be found, or its holding status shows "closed stacks, not open to the public", it is not in the stacks and cannot be accessed.)
System ID: U0026-1808201813232800
Title (Chinese): 結合深度學習技術之人機互動研究:以3D內容推薦與手勢互動為例
Title (English): Deep Learning in HCI: Case Studies of 3D Content Recommendation and Gesture Interaction
University: National Cheng Kung University (成功大學)
Department (Chinese): 資訊工程學系
Department (English): Institute of Computer Science and Information Engineering
Academic year: 106 (2017–2018)
Semester: 2
Year of publication: 107 (2018)
Author (Chinese): 潘則佑
Author (English): Tse-Yu Pan
E-mail: pzy385328@gmail.com
Student ID: P78031167
Degree: Doctoral
Language: English
Number of pages: 91
Committee: Advisor - 胡敏君
Committee member - 吳宗憲
Committee member - 孫永年
Committee member - 楊家輝
Committee member - 陳培殷
Committee member - 許秋婷
Committee member - 朱宏國
Committee member - 陳華總
Keywords (Chinese): 人機互動、深度學習、手勢互動、三維內容推薦
Keywords (English): Human Computer Interaction, Deep Learning, Gesture Interaction, 3D Content Recommendation
Subject classification:
Abstract (Chinese): With the rapid development of computer science, computers have gradually become part of every aspect of daily life, and the importance of human-computer interaction (HCI) has received growing attention. Most HCI researchers focus on how humans can use gestures as input to communicate with computers, and on what kinds of visual content a computer can provide to improve the user experience. In this dissertation, we follow the Double Diamond design thinking process to explore two topics, (1) gesture interaction and (2) 3D content recommendation, define the real-world problems encountered, and develop suitable HCI algorithms and systems.

In terms of gesture interaction, human gestures fall into two categories: large-motion gestures and subtle-motion gestures. Most previous HCI systems used only large-motion gestures or only subtle-motion gestures, and because recognition accuracy was limited, they typically supported only a few kinds of simple gestures. To enable more intuitive and diverse gesture interaction, we investigate how to build deep-learning recognition models for hybrid multi-channel sensors in applications that involve both large-motion and subtle-motion gestures. In the first part of this dissertation, we design two gesture recognition models based on Deep Belief Networks, Convolutional Neural Networks, and Recurrent Neural Networks, and use a sports referee training system as a case study to validate the reliability of the proposed methods in applications that mix motions of different scales.

Visual perception is an important basis of human cognition and strongly affects the user experience of HCI systems. In augmented and virtual reality applications in particular, well-coordinated 3D visual content gives the user an entirely different experience. The second part of this dissertation proposes a Triplet Convolutional Neural Network that estimates the style compatibility of 3D content and recommends 3D models of matching style to digital content editors so as to create a better visual experience. We use 3D furniture recommendation as a case study to validate the proposed style-based recommendation algorithm.
Abstract (English): Computers have become indispensable in our daily life, and Human-Computer Interaction (HCI), which studies how people interact with computers, has attracted considerable attention from researchers. Much current research focuses on two HCI topics: how humans can use hand gestures as input to interact intuitively with computers, and what kinds of visual content provide a better user experience. In this dissertation, we investigate these two topics with the Double Diamond design thinking method. More precisely, we discover real-world problems, define the target applications, develop the corresponding methodologies, and deliver the systems to the target users. Two reliable gesture interaction models are introduced and validated with a sports referee signal training application. Moreover, a 3D content recommendation method is proposed to provide a harmonious visual experience for the user.

Human gestures can generally be divided into two categories: large motion gestures and subtle motion gestures. In the past, most HCI systems utilized only large motion gestures or only subtle motion gestures. Moreover, because it is difficult to develop a robust gesture recognition method, most systems provided only a few kinds of simple gestures for interaction. To provide more intuitive and diverse gestures for interaction, we investigate how to develop a robust recognition model for wearable hybrid, multi-channel sensors using deep learning. The first part of this dissertation introduces two gesture recognition models, designed based on Deep Belief Networks (DBN), Convolutional Neural Networks (CNN), and Recurrent Neural Networks (RNN). We apply the two proposed models to recognize sports referee signals, which involve both large motion and subtle motion gestures, and design a real-time system for sports referee training.
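
Chapter 3 details the actual architectures; purely as an illustration of the kind of model described above, the following is a minimal PyTorch sketch of a CNN feeding an LSTM over fixed-length windows of multi-channel wearable-sensor data. It is not the dissertation's exact network, and the channel count, window length, and layer sizes are assumptions made for the example.

```python
# Minimal sketch (not the dissertation's exact architecture) of a
# CNN + LSTM classifier for windows of multi-channel wearable-sensor
# data (e.g., IMU + EMG). Channel count, window length, and layer
# sizes are illustrative assumptions, not values from the thesis.
import torch
import torch.nn as nn

class SensorGestureNet(nn.Module):
    def __init__(self, n_channels=11, n_classes=10, hidden=64):
        super().__init__()
        # 1-D convolutions extract local temporal features per channel window.
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # An LSTM models the longer-range temporal dynamics of the gesture.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, channels, time)
        f = self.conv(x)             # (batch, 64, time/4)
        f = f.transpose(1, 2)        # (batch, time/4, 64)
        _, (h, _) = self.lstm(f)     # h: (1, batch, hidden)
        return self.fc(h[-1])        # class logits

# Example: a batch of 8 windows, 11 sensor channels, 128 samples each.
logits = SensorGestureNet()(torch.randn(8, 11, 128))
print(logits.shape)                  # torch.Size([8, 10])
```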

Visual perception provides vital clues for human cognition and strongly affects the user experience in HCI systems. In augmented/virtual reality applications, 3D content with harmonious visual quality brings a remarkable user experience. In the second part of this dissertation, we propose a 3D content recommendation method based on a Triplet CNN, which evaluates the style compatibility of each pair of 3D models and gives proper suggestions to the content editor. We take a 3D furniture recommendation system as an example to evaluate the proposed style-based Triplet CNN model.
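
Likewise, the following PyTorch sketch shows the general idea of triplet-based style-compatibility learning, not the exact cross-class Triplet CNN of Chapter 4: an embedding network is trained with a triplet margin loss so that rendered views of same-style 3D models land closer together than views of different-style ones. The backbone, input resolution, and margin value are illustrative assumptions.

```python
# Minimal sketch of triplet-based style-embedding learning, assuming
# rendered views of 3D models as input images. The backbone and the
# margin value are illustrative assumptions, not the thesis's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleEmbedNet(nn.Module):
    """Maps a rendered view of a 3D model to a style-embedding vector."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, dim)

    def forward(self, x):
        z = self.head(self.features(x).flatten(1))
        return F.normalize(z, dim=1)  # unit-length embedding

net = StyleEmbedNet()
loss_fn = nn.TripletMarginLoss(margin=0.2)

# anchor/positive share a style; negative has a different style.
anchor, positive, negative = (torch.randn(4, 3, 64, 64) for _ in range(3))
loss = loss_fn(net(anchor), net(positive), net(negative))
loss.backward()

# At recommendation time, candidate furniture models would be ranked by
# embedding distance to the items already placed in the scene.
```
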
Table of contents: Abstract (Chinese) i
Abstract (English) ii
Table of Contents vi
List of Tables vii
List of Figures ix
Chapter 1. Introduction 1
1.1 Background of Gesture Interaction . . . 1
1.2 Background of 3D Content Recommendation . . . 2
1.3 Contributions of the Dissertation . . . 3
1.4 Organization of the Dissertation . . . 4
Chapter 2. Related Work 5
2.1 Gesture Interaction for Sports Referee Training System . . . 5
2.1.1 Vision-based Gesture Interaction . . . 5
2.1.2 Wearable-based Gesture Interaction . . . 6
2.1.3 Computer-aided Training System . . . 7
2.1.4 Deep Model for Wearable Sensor Technology . . . 7
2.1.5 Sports Referee Gesture Recognition . . . 9
2.2 3D Content Recommendation for Furniture Recommendation System . . . 10
2.2.1 Recommendation System . . . 10
2.2.2 Computer Aided Virtual Scene Generation . . . 10
2.2.3 Style Analysis and Metric Learning for 3D models . . . 11
Chapter 3. Gesture Interaction 13
3.1 Introduction . . . 13
3.2 Wearable Sensors with an IMU and Multi-channel EMG . . . 14
3.2.1 Signal Acquisition & Preprocessing . . . 19
3.2.2 Feature Extraction . . . 20
3.2.3 The Hierarchical Classification Scheme . . . 24
3.2.4 Experimental Results . . . 25
3.2.5 Summary . . . 36
3.3 Wearable Sensors with Multi-channel IMU . . . 36
3.3.1 Data Collection . . . 39
3.3.2 The Proposed ORS Recognition Model . . . 40
3.3.3 Experimental Results . . . 47
3.3.4 Summary . . . 52
Chapter 4. 3D Content Recommendation 53
4.1 Introduction . . . 53
4.2 Crowdsourcing Responses Collection . . . 57
4.3 Style Compatibility Learning Based on Cross-Class Triplet CNN . . . 60
4.3.1 View Selection . . . 63
4.3.2 Experimental Results . . . 65
4.3.3 Summary . . . 73
Chapter 5. Conclusion 75
5.1 Gesture Interaction . . . 75
5.2 3D Content Recommendation . . . 77
Chapter 6. Discussion and Future Work 79
References 81
References: [1] HTC Vive and Myo armband setup could train people to use prosthetic limbs. https://www.wareable.com/saves-the-day/htc-vive-myo-armband-prostheticlimbs-998.
[2] Leap Motion.
[3] Microsoft Kinect.
[4] P. K. Artemiadis and K. J. Kyriakopoulos. A switching regime model for the EMG-based control of a robot arm. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 41(1):53–63, 2011.
[5] M. A. Bautista, A. Hernández-Vela, S. Escalera, L. Igual, O. Pujol, J. Moya, V. Violant, and M. T. Anguera. A gesture recognition system for detecting behavioral patterns of ADHD. IEEE Transactions on Cybernetics, 46(1):136–147, 2016.
[6] S. Benatti, F. Casamassima, B. Milosevic, E. Farella, P. Schönle, S. Fateh, T. Burger, Q. Huang, and L. Benini. A versatile embedded platform for EMG acquisition and gesture recognition. IEEE Transactions on Biomedical Circuits and Systems, 9(5):620–630, 2015.
[7] D. Blana, T. Kyriacou, J. M. Lambrecht, and E. K. Chadwick. Feasibility of using combined EMG and kinematic signals for prosthesis control: A simulation study using a virtual reality environment. Journal of Electromyography and Kinesiology, 29:21–27, 2016.
[8] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In CVPR, volume 1, page 7, 2017.
[9] J. C. Chan, H. Leung, J. K. Tang, and T. Komura. A virtual reality dance training system using motion capture technology. IEEE Transactions on Learning Technologies, 4(2):187–195, 2011.
[10] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, et al. ShapeNet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015.
[11] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[12] Y.-J. Chang, H.-Y. Lo, M.-S. Huang, and M.-C. Hu. Representative photo selection for restaurants in food blogs. In Multimedia & Expo Workshops (ICMEW), 2015 IEEE International Conference on, pages 1–6. IEEE, 2015.
[13] K. Chen, K. Xu, Y. Yu, T.-Y. Wang, and S.-M. Hu. Magic decorator: automatic material suggestion for indoor digital scenes. ACM Transactions on Graphics (TOG), 34(6):232, 2015.
[14] X. Chen, J. Li, Q. Li, B. Gao, D. Zou, and Q. Zhao. Image2scene: Transforming style of 3d room. In Proceedings of the 23rd ACM international conference on Multimedia, pages 321–330. ACM, 2015.
[15] H. Cheng, L. Yang, and Z. Liu. Survey on 3d hand gesture recognition. IEEE Transactions on Circuits and Systems for Video Technology, 26(9):1659–1673, 2016.
[16] Z. Cheng and J. Shen. Just-for-me: An adaptive personalization system for location-aware social music recommendation. In Proceedings of international conference on multimedia retrieval, page 185. ACM, 2014.
[17] C. H. Chuan, E. Regina, and C. Guardino. American Sign Language recognition using Leap Motion sensor. In Proceedings of the 13th International Conference on Machine Learning and Applications, pages 541–544, Dec 2014.
[18] B. D. Council. Eleven lessons: A study of the design process.
[19] K. Dev, K. Kim, N. Villar, and M. Lau. Improving style similarity metrics of 3d shapes. In Proceedings of the 42nd Graphics Interface Conference, pages 175–182. Canadian Information Processing Society, 2016.
[20] S. Duffner, S. Berlemont, G. Lefebvre, and C. Garcia. 3d gesture classification with convolutional neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pages 5432–5436. IEEE, 2014.
[21] R. Edwards and J. Holland. What is qualitative interviewing? A&C Black, 2013.
[22] N. El Aboudi and L. Benhlima. Review on wrapper feature selection approaches. In Proceedings of IEEE International Conference on Engineering & MIS, pages 1–5. IEEE, 2016.
[23] C. et al. Gesture recognition-based wireless intelligent judgment system, 2008.
[24] M. Fisher, D. Ritchie, M. Savva, T. Funkhouser, and P. Hanrahan. Example-based synthesis of 3d object arrangements. ACM Transactions on Graphics (TOG), 31(6):135, 2012.
[25] K. Fukano, Y. Mochizuki, S. Iizuka, E. Simo-Serra, A. Sugimoto, and H. Ishikawa. Room reconstruction from a single spherical image by higher-order energy minimization. In Pattern Recognition (ICPR), 2016 23rd International Conference on, pages 1768–1773. IEEE, 2016.
[26] E. Garces, A. Agarwala, D. Gutierrez, and A. Hertzmann. A similarity measure for illustration style. ACM Transactions on Graphics (TOG), 33(4):93, 2014.
[27] E. Garces, A. Agarwala, A. Hertzmann, and D. Gutierrez. Style-based exploration of illustration datasets. Multimedia Tools and Applications, pages 1–20, 2016.
[28] K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink, and J. Schmidhuber. LSTM: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems, 28(10):2222–2232, 2017.
[29] H. Guo, J. Wang, Y. Gao, J. Li, and H. Lu. Multi-view 3d object retrieval with deep embedding network. IEEE Transactions on Image Processing, 25(12):5526–5537, 2016.
[30] R. Guo, C. Zou, and D. Hoiem. Predicting complete 3d models of indoor scenes. arXiv preprint arXiv:1504.02437, 2015.
[31] S. Ha and S. Choi. Convolutional neural networks for human activity recognition using multiple accelerometer and gyroscope sensors. In International Joint Conference on Neural Networks, pages 381–388. IEEE, 2016.
[32] S. Ha, J.-M. Yun, and S. Choi. Multi-modal convolutional neural networks for activity recognition. In Systems, Man, and Cybernetics (SMC), 2015 IEEE International Conference on, pages 3017–3022. IEEE, 2015.
[33] J. Han, L. Shao, D. Xu, and J. Shotton. Enhanced computer vision with Microsoft Kinect sensor: A review. IEEE Transactions on Cybernetics, 43(5):1318–1334, 2013.
[34] G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural computation, 18(7):1527–1554, 2006.
[35] M.-C. Hu, C.-W. Chen, W.-H. Cheng, C.-H. Chang, J.-H. Lai, and J.-L. Wu. Real-time human movement retrieval and assessment with Kinect sensor. IEEE Transactions on Cybernetics, 45(4):742–753, 2015.
[36] M.-C. Hu, T.-Y. Pan, L.-Y. Lo, and H.-Y. Lo. Framework and method for creating virtual model of three-dimensional space. In Google Patent US9547943B2, 2017.
[37] Q.-X. Huang, H. Su, and L. Guibas. Fine-grained semi-supervised labeling of large shape collections. ACM Transactions on Graphics (TOG), 32(6):190, 2013.
[38] H. Izadinia, Q. Shan, and S. M. Seitz. IM2CAD. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2422–2431. IEEE, 2017.
[39] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, pages 675–678. ACM, 2014.
[40] C.-F. Juang and K.-C. Ku. A recurrent fuzzy network for fuzzy temporal sequence processing and gesture recognition. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 35(4):646–658, 2005.
[41] M. A. Keyvanrad and M. M. Homayounpour. A brief survey on deep belief networks and introducing a new object oriented MATLAB toolbox (DeeBNet v2.1). arXiv preprint arXiv:1408.3264, 2014.
[42] K. Kiguchi and Y. Hayashi. An EMG-based control for an upper-limb power-assist exoskeleton robot. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 42(4):1064–1071, 2012.
[43] K. Kiguchi, S. Kariya, K. Watanabe, K. Izumi, and T. Fukuda. An exoskeletal robot for human elbow motion support-sensor fusion, adaptation, and control. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 31(3):353–361, 2001.
[44] J.-S. Kim, W. Jang, and Z. Bien. A dynamic gesture recognition system for the Korean Sign Language (KSL). IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 26(2):354–359, 1996.
[45] M. Kirbis and I. Kramberger. Mobile device for electronic eye gesture recognition. IEEE Transactions on Consumer Electronics, 55(4), 2009.
[46] Y. Li. Hand gesture recognition using Kinect. In Proceedings of IEEE International Conference on Computer Science and Automation Engineering, pages 196–199, June 2012.
[47] Y. Li, X. Chen, X. Zhang, K. Wang, and Z. J. Wang. A sign-component-based framework for Chinese Sign Language recognition using accelerometer and sEMG data. IEEE Transactions on Biomedical Engineering, 59(10):2695–2704, 2012.
[48] Y. Li, D. Shi, B. Ding, and D. Liu. Unsupervised feature learning for human activity recognition using smartphone sensors. In Mining Intelligence and Knowledge Exploration, pages 99–107. Springer, 2014.
[49] I. Lim, A. Gehre, and L. Kobbelt. Identifying style of 3d shapes using deep metric learning. Computer Graphics Forum, 35(5):207–215, 2016.
[50] T. Liu, A. Hertzmann, W. Li, and T. Funkhouser. Style compatibility for 3d furniture models. ACM Transactions on Graphics (TOG), 34(4):85, 2015.
[51] C. Lo, Q. Cao, X. Zhu, and Z. Zhang. Gesture recognition system based on acceleration data for RoboCup referees. In Natural Computation, 2009. ICNC'09. Fifth International Conference on, volume 2, pages 149–153. IEEE, 2009.
[52] G. L. López, A. P. P. Negrón, A. D. A. Jiménez, J. R. Rodríguez, and R. I. Paredes. Comparative analysis of shape descriptors for 3d objects. Multimedia Tools and Applications, pages 1–48.
[53] Z. Lu, X. Chen, Q. Li, X. Zhang, and P. Zhou. A hand gesture recognition framework and wearable gesture-based interaction prototype for mobile devices. IEEE Transactions on Human-Machine Systems, 44(2):293–299, 2014.
[54] Z. Lun, E. Kalogerakis, and A. Sheffer. Elements of style: Learning perceptual shape style similarity. ACM Transactions on Graphics (TOG), 34(4):84, 2015.
[55] Y. Luo, T. Liu, D. Tao, and C. Xu. Decomposition-based transfer distance metric learning for image classification. IEEE Transactions on Image Processing, 23(9):3789–3801, 2014.
[56] G. Marin, F. Dominio, and P. Zanuttigh. Hand gesture recognition with Leap Motion and Kinect devices. In Proceedings of IEEE International Conference on Image Processing, pages 1565–1569, Oct 2014.
[57] M. Masoumi, C. Li, and A. B. Hamza. A spectral graph wavelet approach for nonrigid 3d shape retrieval. Pattern Recognition Letters, 83:339–348, 2016.
[58] T. Mei, B. Yang, X.-S. Hua, and S. Li. Contextual video recommendation by multimodal relevance and user feedback. ACM Transactions on Information Systems (TOIS), 29(2):10, 2011.
[59] P. Merrell, E. Schkufza, Z. Li, M. Agrawala, and V. Koltun. Interactive furniture layout using interior design guidelines. ACM Transactions on Graphics (TOG), 30(4):87, 2011.
[60] Y. Miao, L. Wang, C. Xie, and B. Zhang. Gesture recognition based on deep belief networks. In Chinese Conference on Biometric Recognition, pages 439–446. Springer, 2017.
[61] P. Muneesawang, N. M. Khan, M. Kyan, R. B. Elder, N. Dong, G. Sun, H. Li, L. Zhong, and L. Guan. A machine intelligence approach to virtual ballet training. IEEE MultiMedia, 22(4):80–92, 2015.
[62] P. O’Donovan, J. Lībeks, A. Agarwala, and A. Hertzmann. Exploratory font selection using crowdsourced attributes. ACM Transactions on Graphics (TOG), 33(4):92, 2014.
[63] F. J. Ordóñez and D. Roggen. Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition. Sensors, 16(1):115, 2016.
[64] T.-Y. Pan, L.-Y. Lo, C.-W. Yeh, J.-W. Li, H.-T. Liu, and M.-C. Hu. Real-time sign language recognition in complex background scene based on a hierarchical clustering classification method. In Proceedings of the 2nd International Conference on Multimedia Big Data, pages 64–67. IEEE, 2016.
[65] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pages 3546–3554, 2015.
[66] S. S. Rautaray and A. Agrawal. Vision based hand gesture recognition for human computer interaction: a survey. Artificial Intelligence Review, 43(1):1–54, 2015.
[67] B. Saleh, M. Dontcheva, A. Hertzmann, and Z. Liu. Learning style similarity for searching infographics. In Proceedings of the 41st Graphics Interface Conference, pages 59–64. Canadian Information Processing Society, 2015.
[68] D. Selmanaj, M. Corno, and S. M. Savaresi. Hazard detection for motorcycles via accelerometers: A self-organizing map approach. IEEE Transactions on Cybernetics, PP(99):1–12, 2016.
[69] S. Shin and W. Sung. Dynamic hand gesture recognition for wearable devices with low complexity recurrent neural networks. In Circuits and Systems (ISCAS), 2016 IEEE International Symposium on, pages 2274–2277. IEEE, 2016.
[70] M. Soleymani, S. Asghari-Esfeden, Y. Fu, and M. Pantic. Analysis of EEG signals and facial expressions for continuous emotion detection. IEEE Transactions on Affective Computing, 7(1):17–28, 2016.
[71] T. Starner, J. Weaver, and A. Pentland. Real-time American Sign Language recognition using desk and wearable computer-based video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1371–1375, 1998.
[72] P. Trigueiros, F. Ribeiro, and L. P. Reis. Vision-based referee sign language recognition system for the RoboCup MSL league. In Robot Soccer World Cup, pages 360–372. Springer, 2013.
[73] P. Trigueiros, F. Ribeiro, and L. P. Reis. Hand gesture recognition system based in computer vision and machine learning. In Developments in Medical Image Processing and Computational Vision, pages 355–377. Springer, 2015.
[74] J. Vales-Alonso, D. Chaves-Diéguez, P. López-Matencio, J. J. Alcaraz, F. J. Parrado-García, and F. J. González-Castaño. Saeta: A smart coaching assistant for professional volleyball training. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 45(8):1138–1150, 2015.
[75] J. P. Wachs, M. Kölsch, H. Stern, and Y. Edan. Vision-based hand-gesture applications. Communications of the ACM, 54(2):60–71, 2011.
[76] C. Wang, Z. Liu, and S.-C. Chan. Superpixel-based hand gesture recognition with Kinect depth camera. IEEE Transactions on Multimedia, 17(1):29–39, 2015.
[77] J. Wang, Y. Chen, S. Hao, X. Peng, and L. Hu. Deep learning for sensor-based activity recognition: A survey. arXiv preprint arXiv:1707.03502, 2017.
[78] S. B. Wang, A. Quattoni, L.-P. Morency, D. Demirdjian, and T. Darrell. Hidden conditional random fields for gesture recognition. In Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, volume 2, pages 1521–1527. IEEE, 2006.
[79] Y. Wang, H. Wang, and X. Li. An intelligent recommendation system model based on style for virtual home furnishing in three-dimensional scene. In Computational and Business Intelligence (ISCBI), 2013 International Symposium on, pages 213–216. IEEE, 2013.
[80] Z. Wang, M. Guo, and C. Zhao. Badminton stroke recognition based on body sensor networks. IEEE Transactions on Human-Machine Systems, 46(5):769–775, 2016.
[81] M. J. Wilber, I. S. Kwak, and S. J. Belongie. Cost-effective hits for relative similarity comparisons. In Second AAAI Conference on Human Computation and Crowdsourcing, 2014.
[82] J. Wu, L. Sun, and R. Jafari. A wearable system for recognizing American Sign Language in real-time using IMU and surface EMG sensors. IEEE Journal of Biomedical and Health Informatics, 20(5):1281–1290, 2016.
[83] R. Xie and J. Cao. Accelerometer-based hand gesture recognition by neural network and similarity matching. IEEE Sensors Journal, 16(11):4537–4545, 2016.
[84] R. Xie, X. Sun, X. Xia, and J. Cao. Similarity matching-based extensible hand gesture recognition. IEEE Sensors Journal, 15(6):3475–3483, 2015.
[85] K. Xu, H. Li, H. Zhang, D. Cohen-Or, Y. Xiong, and Z.-Q. Cheng. Style-content separation by anisotropic part scales. ACM Transactions on Graphics (TOG), 29(6):184, 2010.
[86] S. Yao, S. Hu, Y. Zhao, A. Zhang, and T. Abdelzaher. DeepSense: A unified deep learning framework for time-series mobile sensing data processing. In Proceedings of the 26th International Conference on World Wide Web, pages 351–360. International World Wide Web Conferences Steering Committee, 2017.
[87] A. J. Young, L. H. Smith, E. J. Rouse, and L. J. Hargrove. Classification of simultaneous movements using surface EMG pattern recognition. IEEE Transactions on Biomedical Engineering, 60(5):1250–1258, 2013.
[88] L.-F. Yu, S.-K. Yeung, C.-K. Tang, D. Terzopoulos, T. F. Chan, and S. J. Osher. Make it home: Automatic optimization of furniture arrangement. ACM Transactions on Graphics (TOG), 30(4):86, 2011.
[89] M. E. Yumer, S. Chaudhuri, J. K. Hodgins, and L. B. Kara. Semantic shape editing using deformation handles. ACM Transactions on Graphics (TOG), 34(4):86, 2015.
[90] X. Zhang, X. Chen, Y. Li, V. Lantz, K. Wang, and J. Yang. A framework for hand gesture recognition based on accelerometer and EMG sensors. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 41(6):1064–1076, 2011.
[91] Y. Zhang, G. Pan, K. Jia, M. Lu, Y. Wang, and Z. Wu. Accelerometer-based gait recognition by sparse representation of signature points with clusters. IEEE Transactions on Cybernetics, 45(9):1864–1875, 2015.
[92] Y. Zhang, S. Song, P. Tan, and J. Xiao. Panocontext: A whole-room 3d context model for panoramic scene understanding. In European Conference on Computer Vision, pages 668–686. Springer, 2014.
[93] Y. Zheng, Q. Liu, E. Chen, Y. Ge, and J. L. Zhao. Time series classification using multi-channels deep convolutional neural networks. In International Conference on Web-Age Information Management, pages 298–310. Springer, 2014.
[94] J. Zhu, Y. Guo, and H. Ma. A data-driven approach for furniture and indoor scene colorization. IEEE Transactions on Visualization and Computer Graphics, 2017.
Full-text access rights:
  • The author agreed to authorize on-campus browsing/printing of the electronic full text, to be made publicly available from 2023-08-01.


  • If you have any questions, please contact the library.
    Phone: (06) 2757575 #65773
    E-mail: etds@email.ncku.edu.tw