The electronic thesis has not yet been released for public access; for the print copy, please consult the library catalog.
(Note: if the thesis cannot be found, or the holding status shows "closed stacks, not public", the copy is not in the stacks and cannot be accessed.)
System ID	U0026-1408201914204900
Title (Chinese)	以學習多視角表示為目的地對抗式生成多角度口腔影像
Title (English)	Learning Complete Representation for Multi-view Oral Image Generation with Generative Adversarial Networks
University	National Cheng Kung University
Department (Chinese)	電腦與通信工程研究所
Department (English)	Institute of Computer & Communication
Academic Year	107
Semester	2
Publication Year	108
Author (Chinese)	詹育閔
Author (English)	Yu-Min Chan
Student ID	Q36051596
Degree	Master
Language	English
Pages	80
Committee	Advisor: 詹寶珠
Committee Member: 陳智揚
Committee Member: 黃則達
Committee Member: 曾盛豪
Committee Member: 張建禕
Keywords (Chinese, translated)	oral cancer, autofluorescence image, quadratic discriminant analysis, generative adversarial network, deep learning, convolutional neural network
Keywords (English)	oral cancer, autofluorescence image, QDA, GAN, multi-view image, convolutional network
Subject Classification
Abstract (Chinese, translated)	Cancer is a leading cause of death among adults worldwide. In 2016, cancer caused more than 8.9 million deaths globally, and approximately 50,000 Americans are diagnosed with oral cancer each year, of whom only slightly more than half survive beyond five years. Many studies report that early diagnosis and treatment of oral cancer greatly increase life expectancy compared with late detection. To detect oral lesions at an early stage, various oral cancer screening methods have been proposed; among them, optical methods, being fast and non-invasive, are the most suitable for preliminary screening. Their principle is to use changes in the autofluorescence and morphology of oral cells as the basis for discrimination. Research shows that oral cancer cells emit fluorescence under excitation at specific frequency bands. By exploiting this feature, the potential risk of oral cancer can be evaluated by capturing oral fluorescence images. Accordingly, existing image-processing methods for oral cancer detection analyze the features of the autofluorescence image and use a quadratic discriminant analysis (QDA) classifier to classify the data as either cancer or non-cancer. However, because the training data set usually includes only a single view, QDA classification errors frequently occur. This study therefore proposes a generative adversarial network (GAN) model for learning multi-view images from single-view images. The generated multi-view images are then used to re-classify the data, thereby improving the accuracy of the QDA classifier. The reconstructed multi-angle-view oral images thus provide a more effective and safer classification result for future oral screening.
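The QDA decision rule described in the abstract fits one Gaussian (mean, covariance, prior) per class and assigns each sample to the class with the highest quadratic discriminant score. The following is a minimal NumPy sketch on toy two-class data; it is not the thesis's actual texture-feature pipeline, and all names and the synthetic data are illustrative.

```python
import numpy as np

def fit_qda(X, y):
    """Fit per-class Gaussian parameters: (mean, covariance, prior)."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[int(c)] = (Xc.mean(axis=0),
                          # small ridge keeps the covariance invertible
                          np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1]),
                          len(Xc) / len(X))
    return params

def qda_predict(params, X):
    """Pick the class with the highest quadratic discriminant score."""
    classes = sorted(params)
    scores = []
    for c in classes:
        mu, cov, prior = params[c]
        diff = X - mu
        inv = np.linalg.inv(cov)
        # log N(x | mu, cov) + log prior, dropping the shared constant term
        s = (-0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff)
             - 0.5 * np.log(np.linalg.det(cov)) + np.log(prior))
        scores.append(s)
    return np.array(classes)[np.argmax(scores, axis=0)]

# Toy data: two well-separated Gaussian clusters standing in for
# "cancer" vs "non-cancer" feature vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
params = fit_qda(X, y)
pred = qda_predict(params, X)
print((pred == y).mean())  # training accuracy on the toy data
```

In practice the thesis feeds texture features (wavelet and fractional-Brownian-motion descriptors, per its Chapter 3) into such a classifier rather than raw coordinates.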
Abstract (English)	Research shows that oral cancer cells emit fluorescent substances under excitation at specific frequency bands. By exploiting this feature, it is possible to evaluate the potential existence of oral cancer by capturing oral fluorescence images. Accordingly, existing image-processing methods for oral cancer detection analyze the features of the autofluorescence image and use a quadratic discriminant analysis (QDA) classifier to classify the data as either cancer or non-cancer. QDA requires a large volume of training data to achieve a satisfactory accuracy rate. However, it is difficult to collect oral cavity images from patients. Furthermore, QDA classification errors often occur because the training data set usually consists of only single-view images. Accordingly, this study proposes a Generative Adversarial Network (GAN) model for learning multi-view images from single-view images. The generated multi-view images are then used to re-classify the data, thereby improving the accuracy of the QDA classifier. As a result, the reconstructed multi-angle-view oral images provide a more effective and safer classification result for oral screening in the future.
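The adversarial training mentioned above pits a generator against a discriminator. As a minimal sketch, the standard (non-saturating) GAN losses can be computed from discriminator logits as below; this illustrates only the generic GAN objective, not the thesis's multi-view architecture, and all names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gan_losses(d_real_logits, d_fake_logits):
    """Standard non-saturating GAN losses from discriminator logits."""
    d_real = sigmoid(d_real_logits)
    d_fake = sigmoid(d_fake_logits)
    # Discriminator: push D(real) -> 1 and D(fake) -> 0.
    d_loss = -np.mean(np.log(d_real) + np.log(1.0 - d_fake))
    # Generator (non-saturating form): push D(fake) -> 1.
    g_loss = -np.mean(np.log(d_fake))
    return d_loss, g_loss

# When the discriminator confidently separates real (logit 4) from fake
# (logit -4), its loss is small and the generator's loss is large; when
# the generator fools it (fake logit 4), the situation reverses.
confident_d, struggling_g = gan_losses(np.array([4.0]), np.array([-4.0]))
fooled_d, happy_g = gan_losses(np.array([4.0]), np.array([4.0]))
```

The thesis additionally conditions generation on the input view so that the synthesized images preserve the lesion's identity across angles; that conditioning is beyond this sketch.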
Table of Contents	Abstract (Chinese) I
Abstract III
Table of Content VI
List of Tables VII
List of Figures IX
Chapter 1 Introduction 1
Chapter 2 Related Works 5
Chapter 3 Materials and Methods 8
3.1 Oral Cancer Detection System Using Autofluorescence Images 8
3.1.1 Oral Cancer Detection Algorithm 8
3.1.2 Oral Autofluorescence Imaging Device 10
3.2 Classifier Using Texture Features 12
3.2.1 Quadratic Discriminant Analysis Classifier 12
3.2.2 2D Discrete Wavelet Transformation on Image 14
3.2.3 Fractional Brownian Motion Model on Image 18
3.3 Multi-view Generation Model 20
3.3.1 Overview 20
3.3.2 Generator 21
3.3.3 Discriminator 26
3.4 Loss Function 31
Chapter 4 Experimental Results and Discussions 35
4.1 Evaluation Criteria 35
4.2 Different Angle View Texture and Intensity Features in QDA Classifier 37
4.3 Synthetic Multiple Angle View Oral Images Reconstructed by Model 50
4.3.1 Implementation Details 50
4.3.2 Autofluorescence Oral Image Multiple-view Synthesis 51
4.3.3 Impact of the Global and Local Pathways on the Model 54
4.4 Synthetic Multiple Angle View Oral Images in QDA Classifier 60
Chapter 5 Conclusion 76
Reference 77
Full-Text Usage Rights
  • Authorized for on-campus browsing/printing of the electronic full text, publicly available from 2024-09-22.
  • Authorized for off-campus browsing/printing of the electronic full text, publicly available from 2024-09-22.

