System ID U0026-2508202015550600
Thesis Title (Chinese) 集成學習框架於糖尿病視網膜病變之分類與診斷
Thesis Title (English) An Ensemble Learning Framework for Diabetic Retinopathy Classification and Diagnosis
University National Cheng Kung University
Department (Chinese) 醫學資訊研究所
Department (English) Institute of Medical Informatics
Academic Year 108 (ROC calendar; 2019–2020)
Semester 2
Year of Publication 109 (ROC calendar; 2020)
Author (Chinese) 鄭文涵
Author (English) Wen-Han Zheng
Student ID Q56074077
Degree Master's
Language English
Number of Pages 44
Examination Committee Advisor - 王士豪
Committee Member - 連震杰
Committee Member - 吳明龍
Committee Member - 許聖民
Committee Member - 邱南津
Keywords (Chinese) 糖尿病視網膜病變、彩色眼底影像、深度學習、集成模型、多樣化
Keywords (English) diabetic retinopathy; color fundus images; deep learning; ensemble model; diversity
Subject Classification
Abstract (Chinese) Diabetic retinopathy, caused by diabetes mellitus, is one of the main causes of blindness. The longer a patient has lived with diabetes, the greater the chance of developing diabetic retinopathy. Regular fundus examination and early intervention are the most effective ways to control the disease, and the large number of diabetic patients, together with the correspondingly large demand for screening, has raised interest in developing computer-aided diagnostic systems. In recent years, deep neural networks have brought breakthroughs in many fields, including medical image analysis. To speed up the diagnosis of diabetic retinopathy, many deep-learning methods for automatic disease detection have been proposed, but none of them classifies the five severity stages accurately, and no prior work has discussed how models should be selected for this disease. This study therefore uses an ensemble model to improve the classification accuracy for each stage of diabetic retinopathy and proposes a model selection algorithm that evaluates various performance and diversity indicators to pick a set of models and combine them. The experimental results show that the ensemble model based on accuracy and diversity has higher recall and achieves better classification performance with fewer models.
Abstract (English) Diabetic retinopathy, a complication of diabetes mellitus, is one of the leading causes of blindness. The longer a patient has diabetes, the greater the risk of developing diabetic retinopathy. Regular fundus examination and early intervention are the most effective ways to control the disease, and the large number of diabetic patients, together with the corresponding demand for screening, has motivated the development of computer-aided diagnostic systems. In recent years, deep neural networks have produced breakthroughs in many fields, including medical image analysis. To accelerate the diagnosis of diabetic retinopathy, many deep learning methods have been developed to detect the disease automatically. However, these methods do not classify all five severity stages accurately, and the basis for selecting models for this disease has not been examined. In this study, an ensemble model was therefore used to improve classification accuracy at each stage, and a model selection algorithm was proposed that evaluates performance and diversity indicators to choose a set of models and combine them. The experimental results showed that the ensemble built on accuracy and diversity achieves higher recall and reaches better classification performance with fewer models.
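Note on the selection procedure described in the abstracts: the text only states that performance and diversity indicators are evaluated before a subset of models is combined, so the Python sketch below is an illustrative reconstruction rather than the thesis's actual algorithm. It assumes the candidate CNNs are compared on a held-out validation split (e.g., from EyePACS), uses Pearson correlation between predicted labels as the inverse diversity measure (the statistic listed in Section 4.3 of the table of contents), weighs accuracy against diversity with a hypothetical alpha parameter, selects models greedily, and combines them by soft voting; every function name and parameter here is an assumption.

import numpy as np

NUM_STAGES = 5  # diabetic retinopathy severity stages 0-4


def accuracy(probs, labels):
    """Top-1 accuracy of one model's class probabilities against the true labels."""
    return float(np.mean(np.argmax(probs, axis=1) == labels))


def pairwise_diversity(probs_a, probs_b):
    """Diversity of two models, taken here as 1 minus the Pearson correlation
    between their predicted labels (higher means more diverse)."""
    preds_a = np.argmax(probs_a, axis=1).astype(float)
    preds_b = np.argmax(probs_b, axis=1).astype(float)
    return 1.0 - np.corrcoef(preds_a, preds_b)[0, 1]


def select_models(val_probs, labels, k=3, alpha=0.5):
    """Greedily pick k models, trading off individual accuracy against the
    average diversity with respect to the models already chosen."""
    accs = [accuracy(p, labels) for p in val_probs]
    selected = [int(np.argmax(accs))]  # seed with the single most accurate model
    remaining = [i for i in range(len(val_probs)) if i not in selected]
    while len(selected) < k and remaining:
        def score(i):
            div = np.mean([pairwise_diversity(val_probs[i], val_probs[j])
                           for j in selected])
            return alpha * accs[i] + (1.0 - alpha) * div
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected


def soft_vote(probs_list):
    """Combine the selected models by averaging their class probabilities."""
    return np.argmax(np.mean(np.stack(probs_list), axis=0), axis=1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, NUM_STAGES, size=200)  # stand-in ground truth
    # Stand-in validation predictions for five hypothetical candidate CNNs.
    candidates = [rng.dirichlet(np.ones(NUM_STAGES), size=200) for _ in range(5)]
    chosen = select_models(candidates, labels, k=3)
    preds = soft_vote([candidates[i] for i in chosen])
    print("selected models:", chosen, "ensemble accuracy:", np.mean(preds == labels))

In the thesis itself, the probability arrays would come from the trained networks (e.g., the pre-trained architectures listed in Section 3.3.1) evaluated on the EyePACS data rather than from the random placeholders used here.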
Table of Contents
Abstract I
ABSTRACT II
CONTENT III
LIST OF FIGURES V
LIST OF TABLES VI
CHAPTER 1. INTRODUCTION 1
1.1 Diabetic Retinopathy 1
1.2 Screening for Diabetic Retinopathy 3
1.3 Deep Learning and Medical Image Analysis 4
1.4 Related Works 4
1.5 Motivations and Objectives 5
CHAPTER 2. BACKGROUND 6
2.1 Fundus Photography 6
2.2 Convolutional Neural Network 7
2.2.1 Convolution Layer 7
2.2.2 Pooling Layer 8
2.2.3 Fully Connected Layer 9
2.2.4 Softmax Layer 9
2.3 Ensemble Learning 10
CHAPTER 3. MATERIALS AND METHODS 11
3.1 Datasets 11
3.1.1 EyePACS – Kaggle’s DR Detection Challenge 11
3.1.2 DiaretDB1 – Standard DR Database Calibration Level 11
3.2 Data Preprocessing 12
3.2.1 Size Normalization 12
3.2.2 Data Cleaning 13
3.2.3 Data Augmentation 14
3.3 Convolutional Neural Network Models 15
3.3.1 Pre-trained Models 15
3.3.2 Computing Environment 18
3.3.3 Training Parameters 18
3.4 Visualization Attention – Class Activation Map 19
3.5 Ensemble Framework 20
3.5.1 Lesion Detection on the DiaretDB1 Dataset 20
3.5.2 Evaluate Performance on the EyePACS Dataset 21
3.5.3 Model Selection Algorithm 21
3.5.4 Ensemble Model 23
CHAPTER 4. RESULTS AND DISCUSSION 24
4.1 Lesion Detection Performance 24
4.2 Model Classification Performance 27
4.3 Pearson Correlation Coefficient 37
4.4 Ensemble Model Performance 38
CHAPTER 5. CONCLUSIONS 42
CHAPTER 6. FUTURE WORKS 42
REFERENCE 44
Full-Text Access Permissions
  • On-campus browsing and printing of the electronic full text is authorized, publicly available from 2025-07-22.
  • Off-campus browsing and printing of the electronic full text is authorized, publicly available from 2025-07-22.

