System ID: U0026-1611201902310800
Title (Chinese): 利用深度學習對動態心血管磁共振影像做右心室分割
Title (English): Right Ventricle Segmentation from Cine Cardiac MRI Using Deep Learning
University: National Cheng Kung University
Department (Chinese): 資訊工程學系
Department (English): Institute of Computer Science and Information Engineering
Academic year: 108
Semester: 1
Year of publication: 108
Author (Chinese): 許瀚中
Author (English): Han-Chung Hsu
Student ID: P76064407
Degree: Master's
Language: English
Number of pages: 79
Committee: Advisor - 吳明龍
Committee member - 趙梓程
Committee member - 莊子肇
Keywords (Chinese): 心臟動態磁振造影, 自動切割, 深度學習, 高斯混合模型, 右心室, U-net, MICCAI 2012
Keywords (English): cine cardiac magnetic resonance imaging, automatic segmentation, deep learning, Gaussian mixture model, right ventricle, U-net, MICCAI 2012
Subject classification:
Abstract (Chinese): Purpose: Cine cardiac MRI is used to diagnose a variety of cardiovascular diseases, and the structure and function of the right ventricle seen in these images allow the progression of many such diseases to be assessed. To compute the related cardiac functional parameters, the right ventricle must first be segmented from the images; however, manual segmentation is time-consuming and requires a high level of expertise. Motivated by the recent surge of research on and applications of convolutional neural networks, this work aims to build an automatic right-ventricle segmentation tool based on deep learning. Drawing on the strengths of the Inception model and ResNet, we modified the U-net architecture to reduce the number of parameters while improving segmentation accuracy.
Methods: We used the MICCAI 2012 Right Ventricle Segmentation Challenge dataset, which contains 16 subjects for training, 16 for testing set 1, and 16 for testing set 2. Because only the heart and its surrounding tissues move in cine cardiac images, a fast Fourier transform along the time axis extracts the cardiac motion information. The centroid of the resulting harmonic image falls roughly on the interventricular septum, so we cropped the cardiac region around this point as the region of interest for model training and validation. During training we used cross validation to compare the performance of different models, and we then proposed the Residual U-Inception Net (RUIN), a supervised model that segments the right ventricle automatically. The predicted regions were converted into contour data for subsequent computation of right-ventricular functional parameters. In addition, since the pixel intensities of cardiac images can be separated into three classes, background, myocardium, and blood/fat, we fitted a Gaussian mixture model to the intensity distribution to extract image features and compared how different inputs affect model performance.
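The FFT-based ROI detection described above can be illustrated with a short sketch. The following Python snippet is a minimal illustration under stated assumptions, not the thesis code: it assumes one cine slice is stored as a NumPy array of shape (frames, height, width), takes the magnitude of the first temporal harmonic, and crops a fixed 128 × 128 window (the crop size and function name are illustrative).

```python
import numpy as np

def detect_roi(cine, crop=128):
    """Return (row, col) slices centred on the moving heart (sketch)."""
    # Magnitude of the first temporal harmonic: the DC term (static tissue)
    # is skipped, so pixels that vary with the cardiac cycle dominate.
    harmonic = np.abs(np.fft.fft(cine, axis=0)[1])

    # Centre of mass of the harmonic image, which lies roughly on the
    # interventricular septum.
    rows, cols = np.indices(harmonic.shape)
    total = harmonic.sum()
    cy = int(round((rows * harmonic).sum() / total))
    cx = int(round((cols * harmonic).sum() / total))

    half = crop // 2
    return slice(cy - half, cy + half), slice(cx - half, cx + half)

# Usage (hypothetical array `cine` of shape (frames, height, width)):
# r, c = detect_roi(cine)
# roi_stack = cine[:, r, c]   # cropped cine stack fed to the network
```

Taking the first harmonic rather than the DC component suppresses static tissue, which pulls the centre of mass toward the beating heart.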
Results: The proposed RUIN model achieved a Dice metric of 0.82 ± 0.23 and a Hausdorff distance of 8.41 ± 8.77 on testing set 1, and a Dice metric of 0.85 ± 0.21 and a Hausdorff distance of 7.06 ± 8.06 on testing set 2. In cross validation, using the preprocessed images as input yielded a Dice metric of 0.83 ± 0.21, whereas using the features extracted by the Gaussian mixture model yielded only 0.81 ± 0.22. Compared with U-net, the RUIN model has 39% fewer parameters and 76% shorter prediction time.
Conclusion: Compared with other automatic right-ventricle segmentation methods, our deep learning framework has the potential to be among the state-of-the-art techniques. Nevertheless, because the inter-expert variability corresponds to a Dice metric of 0.90 ± 0.10, right-ventricle segmentation remains a challenging problem.
Abstract (English): Purpose: Cine cardiac magnetic resonance imaging (MRI) is helpful for diagnosing various cardiac disorders, and the structure and function of the right ventricle (RV) are important for managing these diseases and evaluating their progression. To calculate cardiac functional parameters, the RV cavity must first be segmented in the MR images. However, manual segmentation is time-consuming and requires a high level of expertise. Motivated by recent advances in convolutional neural networks, this study presents an automatic segmentation framework based on deep learning. We modified the U-net architecture, drawing on the advantages of the Inception model and ResNet, to reduce the number of parameters in the network and to improve segmentation accuracy.
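The abstract does not spell out the RUIN building blocks, but the general idea of replacing a plain U-net convolution block with an Inception-style multi-branch block plus a residual shortcut can be sketched as follows. This is a hypothetical Keras sketch: the branch widths, kernel sizes, and the residual_inception_block name are illustrative assumptions, not the architecture reported in the thesis.

```python
from tensorflow.keras import layers

def residual_inception_block(x, filters):
    """Inception-style multi-branch convolution with a residual shortcut (sketch)."""
    # Parallel branches with different receptive fields (Inception idea);
    # splitting the filters across branches keeps the parameter count low.
    b1 = layers.Conv2D(filters // 2, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(filters // 4, 3, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(filters // 4, 5, padding="same", activation="relu")(x)
    merged = layers.BatchNormalization()(layers.Concatenate()([b1, b3, b5]))

    # Residual shortcut (ResNet idea); a 1x1 convolution matches the channel
    # count so the two tensors can be added element-wise.
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    return layers.Activation("relu")(layers.Add()([merged, shortcut]))
```

Splitting the filters across 1 × 1, 3 × 3, and 5 × 5 branches is what lowers the parameter count relative to a single wide convolution block, while the shortcut eases gradient flow as in ResNet.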
Methods: We used the MICCAI 2012 Right Ventricle Segmentation Challenge (RVSC) dataset, which contains 16 patients for training, 16 patients for testing dataset 1, and 16 patients for testing dataset 2. Because only the heart and its surrounding tissues move in cine cardiac MRI, we applied a fast Fourier transform (FFT) along the time axis to capture the dynamic information. The centroid of the resulting harmonic image lies roughly at the center of the interventricular septum, so we used this center of mass to select a region of interest (ROI) containing the RV for training and testing. During the training phase, we used cross validation to evaluate the performance of different models. Finally, we proposed a novel supervised model, the Residual U-Inception Net (RUIN), to segment the right ventricle automatically. After predicting the RV cavity, we converted the predicted mask into contour data for subsequent evaluation of cardiac functional parameters. Since the intensities in cine cardiac MR images can be clustered into three classes, namely background, myocardium, and blood/fat, we also used the distribution of pixel values and a Gaussian mixture model (GMM) for feature extraction and compared the impact of different inputs on our model.
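The GMM feature extraction mentioned at the end of the Methods can be sketched with scikit-learn. This is a minimal illustration, assuming a GaussianMixture with three components is fitted to the pixel intensities of one cropped slice and the per-class posterior probabilities are stacked as extra input channels; the three-component choice follows the background/myocardium/blood-fat grouping above, while the function name and usage are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_feature_maps(image, n_classes=3):
    """Posterior probability maps of each intensity class, shape (H, W, n_classes)."""
    pixels = image.reshape(-1, 1).astype(np.float64)
    gmm = GaussianMixture(n_components=n_classes, random_state=0).fit(pixels)
    # predict_proba returns, per pixel, the probability of belonging to each
    # class (background, myocardium, blood/fat in this setting).
    return gmm.predict_proba(pixels).reshape(*image.shape, n_classes)

# Usage (hypothetical 2D slice `image`): stack the original intensities with
# the class posteriors to form a multi-channel network input.
# network_input = np.dstack([image, gmm_feature_maps(image)])
```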
Results: The proposed RUIN architecture achieved a Dice metric (DM) of 0.82 ± 0.23 and a Hausdorff distance (HD) of 8.41 ± 8.77 on testing dataset 1, and a DM of 0.85 ± 0.21 and an HD of 7.06 ± 8.06 on testing dataset 2. In cross validation, RUIN achieved a DM of 0.83 ± 0.21 using the preprocessed images as input, whereas the GMM feature input achieved only 0.81 ± 0.22. The RUIN model has 39% fewer parameters and 76% shorter prediction time than the U-net model.
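For reference, the two reported metrics can be computed from binary masks as in the following sketch. The thesis converts masks to contour data and may compute the Hausdorff distance in millimetres, so this pixel-grid version is only an approximation under stated assumptions.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_metric(a, b):
    """Dice metric 2|A∩B| / (|A| + |B|) for two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between foreground pixel coordinates."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```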
Conclusion: Compared with other automatic right ventricle segmentation methods, our deep learning framework has the potential to be among the state-of-the-art techniques. However, right ventricle segmentation remains a challenging task, since the inter-expert variability is a DM of 0.90 ± 0.10.
Table of Contents: Abstract i
Chinese Abstract ii
Acknowledgements iii
Contents iv
Chapter 1 Introduction 1
Chapter 2 Materials and Methods 4
Overview 4
Dataset 4
MICCAI 2012 right ventricle segmentation challenge 4
b-SSFP 6
Preprocessing 8
Fourier transform and ROI Detection 8
Normalization 12
Augmentation 13
Ground truth mask generating 14
Machine learning model 16
Gaussian mixture model 16
U-net 20
Inception 22
ResNet 23
RUIN 25
Optimization 27
Loss function 27
Backpropagation 28
Adam optimization 30
Stop criteria 31
Hyperparameter and feature channels 31
Cross validation and final submission 31
Postprocessing 32
Evaluation 33
Technical performance 33
Clinical performance 34
Experiment environment 35
Chapter 3 Result 36
Parameters 36
Cross validation 37
Overview 37
Using preprocessed images as input 38
Using GMM features as input 46
Results of the testing datasets 48
Related work 48
Testing dataset 1 50
Testing dataset 2 52
Average of testing datasets 55
Clinical performance 56
Summary 59
Chapter 4 Discussion 60
Feature map 60
ImageSet1 60
ImageSet2 62
Summary 66
Cropping 66
Cropping different size while testing 66
Failure of cropping 67
Data augmentation 69
Loss function 70
Stop criteria 70
Hyperparameters 71
GMM feature 73
Using GMM as segmentation method 73
GMM fitting procedure 73
Chapter 5 Conclusion 75
Reference 76
References: [1] Petitjean C, Zuluaga MA, Bai W, Dacher JN, Grosgeorge D, Caudron J, et al. Right ventricle segmentation from cardiac MRI: a collation study. Med Image Anal 2015;19(1):187-202.
[2] Caudron J, Fares J, Lefebvre V, Vivier PH, Petitjean C, Dacher JN. Cardiac MRI assessment of right ventricular function in acquired heart disease: factors of variability. Acad Radiol 2012;19(8):991-1002.
[3] Bishop CM. Pattern recognition and machine learning. Springer; 2006.
[4] Hu H, Liu H, Gao Z, Huang L. Hybrid segmentation of left ventricle in cardiac MRI using Gaussian-mixture model and region restricted dynamic programming. Magn Reson Imaging 2013;31(4):575-84.
[5] Scheffler K, Lehnhardt S. Principles and applications of balanced SSFP techniques. Eur Radiol 2003;13(11):2409-18.
[6] Schmidhuber J. Deep learning in neural networks: An overview. Neural Networks 2015;61:85-117.
[7] Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1. Lake Tahoe, Nevada: Curran Associates Inc.; 2012:1097-105.
[8] Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. International Conference on Medical image computing and computer-assisted intervention. Springer; 2015:234-41.
[9] Amodei D, Ananthanarayanan S, Anubhai R, Bai J, Battenberg E, Case C, et al. Deep Speech 2: End-to-End Speech Recognition in English and Mandarin. In: Maria Florina B, Kilian QW, eds. Proceedings of The 33rd International Conference on Machine Learning. 48. Proceedings of Machine Learning Research: PMLR; 2016:173-82.
[10] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015;521(7553):436-44.
[11] Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Medical Image Analysis 2017;42:60-88.
[12] Cireşan D, Meier U, Masci J, Schmidhuber J. Multi-column deep neural network for traffic sign classification. Neural Networks 2012;32:333-8.
[13] Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. Proceedings of the IEEE conference on computer vision and pattern recognition. 2015:3431-40.
[14] Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, et al. Going deeper with convolutions. Proceedings of the IEEE conference on computer vision and pattern recognition. 2015:1-9.
[15] Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift. Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37. Lille, France: JMLR.org; 2015:448-56.
[16] Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. Proceedings of the IEEE conference on computer vision and pattern recognition. 2016:2818-26.
[17] He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition. 2016:770-8.
[18] Szegedy C, Ioffe S, Vanhoucke V, Alemi AA. Inception-v4, inception-resnet and the impact of residual connections on learning. Thirty-First AAAI Conference on Artificial Intelligence. 2017.
[19] Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research 2014;15(1):1929-58.
[20] Lin X, Cowan BR, Young AA. Automated Detection of Left Ventricle in 4D MR Images: Experience from a Large Study. Berlin, Heidelberg: Springer Berlin Heidelberg; 2006:728-35.
[21] Kingma DP, Ba J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 2014.
[22] Li J, Yu Z, Gu Z, Liu H, Li Y. Dilated-Inception Net: Multi-Scale Feature Aggregation for Cardiac Right Ventricle Segmentation. IEEE Trans Biomed Eng 2019.
[23] Bernstein MA, King KF, Zhou XJ. Handbook of MRI pulse sequences. Elsevier; 2004.
[24] Stanisz GJ, Odrobina EE, Pun J, Escaravage M, Graham SJ, Bronskill MJ, et al. T1, T2 relaxation and magnetization transfer in tissue at 3T. Magn Reson Med 2005;54(3):507-12.
[25] Bottomley PA, Foster TH, Argersinger RE, Pfeifer LM. A review of normal tissue hydrogen NMR relaxation times and relaxation mechanisms from 1-100 MHz: dependence on tissue type, NMR frequency, temperature, species, excision, and age. Med Phys 1984;11(4):425-48.
[26] Gonzalez RC, Woods RE. Digital image processing. Prentice Hall, Upper Saddle River, NJ; 2002.
[27] National Electrical Manufacturers Association. Digital Imaging and Communications in Medicine (DICOM) Standard; Available from: http://medical.nema.org/. [Accessed 10/06 2019].
[28] Milletari F, Navab N, Ahmadi S-A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. 2016 Fourth International Conference on 3D Vision (3DV). IEEE; 2016:565-71.
[29] Isola P, Zhu J-Y, Zhou T, Efros AA. Image-to-image translation with conditional adversarial networks. Proceedings of the IEEE conference on computer vision and pattern recognition. 2017:1125-34.
[30] Ciresan D, Giusti A, Gambardella LM, Schmidhuber J. Deep neural networks segment neuronal membranes in electron microscopy images. Advances in neural information processing systems. 2012:2843-51.
[31] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE 1998;86(11):2278-324.
[32] Lin M, Chen Q, Yan S. Network in network. arXiv preprint arXiv:1312.4400 2013.
[33] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 2014.
[34] Glorot X, Bengio Y. Understanding the difficulty of training deep feedforward neural networks. Proceedings of the thirteenth international conference on artificial intelligence and statistics. 2010:249-56.
[35] Grosgeorge D, Petitjean C, Dacher JN, Ruan S. Graph cut segmentation with a statistical shape model in cardiac MRI. Computer Vision and Image Understanding 2013;117(9):1027-35.
[36] Maier OMO, Jiménez D, Santos A, Ledesma-Carbayo MJ. Segmentation of RV in 4D cardiac MR volumes using region-merging graph cuts. 2012 Computing in Cardiology. 2012:697-700.
[37] Guo Z, Tan W, Wang L, Xu L, Wang X, Yang B, et al. Local Motion Intensity Clustering (LMIC) Model for Segmentation of Right Ventricle in Cardiac MRI Images. IEEE Journal of Biomedical and Health Informatics 2019;23(2):723-30.
[38] Zuluaga MA, Cardoso MJ, Modat M, Ourselin S. Multi-atlas Propagation Whole Heart Segmentation from MRI and CTA Using a Local Normalised Correlation Coefficient Criterion. Berlin, Heidelberg: Springer Berlin Heidelberg; 2013:174-81.
[39] Ringenberg J, Deo M, Devabhaktuni V, Berenfeld O, Boyers P, Gold J. Fast, accurate, and fully automatic segmentation of the right ventricle in short-axis cardiac MRI. Comput Med Imaging Graph 2014;38(3):190-201.
[40] Tran PV. A fully convolutional neural network for cardiac segmentation in short-axis MRI. arXiv preprint arXiv:1604.00494 2016.
[41] Avendi MR, Kheradvar A, Jafarkhani H. Automatic segmentation of the right ventricle from cardiac MRI using a learning-based approach. Magn Reson Med 2017;78(6):2439-48.
[42] Ji Z, Xia Y, Zheng Y. Robust generative asymmetric GMM for brain MR image segmentation. Comput Methods Programs Biomed 2017;151:123-38.
Full-text usage authorization:
  • On-campus browsing/printing of the electronic full text is authorized, available from 2024-11-01.
  • Off-campus browsing/printing of the electronic full text is authorized, available from 2024-11-01.