System ID  U0026-2707202016240200
Title (Chinese)  使用機器學習尋找Slater-Koster方法的參數
Title (English)  Use Machine Learning to Find Slater-Koster Method's Parameters
Institution  National Cheng Kung University
Department  Department of Physics
Academic Year  108 (2019-2020)
Semester  2
Year of Publication  109 (2020)
Author  Shih-Hao Huang (黃世豪)
Student ID  L26061113
Degree  Master's
Language  English
Pages  77
Committee  Advisor: 張泰榕
  Committee member: 張景皓
  Committee member: 劉明豪
Keywords  Machine Learning, Deep Learning, Tight-Binding Method
Abstract (Chinese)  This thesis uses machine learning to predict the parameters of the Slater-Koster method. We train a computer to map calculated Hamiltonians to Slater-Koster parameters, and we use the predicted parameters to construct the Slater-Koster Hamiltonian.
In Chapter 2, I briefly introduce some physics background and deep learning; in this part I mainly discuss how deep learning works. In Chapter 3, I introduce the tight-binding method, further aspects of deep learning, and the unsupervised K-means algorithm; in this part I discuss how to train a deep learning model and how to make it perform better. K-means is included because I need it to cluster some of the data.
In Chapter 4, the materials WSe2 and Sb2Te3 are taken as examples. First, I tried many deep learning models and techniques to predict the Slater-Koster parameters of WSe2, but some parameters still could not be predicted by deep learning, so I clustered them with the K-means method to find the closest parameters. Next, I used the same procedure to predict the parameters of Sb2Te3.
The results show that machine learning can be used to find the Slater-Koster parameters, but some results still need improvement. The complete procedure takes two steps; we cannot predict all the parameters with just one machine learning tool. When the calculated Hamiltonian is not a Slater-Koster Hamiltonian, there will be some deviation in the predicted results.
Abstract (English)  This thesis uses machine learning to predict the parameters of the Slater-Koster method. We train a model to map a calculated Hamiltonian to the Slater-Koster parameters, and we then use the predicted parameters to construct a Slater-Koster Hamiltonian.
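
As a rough illustration of the mapping described above, the following is a minimal sketch of a feed-forward regression network in Keras. The layer sizes, the data files hamiltonians.npy and sk_params.npy, and the flattening of each Hamiltonian into a real-valued feature vector are all illustrative assumptions, not the architecture actually used in the thesis.

    # Minimal sketch: regress Slater-Koster parameters from flattened
    # Hamiltonian matrix elements. File names, layer sizes, and training
    # settings are illustrative assumptions, not the thesis's setup.
    import numpy as np
    from tensorflow import keras

    X = np.load("hamiltonians.npy")  # hypothetical: (n_samples, n_matrix_elements)
    y = np.load("sk_params.npy")     # hypothetical: (n_samples, n_parameters)

    model = keras.Sequential([
        keras.layers.Dense(256, activation="relu"),
        keras.layers.Dense(256, activation="relu"),
        keras.layers.Dense(y.shape[1]),  # linear output: one unit per parameter
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, batch_size=32, epochs=100, validation_split=0.1)

A mean-squared-error loss is the natural choice here because the targets are continuous parameter values rather than class labels.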

In Chapter 2, I briefly introduce the relevant physics background and deep learning, focusing mainly on how deep learning works. In Chapter 3, I introduce the tight-binding method, further aspects of deep learning, and the unsupervised K-means algorithm; there I discuss how to train a deep neural network and how to make it perform better. K-means is included because it is needed to cluster some of the data.
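
The "how deep learning works" material in Chapter 2 centres on gradient-based optimizers (gradient descent, SGD, momentum, Adagrad/RMSprop, and ADAM in the table of contents below). As a self-contained illustration of the basic update rule, here is plain-NumPy gradient descent with momentum on a least-squares problem; the data and hyperparameters are arbitrary choices, not values from the thesis.

    # Gradient descent with momentum (cf. Sections 2.5.1-2.5.3) on the
    # least-squares loss 0.5 * mean((A @ w - b)**2); all values arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(100, 5))
    b = rng.normal(size=100)

    w = np.zeros(5)        # parameters being optimized
    v = np.zeros(5)        # velocity (momentum accumulator)
    lr, beta = 0.01, 0.9   # learning rate and momentum coefficient

    for step in range(1000):
        grad = A.T @ (A @ w - b) / len(b)  # gradient of the loss w.r.t. w
        v = beta * v - lr * grad           # decay old velocity, add new gradient step
        w += v                             # move along the accumulated velocity

Adaptive methods such as Adagrad, RMSprop, and ADAM differ mainly in rescaling this step per parameter using running statistics of past gradients.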

In Chapter 4, the materials WSe2 and Sb2Te3 serve as examples. First, I tried many deep learning models and techniques to predict the Slater-Koster parameters of WSe2, but some parameters remained beyond what deep learning could predict, so I clustered them with the K-means method to find the nearest parameters. Next, I used the same procedure to find the parameters of Sb2Te3.
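
For the parameters the network could not predict, the paragraph above describes a K-means fallback that selects the nearest parameters. Below is a minimal scikit-learn sketch of such a fallback; the cluster count and the file unpredicted_params.npy are placeholders, and the thesis itself clusters CNN-extracted features of band-structure figures rather than raw parameters (Section 4.1.2).

    # Minimal K-means sketch (scikit-learn): group hard-to-predict
    # parameter sets and treat each cluster centre as the nearest
    # representative. k and the data file are placeholder assumptions.
    import numpy as np
    from sklearn.cluster import KMeans

    params = np.load("unpredicted_params.npy")  # hypothetical (n_samples, n_features)
    km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(params)

    labels = km.labels_            # cluster index assigned to each training sample
    centers = km.cluster_centers_  # representative parameter set per cluster

A new sample can then be routed with km.predict and assigned the corresponding row of centers.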

The results show that machine learning can be used to find Slater-Koster parameters, although some results still need improvement. The full procedure takes two steps: no single machine learning tool could predict all of the parameters. When the calculated Hamiltonian is not a Slater-Koster Hamiltonian, the predicted results show some deviations.
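
The two-step structure can be shown schematically: step 1 applies the trained network, and step 2 fills in the remaining parameters from the nearest cluster centre. The helper below is hypothetical glue code only, reusing the model and km objects from the earlier sketches and assuming the K-means model was fit on the network's own outputs, with a lookup table centre_to_rest for the hard-to-learn block.

    # Hypothetical glue for the two-step procedure described above.
    # Assumes km was fit on DNN outputs of training samples, and
    # centre_to_rest[c] stores the hard-to-learn parameters for cluster c.
    import numpy as np

    def two_step_predict(H_flat, model, km, centre_to_rest):
        # Step 1: DNN regression for the directly learnable parameters.
        dnn_part = model.predict(H_flat[None, :], verbose=0)[0]
        # Step 2: nearest K-means cluster supplies the remaining block.
        cluster = km.predict(dnn_part[None, :])[0]
        return np.concatenate([dnn_part, centre_to_rest[cluster]])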
Table of Contents  1 Introduction 1
2 Theory 4
2.1 Crystal Structure and Reciprocal Lattice 4
2.2 Bloch Functions 4
2.3 Brillouin Zone 4
2.4 Band Structure Theory 5
2.5 Understanding How Deep Learning Works 6
2.5.1 Gradient Descent 9
2.5.2 Stochastic Gradient Descent (SGD) 10
2.5.3 Adding Momentum 10
2.5.4 Adagrad and RMSprop 11
2.5.5 ADAM 12
3 Method 13
3.1 Tight-Binding Method 13
3.1.1 LCAO 13
3.1.2 Slater-Koster Method and Table 14
3.2 Density Functional Theory (DFT) 15
3.3 Deep Learning (Deep Neural Network) 17
3.3.1 Basic Neural Network 17
3.3.2 Training Network 19
3.3.3 The Backpropagation Algorithm 19
3.3.4 Regularizing Networks 20
3.3.4.1 Dropout 21
3.3.4.2 Batch Normalization 21
3.4 K-means Method 22
3.5 A Way to Predict the Parameters of the Slater-Koster Method 22
4 Results and Discussion 24
4.1 Material WSe2 24
4.1.1 Use DNN to Predict Slater-Koster Parameters 26
4.1.1.1 Simple Start: Build a Simple Deep Learning Model 26
4.1.1.2 Use Different Batch Size 29
4.1.1.3 Increase the Amount of Data from 10,000 Sets to 50,000 Sets 29
4.1.1.3.1 Use Different Optimizer 29
4.1.1.3.2 Add Batch Normalization 33
4.1.1.3.3 Train the Parameters Separately 35
4.1.1.4 Is Data Normalization Good for Learning? 36
4.1.1.4.1 Min-Max Normalization 37
4.1.1.4.2 Z Normalization 38
4.1.1.4.3 No Normalization 40
4.1.2 Use the K-means Method to Cluster Similar Figures 42
4.1.2.1 Why Do We Need the K-means Method? 42
4.1.2.2 Use a CNN Model to Extract the Features 43
4.1.3 Overall Process 46
4.2 Material Sb2Te3 52
4.2.1 First Band Structure 55
4.2.1.1 Simple Start 55
4.2.1.2 First Way 56
4.2.1.2.1 Use Deep Learning to Predict Data 58
4.2.1.2.2 Use K-means Method to Cluster Data 59
4.2.1.3 Second Way 66
4.2.2 Second Band Structure 69
4.3 Learn from the Results 73
5 Conclusion 75
6 References 76
References  [1] Shu-Ting Pi (2017). Exploring Novel Materials Using Deep Learning Algorithm. https://github.com/pipidog/TBDNN/blob/master/TBDNN%20proposal.pdf
[2] Chollet, F. (2017). Deep Learning with Python. Manning. ISBN 9781617294433.
[3] Pankaj Mehta, Marin Bukov, Ching-Hao Wang, Alexandre G. R. Day, Clint Richardson, Charles K. Fisher, David J. Schwab (2019). A high-bias, low-variance introduction to Machine Learning for physicists. Physics Reports, Volume 810, ISSN 0370-1573.
[4] Gregory H. Wannier (1937). The Structure of Electronic Excitation Levels in Insulating Crystals. Phys. Rev. 52, 191.
[5] J. C. Slater, G. F. Koster (1954). Simplified LCAO Method for the Periodic Potential Problem. Phys. Rev. 94, 1498.
[6] Charles Kittel (2015). Introduction to Solid State Physics, 8th ed. Wiley. ISBN 9780471415268.
[7] Wahyu Setyawan, Stefano Curtarolo (2010). High-throughput electronic band structure calculations: Challenges and tools. Computational Materials Science, Volume 49, Issue 2, ISSN 0927-0256.
[8] LeCun, Yann, Bottou, Léon, Orr, Genevieve B., Müller, Klaus-Robert (1998). Efficient backprop. In: Neural Networks: Tricks of the Trade. Springer.
[9] Bottou, Léon (2012). Stochastic gradient descent tricks. In: Neural Networks: Tricks of the Trade. Springer, pp. 421–436.
[10] Duchi, John, Hazan, Elad, Singer, Yoram (2011). Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res. 12, 2121–2159.
[11] Tieleman, Tijmen, Hinton, Geoffrey (2012). Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Netw. Mach. Learn. 4(2), 26–31.
[12] Kingma, Diederik P., Ba, Jimmy (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
[13] Mohammad Nakhaee, S. Ahmad Ketabi, Francois M. Peeters (2020). Tight-Binding Studio: A technical software package to find the parameters of tight-binding Hamiltonian. Computer Physics Communications, Volume 254, ISSN 0010-4655.
[14] Klaus Capelle (2002). A bird's-eye view of density-functional theory. arXiv preprint arXiv:cond-mat/0211443.
[15] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov (2014). Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research 15(56), 1929–1958.
[16] Sergey Ioffe, Christian Szegedy (2015). Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv preprint arXiv:1502.03167.
[17] Yann LeCun, Yoshua Bengio, Geoffrey Hinton (2015). Deep learning. Nature 521, 436–444. https://doi.org/10.1038/nature14539
Full-Text Usage Permissions
  • On-campus browsing/printing of the electronic full text authorized, available from 2023-07-10.
  • Off-campus browsing/printing of the electronic full text authorized, available from 2023-07-10.

