Low price
CHF 161.60
Not yet published. This item will be available on 01.01.2025.
Authors
Ioannis Stefanou is Professor at École Centrale de Nantes (ECN), France, where he leads several geomechanics projects. His main research interests include mechanics, geomechanics, control, induced seismicity and machine learning.
Félix Darve is Emeritus Professor at the Soils, Solids, Structures, Risks (3SR) laboratory, Grenoble-INP, Grenoble Alpes University, France. His research focuses on computational geomechanics.
Flap text
Machine learning has led to remarkable achievements in many fields of science and technology, and its methods offer powerful new tools to scientists and engineers, opening new paths in geomechanics. The two volumes of Machine Learning in Geomechanics aim to demystify machine learning: they present the main methods and give examples of their application in mechanics and geomechanics. Most chapters provide a pedagogical introduction to the most important machine learning methods and uncover the fundamental notions underlying them. Building from the simplest to the most sophisticated methods, the books offer several hands-on coding examples to help readers understand both the methods and their potential, and to identify possible pitfalls.
Contents
Preface ix
Ioannis STEFANOU and Félix DARVE
Chapter 1 Overview of Machine Learning in Geomechanics 1
Ioannis STEFANOU
1.1. What exactly is machine learning? 1
1.2. Classification of ML methods 7
1.2.1. Supervised versus unsupervised ML 7
1.2.2. Batch versus online ML 9
1.2.3. Instance-based versus model-based ML 10
1.3. ML and geomechanics 12
1.4. Libraries for ML 16
1.5. Bias in ML and limitations 16
1.6. What to expect from these volumes? 19
1.7. Acknowledgments 20
1.8. References 20
Chapter 2 Introduction to Regression Methods 31
Filippo MASI
2.1. Introduction 32
2.2. Linear regression 34
2.2.1. Example 38
2.3. Gradient descent 41
2.3.1. Batch GD 46
2.3.2. Stochastic GD 48
2.3.3. Mini-batch GD 50
2.4. Data preprocessing and model validation 54
2.4.1. Feature scaling 54
2.4.2. Test and validation of a model 56
2.5. Nonlinear regression 58
2.5.1. End-to-end example 59
2.6. Regularization techniques 66
2.6.1. Over- and under-determined systems 66
2.6.2. Regularized regression 69
2.7. Challenges in generalization and extrapolation 72
2.7.1. Interpretable models and where to find them 74
2.8. Bayesian regression 81
2.8.1. Linear Bayesian regression 82
2.8.2. Gaussian process (GP) regression 85
2.9. Conclusions 89
2.10. References 90
Chapter 3 Unsupervised Learning: Basic Concepts and Application to Particle Dynamics 93
Noël JAKSE
3.1. Introduction 93
3.2. Basic concepts 95
3.2.1. Representation of the data: Feature extraction and selection 95
3.2.2. Distance and similarity metrics 97
3.3. Unsupervised learning techniques 99
3.3.1. Clustering 99
3.3.2. Dimensionality reduction 102
3.4. Application to particle dynamics 104
3.4.1. Topological description of local structures 106
3.4.2. Clustering local environments during nucleation 109
3.5. Conclusion 111
3.6. Acknowledgments 112
3.7. References 112
Chapter 4 Classification Techniques in Machine Learning 117
Noël JAKSE
4.1. Introduction 117
4.2. Classification techniques 119
4.2.1. General considerations 119
4.2.2. Typical workflow for classification 120
4.2.3. Evaluation metrics 122
4.2.4. Standard classification algorithms 125
4.3. Active learning (AL) in classification 137
4.3.1. Application of classification 141
4.4. Conclusion 142
4.5. Acknowledgments 143
4.6. References 143
Chapter 5 Artificial Neural Networks: Learning the Optimum Statistical Model from Data 145
Filippo GATTI
5.1. Why PyTorch? 146
5.2. Introduction to sampling theory 150
5.2.1. Statistical models and maximum likelihood estimator 156
5.2.2. On the MLE optimization problem 173
5.2.3. The Fisher information: geometric interpretation 176
5.2.4. The Fisher information: statistical interpretation 177
5.2.5. The principle of maximum entropy (MaxEnt) 184
5.3. Optimizing a neural network 188
5.3.1. First-order gradient descent for empirical loss minimization 195
5.3.2. Second-order gradient descent methods 202
5.3.3. The role of BatchNorm 209
5.3.4. Stochastic gradient descent 212
5.3.5. Beyond SGD: the role of "momentum" 215
5.3.6. Beyond classical momentum SGD: the Nesterov algorithm 218
5.3.7. Optimizing with adaptive learning rates 220
5.4. References 232
List of Authors 237
Index 239
Summary of Volume 2 243