Low price
CHF 58.50
Currently out of stock at the publisher. Will be delivered as soon as possible.
NVIDIA's Full-Color Guide to Deep Learning: All Students Need to Get Started and Get Results
Learning Deep Learning is a complete guide to DL. Illuminating both the core concepts and the hands-on programming techniques needed to succeed, this text is suitable for students with prior programming experience but no prior machine learning or statistics experience.
After introducing the essential building blocks of deep neural networks, such as artificial neurons and fully connected, convolutional, and recurrent layers, Ekman shows how to use them to build advanced architectures, including the Transformer. He describes how these concepts are used to build modern networks for computer vision and natural language processing (NLP), including Mask R-CNN, GPT, and BERT. And he explains how to build a natural language translator and a system that generates natural language descriptions of images.
Throughout, Ekman provides concise, well-annotated code examples using TensorFlow with Keras. Corresponding PyTorch examples are provided online, and the book thereby covers the two dominating Python libraries for DL used in industry and academia. He concludes with an introduction to neural architecture search (NAS), exploring important ethical issues and providing resources for further learning.
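For readers curious what such examples look like, below is a minimal, illustrative sketch in the same spirit (it is not taken from the book): a small fully connected Keras network trained on synthetic data. The layer sizes, data, and training settings are assumptions chosen only for illustration.

```python
# Minimal TensorFlow/Keras sketch (illustrative only, not from the book):
# a small fully connected network trained on synthetic binary-classification data.
import numpy as np
from tensorflow import keras

# Synthetic data: label is 1 when the four features sum to more than 2.0.
x_train = np.random.rand(256, 4).astype(np.float32)
y_train = (x_train.sum(axis=1) > 2.0).astype(np.float32)

# One hidden layer with ReLU, sigmoid output for binary classification.
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)
loss, accuracy = model.evaluate(x_train, y_train, verbose=0)
print(f"training accuracy: {accuracy:.2f}")
```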
"To enable everyone to be part of this historic revolution requires the democratization of AI knowledge and resources. This book is timely and relevant towards accomplishing these lofty goals."
--From the foreword by Dr. Anima Anandkumar, Bren Professor, Caltech, and Director of ML Research, NVIDIA

"Ekman uses a learning technique that in our experience has proven pivotal to success--asking the reader to think about using DL techniques in practice. His straightforward approach is refreshing, and he permits the reader to dream, just a bit, about where DL may yet take us."
--From the foreword by Dr. Craig Clawson, Director, NVIDIA Deep Learning Institute

Deep learning (DL) is a key component of today's exciting advances in machine learning and artificial intelligence. This book is ideal for developers, data scientists, analysts, and others--including those with no prior machine learning or statistics experience.

- Explore and master core concepts: perceptrons, gradient-based learning, sigmoid neurons, and backpropagation
- See how DL frameworks make it easier to develop more complicated and useful neural networks
- Discover how convolutional neural networks (CNNs) revolutionize image classification and analysis
- Apply recurrent neural networks (RNNs) and long short-term memory (LSTM) to text and other variable-length sequences
- Master NLP with sequence-to-sequence networks and the Transformer architecture
- Build applications for natural language translation and image captioning

NVIDIA's invention of the GPU sparked the PC gaming market. The company's pioneering work in accelerated computing--a supercharged form of computing at the intersection of computer graphics, high-performance computing, and AI--is reshaping trillion-dollar industries, such as transportation, healthcare, and manufacturing, and fueling the growth of many others.

Register your book for convenient access to downloads, updates, and/or corrections as they become available. See inside the book for details.
About the Author
Magnus Ekman, Ph.D., is a director of architecture at NVIDIA Corporation. His doctorate is in computer engineering, and he is the inventor of multiple patents. He was first exposed to artificial neural networks in the late nineties in his native country, Sweden. After some dabbling in evolutionary computation, he ended up focusing on computer architecture and relocated to Silicon Valley, where he lives with his wife Jennifer, children Sebastian and Sofia, and dog Babette. He previously worked on processor design and R&D at Sun Microsystems and Samsung Research America, and has been involved in starting two companies, one of which (Skout) was later acquired by The Meet Group, Inc. In his current role at NVIDIA, he leads an engineering team working on CPU performance and power efficiency for systems-on-chips targeting the autonomous vehicle market.
As the deep learning (DL) field has exploded in the past few years, fueled by NVIDIA's GPU technology and CUDA, Dr. Ekman found himself in the middle of a company expanding beyond computer graphics and becoming a DL powerhouse. As part of that journey, he challenged himself to stay up to date with the most recent developments in the field. He considers himself an educator, and in the process of writing Learning Deep Learning (LDL), he partnered with the NVIDIA Deep Learning Institute (DLI), which offers hands-on training in AI, accelerated computing, and accelerated data science. He is thrilled about DLI's plans to add LDL to its existing portfolio of self-paced online courses, live instructor-led workshops, educator programs, and teaching kits.
Contents
Foreword by Dr. Anima Anandkumar xxi
Foreword by Dr. Craig Clawson xxiii
Preface xxv
Acknowledgments li
About the Author liii
Chapter 1: The Rosenblatt Perceptron 1
Example of a Two-Input Perceptron 4
The Perceptron Learning Algorithm 7
Limitations of the Perceptron 15
Combining Multiple Perceptrons 17
Implementing Perceptrons with Linear Algebra 20
Geometric Interpretation of the Perceptron 30
Understanding the Bias Term 33
Concluding Remarks on the Perceptron 34
Chapter 2: Gradient-Based Learning 37
Intuitive Explanation of the Perceptron Learning Algorithm 37
Derivatives and Optimization Problems 41
Solving a Learning Problem with Gradient Descent 44
Constants and Variables in a Network 48
Analytic Explanation of the Perceptron Learning Algorithm 49
Geometric Description of the Perceptron Learning Algorithm 51
Revisiting Different Types of Perceptron Plots 52
Using a Perceptron to Identify Patterns 54
Concluding Remarks on Gradient-Based Learning 57
Chapter 3: Sigmoid Neurons and Backpropagation 59
Modified Neurons to Enable Gradient Descent