CHF 50.30 (20% off the list price of CHF 62.90)
Download available immediately
Understand how neural networks work and learn how to implement them using TensorFlow 2.0 and Keras. This new edition focuses on the fundamental concepts as well as the practical aspects of implementing neural networks and deep learning for your research projects.

The book is designed so that you can focus on the parts you are interested in. You will explore topics such as regularization, optimizers, optimization, metric analysis, and hyper-parameter tuning. In addition, you will learn the fundamental ideas behind autoencoders and generative adversarial networks.

All the code presented in the book is available in the form of Jupyter notebooks, which allow you to try out the examples and extend them in interesting ways. A companion online book provides the complete code for all the examples discussed in the book, plus additional material on TensorFlow and Keras. The notebooks can be opened directly in Google Colab (no local installation needed) or downloaded and run on your own machine.

You will:
- Understand the fundamental concepts of how neural networks work
- Learn the fundamental ideas behind autoencoders and generative adversarial networks
- Be able to try all the examples with complete code that you can expand for your own projects
- Have access to a complete online companion book with examples and tutorials

This book is for: readers with an intermediate understanding of machine learning, linear algebra, calculus, and basic Python programming.
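As a flavour of the workflow the description refers to, here is a minimal sketch (not taken from the book) of building and training a small Keras network with TensorFlow 2.x; the toy data and layer sizes are invented for illustration:

import numpy as np
from tensorflow import keras

# Toy data: learn y = 2x + 1 from 100 points.
x = np.linspace(-1.0, 1.0, 100).reshape(-1, 1)
y = 2.0 * x + 1.0

# A small feed-forward network: one hidden layer, one linear output.
model = keras.Sequential([
    keras.layers.Dense(8, activation="relu", input_shape=(1,)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=200, verbose=0)
print(model.predict(np.array([[0.5]])))  # should be close to 2.0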
Author
Umberto Michelucci is the founder and chief AI scientist of TOELT Advanced AI LAB LLC. He is an expert in numerical simulation, statistics, data science, and machine learning, with 15 years of practical experience in data warehousing, data science, and machine learning. His first book, Applied Deep Learning: A Case-Based Approach to Understanding Deep Neural Networks, was published in 2018. His second book, Convolutional and Recurrent Neural Networks: Theory and Applications, was published in 2019. He publishes his research regularly and gives lectures on machine learning and statistics at various universities. He holds a PhD in machine learning and is a Google Developer Expert in Machine Learning based in Switzerland.
Flap text
Build neural network models with TensorFlow 2.0, combining theory with workable implementations. This new edition focuses on the practical aspects of implementing deep-learning solutions with the TensorFlow 2.0 framework, and in particular with Keras and its rich Python ecosystem.
Written with a more pedagogical approach, this book is designed so that you can focus on the parts you're interested in. You'll also explore recent advances in the field, namely autoencoders and multitask learning, and a short chapter describes the differences between PyTorch and TensorFlow 2.0. The book also introduces key concepts of edge computing and automatic differentiation. TensorFlow Lite for edge computing is covered so that you can build models and deploy them on edge devices such as the Raspberry Pi or Google's Coral devices.
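The TensorFlow Lite deployment mentioned above boils down to converting a trained Keras model into a .tflite flatbuffer that an edge device can execute. A hedged sketch, using a trivial stand-in model:

import tensorflow as tf
from tensorflow import keras

# A trivial stand-in for a trained model.
model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])

# Convert to TensorFlow Lite and write the flatbuffer to disk.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("model.tflite", "wb") as f:
    f.write(converter.convert())

# On the device, the model runs through tf.lite.Interpreter
# (or the lighter tflite_runtime package on a Raspberry Pi).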
All the code presented in the book is available in the form of Jupyter notebooks and scripts, which allow you to try out the examples and extend them in interesting ways. A GitHub repository provides the complete code covered in this second edition.
You will:
Contents
Chapter 1: Optimization and Neural Networks
Subtopics: How to read the book; Introduction to the book; Overview of optimization; A definition of learning; Constrained vs. unconstrained optimization; Absolute and local minima; Optimization algorithms, with a focus on gradient descent; Variations of gradient descent (mini-batch and stochastic); How to choose the right mini-batch size

Chapter 2: Hands-on with One Single Neuron
Subtopics: A short introduction to matrix algebra; Activation functions (identity, sigmoid, tanh, swish, etc.); Implementation of one neuron in Keras; Linear regression with one neuron; Logistic regression with one neuron
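To illustrate the ground Chapters 1 and 2 cover together, here is a hypothetical single-neuron linear regression trained with mini-batch gradient descent in Keras (data and hyper-parameters are invented):

import numpy as np
from tensorflow import keras

rng = np.random.default_rng(42)
x = rng.uniform(-1.0, 1.0, size=(200, 1))
y = 3.0 * x + 0.5 + rng.normal(0.0, 0.05, size=(200, 1))

# One Dense unit with the identity activation is a single linear neuron.
neuron = keras.Sequential([keras.layers.Dense(1, input_shape=(1,))])
neuron.compile(optimizer=keras.optimizers.SGD(learning_rate=0.1), loss="mse")
neuron.fit(x, y, batch_size=32, epochs=100, verbose=0)  # mini-batch gradient descent

w, b = neuron.layers[0].get_weights()
print(w.item(), b.item())  # should approach 3.0 and 0.5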
Chapter 3: Feed-Forward Neural Networks
Subtopics: Matrix formalism; The softmax activation function; Overfitting and the bias-variance trade-off; How to implement a fully connected network with Keras; Multi-class classification with the Zalando dataset in Keras; Gradient descent variations in practice with a real dataset; Weight initialization; How to compare the complexity of neural networks; How to estimate the memory used by neural networks in Keras

Chapter 4: Regularization
Subtopics: An introduction to regularization; The l_p norm; l_2 regularization; Weight decay when using regularization; Dropout; Early stopping
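Chapter 4's tools map directly onto standard Keras features. A sketch, with made-up layer sizes, showing an l_2 kernel penalty, dropout, and early stopping:

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(10,),
                       kernel_regularizer=keras.regularizers.l2(1e-4)),  # l_2 penalty
    keras.layers.Dropout(0.3),  # randomly zeroes 30% of activations during training
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Stop training when the validation loss stops improving.
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                           restore_best_weights=True)
# model.fit(x_train, y_train, validation_split=0.2, callbacks=[early_stop])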
Chapter 5: Advanced Optimizers
Subtopics: Exponentially weighted averages; Momentum; RMSProp; Adam; Comparison of optimizers

Chapter 6: Hyper-Parameter Tuning
Subtopics: Introduction to hyper-parameter tuning; Black-box optimization; Grid search; Random search; Coarse-to-fine optimization; Sampling on a logarithmic scale; Bayesian optimization
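In Keras, comparing Chapter 5's optimizers amounts to swapping the optimizer object, and Chapter 6's log-scale sampling is one line of NumPy. A sketch (build_model is a hypothetical model factory):

import numpy as np
from tensorflow import keras

# Chapter 5: the optimizers, as Keras exposes them.
optimizers = {
    "sgd+momentum": keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
    "rmsprop": keras.optimizers.RMSprop(learning_rate=0.001),
    "adam": keras.optimizers.Adam(learning_rate=0.001),
}

# Chapter 6: random search with learning rates sampled on a logarithmic scale,
# i.e. uniformly in log10-space between 1e-4 and 1e-1.
rng = np.random.default_rng(0)
learning_rates = 10.0 ** rng.uniform(-4, -1, size=10)

# for name, opt in optimizers.items():
#     model = build_model()  # hypothetical model factory
#     model.compile(optimizer=opt, loss="mse")
#     model.fit(x, y, epochs=20)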
Chapter 7: Convolutional Neural Networks
Subtopics: Theory of convolution; Pooling and padding; Building blocks of a CNN; Implementation of a CNN with Keras

Chapter 8: A Brief Introduction to Recurrent Neural Networks
Subtopics: Introduction to recurrent neural networks; Implementation of an RNN with Keras
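A minimal sketch of the Chapter 7 and 8 building blocks, assembled from standard Keras layers with invented input shapes:

from tensorflow import keras

# Chapter 7: convolution, pooling, and padding as Keras layers.
cnn = keras.Sequential([
    keras.layers.Conv2D(16, kernel_size=3, padding="same", activation="relu",
                        input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D(pool_size=2),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])

# Chapter 8: a minimal recurrent network over sequences of 8 features.
rnn = keras.Sequential([
    keras.layers.SimpleRNN(32, input_shape=(None, 8)),  # (timesteps, features)
    keras.layers.Dense(1),
])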
Chapter 9: Autoencoders
Subtopics: Feed-forward autoencoders; The loss function in autoencoders; Reconstruction error; Application of autoencoders: dimensionality reduction; Application of autoencoders: classification with latent features; The curse of dimensionality; Denoising autoencoders; Autoencoders with CNNs
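A feed-forward autoencoder in the spirit of Chapter 9, with made-up dimensions; training against the inputs themselves makes the loss a reconstruction error:

from tensorflow import keras

inputs = keras.Input(shape=(784,))
latent = keras.layers.Dense(32, activation="relu")(inputs)       # encoder
outputs = keras.layers.Dense(784, activation="sigmoid")(latent)  # decoder
autoencoder = keras.Model(inputs, outputs)

autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
# autoencoder.fit(x_train, x_train, epochs=20)  # note: targets == inputs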
Chapter 10: Metric Analysis
Subtopics: Human-level performance and Bayes error; Bias; The metric analysis diagram; Training set overfitting; How to split your dataset; Unbalanced datasets: what can happen; K-fold cross-validation; Manual metric analysis: an example
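Chapter 10 lists k-fold cross-validation; one common pattern (not necessarily the book's) pairs Keras with scikit-learn's KFold for the splits. A sketch with toy data:

import numpy as np
from sklearn.model_selection import KFold

x = np.random.rand(100, 4)
y = np.random.rand(100)

for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(x):
    x_train, x_val = x[train_idx], x[val_idx]
    y_train, y_val = y[train_idx], y[val_idx]
    # model = build_model()  # hypothetical factory; fit on the fold, score on x_val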
Chapter 11: Generative Adversarial Networks (GANs)
Subtopics: …