Shine a spotlight into the deep learning "black box". This comprehensive guide reveals the mathematical and architectural concepts behind deep learning models, so you can customize, maintain, and explain them more effectively.
Inside Math and Architectures of Deep Learning you will find:
- Math, theory, and programming principles side by side
- Linear algebra, vector calculus, and multivariate statistics for deep learning
- The structure of neural networks
- Implementing deep learning architectures with Python and PyTorch
- Troubleshooting underperforming models
- Working code samples in downloadable Jupyter notebooks
The mathematical paradigms behind deep learning models typically begin as hard-to-read academic papers that leave engineers in the dark about how those models actually function.
Math and Architectures of Deep Learning bridges the gap between theory and practice, laying out the math of deep learning side by side with practical implementations in Python and PyTorch. Written by deep learning expert Krishnendu Chaudhury, this book lets you peer inside the "black box" to understand how your code works and helps you comprehend cutting-edge research that you can turn into practical applications.
Foreword by Prith Banerjee.
Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.
About the technology
Discover what's going on inside the black box! To work with deep learning you'll have to choose the right model, train it, preprocess your data, evaluate performance and accuracy, and deal with uncertainty and variability in the outputs of a deployed solution. This book takes you systematically through the core mathematical concepts you'll need as a working data scientist: vector calculus, linear algebra, and Bayesian inference, all from a deep learning perspective.
About the book
Math and Architectures of Deep Learning lays out the math, theory, and programming principles of deep learning models side by side, then puts them into practice with well-annotated Python code. You'll progress from algebra, calculus, and statistics all the way to state-of-the-art deep learning architectures taken from the latest research.
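To give a flavor of that side-by-side style, here is a minimal sketch (not taken from the book) of forward propagation and backpropagation in PyTorch, with the underlying math noted in comments; the layer sizes and data are illustrative assumptions.

import torch
import torch.nn as nn

# Two-layer network: scores = W2 * relu(W1 x + b1) + b2
model = nn.Sequential(
    nn.Linear(4, 16),   # affine map h = W1 x + b1, with W1 in R^{16x4}
    nn.ReLU(),          # elementwise nonlinearity relu(h) = max(h, 0)
    nn.Linear(16, 3),   # affine map to 3 class scores (logits)
)

x = torch.randn(8, 4)                          # batch of 8 four-dimensional inputs
targets = torch.randint(0, 3, (8,))            # random class labels, for illustration only
logits = model(x)                              # forward propagation
loss = nn.CrossEntropyLoss()(logits, targets)  # cross-entropy loss over the batch
loss.backward()                                # backpropagation: gradient of the loss w.r.t. every parameter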
What's inside
- The core design principles of neural networks
- Implementing deep learning with Python and PyTorch
- Regularizing and optimizing underperforming models
About the reader
Readers need to know Python and the basics of algebra and calculus.
About the author
Krishnendu Chaudhury is co-founder and CTO of the AI startup Drishti Technologies. He previously spent a decade each at Google and Adobe.
Table of Contents
1 An overview of machine learning and deep learning
2 Vectors, matrices, and tensors in machine learning
3 Classifiers and vector calculus
4 Linear algebraic tools in machine learning
5 Probability distributions in machine learning
6 Bayesian tools for machine learning
7 Function approximation: How neural networks model the world
8 Training neural networks: Forward propagation and backpropagation
9 Loss, optimization, and regularization
10 Convolutions in neural networks
11 Neural networks for image classification and object detection
12 Manifolds, homeomorphism, and neural networks
13 Fully Bayes model parameter estimation
14 Latent space and generative modeling, autoencoders, and variational autoencoders
Appendix A
Author: Krishnendu Chaudhury
Binding Type: Paperback
Publisher: Manning Publications
Published: 03/26/2024
Pages: 552
Weight: 2.05 lbs
Size: 9.20h x 7.30w x 1.30d (inches)
ISBN: 9781617296482
About the author
Krishnendu Chaudhury is a deep learning and computer vision expert with decade-long stints at both Google and Adobe Systems. He is presently CTO and co-founder of Drishti Technologies. He has a PhD in computer science from the University of Kentucky at Lexington.