Overview

This 3-hour course (video + slides) offers developers a quick introduction to deep-learning fundamentals, with some TensorFlow thrown into the bargain.

Deep learning (aka neural networks) is a popular approach to building machine-learning models that is capturing developer imagination. If you want to acquire deep-learning skills but lack the time, I feel your pain.

In university, I had a math teacher who would yell at me, “Mr. Görner, integrals are taught in kindergarten!” I get the same feeling today when I read most free online resources dedicated to deep learning. My kindergarten education was apparently severely lacking in “dropout lullabies,” “cross-entropy riddles,” and “relu-gru-rnn-lstm monster stories.” Yet these fundamental concepts are taken for granted by many, if not most, authors of online educational resources about deep learning.

To help more developers embrace deep-learning techniques without the need to earn a Ph.D., I have attempted to flatten the learning curve by building a short crash course (3 hours total). The course focuses on a few basic network architectures, including dense, convolutional, and recurrent networks, and on training techniques such as dropout and batch normalization. (It was initially presented at the Devoxx conference in Antwerp, Belgium, in November 2016.) By watching the recordings of the course and viewing the annotated slides, you can learn how to solve a couple of typical problems with neural networks and pick up enough vocabulary and concepts to continue your deep-learning self-education, for example by exploring TensorFlow resources. (TensorFlow is Google’s internally developed framework for deep learning, which has been growing in popularity since it was released as open source in 2015.)
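As a taste of where the course starts, here is a minimal sketch, not taken from the course materials, of the kind of model Chapter 1 builds: a single softmax layer recognizing MNIST handwritten digits. It is written with the tf.keras API for brevity; the course itself works with lower-level TensorFlow code, and the optimizer and training settings below are illustrative choices, not the course's.

# A minimal sketch (not from the course materials) of the kind of model Chapter 1
# builds: a single softmax layer classifying MNIST handwritten digits.
import tensorflow as tf

# MNIST: 28x28 grayscale images of handwritten digits, labels 0-9.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

# "The simplest neural network": flatten each image and apply one softmax layer.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),                        # 28x28 image -> 784-vector
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 digit classes
])

# Cross-entropy loss, one of the "ingredients" introduced in Chapter 2.
# Optimizer choice is an assumption made here, not the course's setting.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5, batch_size=100)
model.evaluate(x_test, y_test)

Even this one-layer model reaches roughly 92% accuracy on the MNIST test set; the later chapters add the extra layers, convolutions, and regularization needed to go well beyond that.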
Syllabus
Chapter 1: Introduction; handwritten digit recognition (the simplest neural network) (Video | Slides)
Chapter 2: Ingredients for a tasty neural network + TensorFlow basics (Video | Slides)
Chapter 3: More cooking tools: multiple layers, ReLU, dropout, learning rate decay (Video | Slides)
Chapter 4: Convolutional networks (Video | Slides)
Chapter 5: Batch normalization (Video | Slides)
Chapter 6: The high-level API for TensorFlow (Video | Slides)
Chapter 7: Recurrent neural networks (and fun with Shakespeare) (Video | Slides)
Chapter 8: Google Cloud Machine Learning platform (Video | Slides)
Reviews
4.0 rating, based on 2 Class Central reviews
- A Devoxx conference talk of 2.5 hours on deep learning. It covers a lot of the important concepts. Good for getting a first impression of deep learning, or as a refresher if you have done deep learning before. Don't expect to be fluent in DL or able to practice it after watching this video; you need far more study and exercise for that.