About this course

This is a very different kind of course, taught in a very different way. We have spent as much time studying the research into effective education techniques as we have studying the research into deep learning. One of the biggest differences you'll see as a result is that we teach "top down" rather than "bottom up": for instance, you'll learn how to use deep learning to solve your problems in week 1, but will only start to learn why it works in week 2! And you'll spend a lot more time learning how to write effective code and use effective processes than you will on mathematical formalisms. For full details on the teaching approach, please see our article "A unique path to deep learning expertise". And for more information about some of the great education researchers who have inspired and taught us, read our article "Providing a Good Education in Deep Learning".

In this course we don't just show how to create "OK" models; we show how to create state-of-the-art models, and we use transparent benchmarks to do so. Wherever possible, we benchmark our models against the top performers in Kaggle competitions, so you'll find that we use Kaggle datasets a lot. This approach has several benefits. It gives the best possible benchmark, since Kaggle competitions always result in accuracy far beyond the best academic papers. It means we're using well-studied datasets, so we can learn from the many blog and forum posts around the internet that examine them from every angle. And perhaps most importantly, it enforces rigorous machine learning practice: keeping well-defined training, validation, and test sets. Practicing on Kaggle competitions is great experience for creating and running effective machine learning processes in your own organization.

We assume that everyone taking this course has at least one year of coding experience. The course uses Python as the teaching language, so if you don't already know Python we assume that you'll spend the time to learn it; experienced coders should find Python quite an easy language to pick up. Help getting started with the various data science Python libraries we use is available on the course wiki.

Special thanks to the many community members who helped in so many ways, and in particular: Bradley Kenstler, who contributed widely to the notes and notebooks whilst interning at fast.ai; and Lin Crampton, who transcribed all the videos, which helps both non-native speakers and those with hearing difficulties.

You will learn how to:

  • Set up your own GPU server in the cloud
  • Use the Keras library in Python to train and run deep learning models (see the sketch after this list)
  • Build, debug, and visualize a state-of-the-art convolutional neural network (CNN) for recognizing images
  • Get great results even from small datasets, by using transfer learning and semi-supervised learning
  • Understand the components of a neural network, including activation functions, dense and convolutional layers, and optimizers
  • Build, debug, and visualize a recurrent neural network (RNN) for natural language processing (NLP), including creating a Gated Recurrent Unit (GRU) RNN from scratch in Theano
  • Create and use specialized architectures for dealing with localization, multi-scale images, etc.
  • Recognize and deal with over-fitting, by using data augmentation, dropout, batch normalization, and similar techniques
  • Build state-of-the-art recommendation systems using neural-network based collaborative filtering

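To give a flavour of how hands-on the lessons are, here is a minimal sketch, assuming a standard Keras installation, of the kind of workflow covered in the list above: define a small CNN, compile it, and fit it with a clearly separated held-out set. The dataset, architecture, and hyperparameters here are illustrative choices for this page, not the ones used in the course notebooks.

```python
# A minimal, illustrative sketch (not from the course notebooks):
# a small CNN trained on MNIST with a held-out evaluation set.
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.utils import to_categorical

# Load a well-studied dataset and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255
x_test = x_test.reshape(-1, 28, 28, 1).astype('float32') / 255
y_train, y_test = to_categorical(y_train, 10), to_categorical(y_test, 10)

# A small convolutional network; dropout is one of the over-fitting
# remedies the course covers
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    MaxPooling2D(),
    Flatten(),
    Dropout(0.5),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train briefly, measuring accuracy on data the model never trains on
model.fit(x_train, y_train, epochs=1, batch_size=64,
          validation_data=(x_test, y_test))
```

Even in this toy form the pattern is the one the course drills: a few lines of code to define and fit a model, always evaluated against data it was not trained on.
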
More information about the syllabus is available in this post (although please note it was originally written for our in-person certificate students).

So, are you ready to get started on your deep learning journey? Remember: you'll need to set aside around 10 hours per week for the next 7 weeks to tackle this material effectively. Make a commitment to yourself now to put in the time, in order to get the results you want.


The team

Jeremy Howard

Founding researcher

Deep learning researcher and educator. Faculty at USF and Singularity University. Previously - CEO of Enlitic; President of Kaggle; CEO of Fastmail.

Rachel Thomas

Founding researcher

Deep learning researcher and educator. Researcher at USF Data Institute. Previously - Duke Math PhD; Quant; Data Scientist at Uber; Instructor at Hackbright.