TTIC 31230: Fundamentals of Deep Learning

David McAllester

Winter 2019

This year the course will not involve programming assignments or class projects. There will be problem sets, but the grade will be based entirely on exams, including a final. Exams will include problems sampled from the problem sets as well as new problems. I will generally give permission to take the class, but prospective students may want to look at the first lecture slides and the associated problems to get a sense of the level of mathematical maturity assumed.

The course will involve reading and writing pseudo-code corresponding to code in frameworks such as PyTorch. This is analogous to the use of pseudo-code in an algorithms class, as distinct from actual programming in a programming class.
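
For concreteness, the following is a minimal sketch, written in actual PyTorch rather than pseudo-code, of the kind of framework code that such pseudo-code corresponds to. It is illustrative only and is not taken from the course materials.

  import torch
  import torch.nn as nn

  # A two-layer perceptron, a cross-entropy loss, and one SGD step.
  model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
  loss_fn = nn.CrossEntropyLoss()
  optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

  x = torch.randn(32, 784)          # a minibatch of 32 random inputs
  y = torch.randint(0, 10, (32,))   # random class labels

  optimizer.zero_grad()             # clear previously accumulated gradients
  loss = loss_fn(model(x), y)       # forward pass builds the computation graph
  loss.backward()                   # back-propagation through the graph
  optimizer.step()                  # one gradient update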

This course covers the topics listed below. Most topics are relevant to most applications; applications to natural language processing, computer vision, speech recognition, computational biology, and computational chemistry will be integrated into the presentation of the general methods.

  1. Information theory: entropy, cross-entropy, KL-divergence, mutual information (standard definitions are sketched just after this list).
  2. Deep learning frameworks: computation graphs, back-propagation, minibatching.
  3. Basic Architectures: multi-layer perceptrons, convolutional neural networks, Einstein notation.
  4. More advanced architectures: gated RNNs (LSTMs), ResNet, attention.
  5. Stochastic gradient descent (SGD): standard variations (Vanilla, Adam, RMSProp), minibatch scaling laws, second order methods, Hessian-vector products, SGD-friendly initialization.
  6. Generalization and Regularization: PAC-Bayesian generalization bounds, L2 regularization (shrinkage), dropout.
  7. Autoencoders: rate-distortion autoencoding, variational autoencoding (VAEs) and the evidence lower bound (the ELBO), vector quantized VAEs (VQ-VAE).
  8. Deep graphical models: expectation maximization (EM), expectation gradient (EG), connectionist temporal classification (CTC), various EG approximations.
  9. Generative Adversarial Networks (GANs): Adversarial optimization, Jensen-Shannon divergence, mode collapse, Wasserstein GANs, progressive GANs.
  10. Deep Reinforcement Learning: The REINFORCE algorithm, policy-gradient theorems, DQN, A3C, AlphaZero.
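
For a sense of the notation in topic 1, the standard definitions for discrete distributions P and Q and random variables X and Y (written in LaTeX, and not excerpted from the course slides) are:

  H(P)        = -\sum_x P(x) \log P(x)                                   (entropy)
  H(P, Q)     = -\sum_x P(x) \log Q(x)                                   (cross-entropy)
  KL(P \| Q)  = \sum_x P(x) \log \frac{P(x)}{Q(x)} = H(P, Q) - H(P)      (KL-divergence)
  I(X; Y)     = KL(P_{X,Y} \| P_X P_Y)                                   (mutual information)
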
Exam Schedule:
  1. Tuesday, January 15, 10% of grade, class 3
  2. Tuesday, January 29, 20% of grade, class 7
  3. Tuesday, February 12, 20% of grade, class 11
  4. Tuesday, February 26, 20% of grade, class 15
  5. Final, Tuesday, March 19, 1:30-3:30, TTIC 526B, 30% of grade
Office Hours: Mondays 9:00-11:00, TTIC 530, and 1:00-3:00, TTIC 435

Lecture Slides and Course Material (under development; please refresh for the latest version):

  1. The Fundamental Equations of Deep Learning
  2. Back-Propagation and Frameworks
  3. The Educational Framework (EDF) written in Python/NumPy
  4. Convolutional Neural Networks (CNNs)
  5. Controlling Gradients: Initialization, Batch Normalization, ResNet and Gated RNNs
  6. Language Modeling, Machine Translation and Attention
  7. First Order Stochastic Gradient Descent (SGD)
  8. Regularization
  9. Rate-Distortion Autoencoders (RDAs)
  10. Variational Autoencoders (VAEs) and Noisy Channel RDAs
  11. Generative Adversarial Networks (GANs)
  12. Pretraining
  13. Reinforcement Learning (RL)
  14. AlphaZero
  15. Deep Graphical Models
  16. Connectionist Temporal Classification (CTC)
  17. Gradients as Dual Vectors, Hessian-Vector Products, and Information Geometry
  18. The Black Box Problem
  19. Algorithms for Unfriendly Graphical Models
  20. The Quest for Artificial General Intelligence (AGI)