Deep Learning – 2021


Welcome to the Deep Learning Lecture Materials

This lecture series introduces theoretical, historical, and practical aspects of working with deep neural networks. It consists of 14 lectures as well as bi-weekly practical assignments; the assignments are completed in small student groups and discussed in tutorial sessions that run in parallel to the main lectures. An important aspect of this course, which is taught in the context of a cognitive science bachelor's program, is its interdisciplinary approach to the topic. Rather than introducing deep learning from a purely engineering perspective, related topics in cognitive science and neuroscience are covered as well. This will enable students to appreciate the interdisciplinary roots of deep neural networks and their relation to biological learning and intelligence. Lecture topics move from MLPs to recent deep neural network architectures, covering gradient descent, optimisation techniques, convolutional networks, supervised and unsupervised training objectives, autoencoders, generative adversarial networks, recurrent network models, as well as attention and transformers. Where appropriate, topics are accompanied by case studies that highlight use cases in cognitive science and machine learning applications.


Course overview:

  1. Course organisation and how we got here [perceptrons, history and PDP models]
  2. Shallow neural networks [MLPs, cost functions, SGD and backpropagation]
  3. Optimisation 1 [ADAM, rprop, momentum, regularization]
  4. The convolutional trick: CNNs [CNNs, pooling layers, strides, case study: LeNet]
  5. Inferential robustness and adversarial attacks [case study: AlexNet with data parallelization]
  6. Optimisation 2 [dropout, batch-norm, group-norm, investigating learning curves]
  7. Understanding and Visualising CNNs [Feature visualisation and attribution techniques]
  8. Autoencoders and Embeddings [case study compression, StyleTransfer, Word2Vec]
  9. Generative Adversarial Networks [case study: cycleGAN]
  10. Recurrent networks [temporal unrolling, Elman networks, GRU, LSTM, R-CNN/BLT]
  11. Sequence models [seq2seq models for language translation]
  12. Attention in Deep Neural Networks [case study: image captioning systems]
  13. Transformers [case study: BERT/GPT3]
  14. Is deep learning brain-like? [Q&A and discussion session]


LECTURE 12: Attention in Deep Neural Networks
Slides: Lecture12_2021.pdf
Knowledge clips: Attention – motivation, Attention modules – under the hood, Attention – use cases
Workgroup assignment: Assignment 6

Knowledge clip 1/3: Motivation

Knowledge clip 2/3: Under the hood

Knowledge clip 3/3: Use cases
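
For readers who would like a concrete picture of what the "under the hood" clip covers, below is a minimal sketch of scaled dot-product attention in plain NumPy. It is not taken from the lecture code or assignment; the function name, toy dimensions, and random inputs are purely illustrative.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention.

    Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v).
    Returns the attended values (n_queries, d_v) and the attention weights.
    """
    d_k = Q.shape[-1]
    # Similarity between each query and each key, scaled by sqrt(d_k)
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the keys turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted sum of the values
    return weights @ V, weights

# Toy example: 2 queries attending over 4 key/value pairs
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 16))
out, attn = scaled_dot_product_attention(Q, K, V)
print(out.shape, attn.shape)  # (2, 16) (2, 4)

The same mechanism underlies the image-captioning case study in this lecture and the transformer architectures of Lecture 13, where queries, keys, and values are learned linear projections of the layer inputs.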