Sequence Modelling

This module deals with techniques to model and predict temporally varying data, such as financial time series, weather data, and audio and video signals. It begins with classical state-space models, both linear models and hidden Markov models, then moves on to recurrent neural networks and the most common modular ways of constructing them, e.g. with long short-term memory (LSTM) gates. Concurrent neural network architectures, such as the Transformer, which process the symbols of an entire sequence concurrently (or in a series of concurrent passes), will also be covered, and the module ends with methods that combine the above.
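As a rough illustration of the gated recurrences mentioned above, the following is a minimal sketch of a single LSTM step in NumPy; the variable names, dimensions and random initialisation are illustrative assumptions, not taken from the module materials.

  import numpy as np

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  def lstm_step(x_t, h_prev, c_prev, W, U, b):
      # Pre-activations for all four gates from one affine map of input and previous hidden state.
      z = W @ x_t + U @ h_prev + b
      i, f, o, g = np.split(z, 4)
      i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input, forget and output gates
      g = np.tanh(g)                                 # candidate cell update
      c_t = f * c_prev + i * g                       # long-term (cell) state
      h_t = o * np.tanh(c_t)                         # short-term (hidden) state
      return h_t, c_t

  # Run a random length-10 sequence of 3-dimensional inputs through one cell.
  rng = np.random.default_rng(0)
  d_in, d_hid = 3, 5
  W = rng.normal(size=(4 * d_hid, d_in))
  U = rng.normal(size=(4 * d_hid, d_hid))
  b = np.zeros(4 * d_hid)
  h, c = np.zeros(d_hid), np.zeros(d_hid)
  for x_t in rng.normal(size=(10, d_in)):
      h, c = lstm_step(x_t, h, c, W, U, b)
  print(h)

The forget, input and output gates control how much of the long-term cell state is kept, updated and exposed at each time step, which is what lets such networks retain information over long sequences.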

Upon completion of the module the student will be able to:

  • practically apply deep learning techniques to model temporally varying data such as financial time series, audio or video signals;
  • formulate the difference between state-space and auto-regressive modelling of temporal data (the two formulations are sketched after this list);
  • analyse, compare and synthesise different approaches to sequence modelling, from a practical and theoretical point of view;
  • discuss current problems and state-of-the-art approaches pertaining to sequence modelling, from an ML and AI perspective.
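
As a rough sketch of the two formulations referred to above (the notation is chosen here for illustration and is not taken from the module materials): a state-space model posits a latent state x_t that evolves over time and is observed only indirectly,

  x_t = f(x_{t-1}, \epsilon_t),    y_t = g(x_t, \nu_t),

whereas an auto-regressive model predicts each observation directly from a window of past observations,

  y_t = h(y_{t-1}, \ldots, y_{t-p}) + \epsilon_t.

The linear-Gaussian state-space model and the hidden Markov model are the two classical special cases of the first formulation covered in the module.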