Probabilistic modelling and reasoning form a cornerstone of modern ML and AI. The module will recap relevant concepts from probability theory, including Bayes’ theorem and conditional independence. It will then cover marginalisation, sum-product decomposition, Markov blankets, classic hidden Markov models, expectation-maximisation, probabilistic graphical models and data completion. The module will also cover basic information theory, including entropy, mutual information and Shannon’s theorem.
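As a flavour of the material, two of the recapped concepts can be illustrated in a few lines of code: Bayes' theorem with the evidence obtained by marginalisation, and Shannon entropy. This is a minimal sketch with invented numbers, not course material.

```python
import math

# Bayes' theorem: P(H | E) = P(E | H) P(H) / P(E), where the evidence
# P(E) is obtained by marginalising over H (sum rule). All probabilities
# below are made up for illustration.
p_h = 0.01              # prior P(H)
p_e_given_h = 0.9       # likelihood P(E | H)
p_e_given_not_h = 0.05  # likelihood P(E | not H)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # marginalisation
posterior = p_e_given_h * p_h / p_e
print(round(posterior, 4))  # ≈ 0.1538: a weak prior tempers strong evidence

# Shannon entropy H(X) = -sum_x p(x) log2 p(x), in bits.
def entropy(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

print(entropy([0.5, 0.5]))  # 1.0 bit for a fair coin
```

Note how the posterior stays modest despite a 90% likelihood, because the prior is small; reasoning of this kind, made systematic via graphical models and inference algorithms, is the core of the module.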
Upon completion of the module the student will be able to:
- formulate, interpret and combine various probabilistic and information theoretic concepts common in ML and AI approaches;
- design and implement relevant probabilistic solutions to common data-centric problems, where modelling, reasoning and performing inference under uncertainty are key requirements;
- judge probabilistic approaches to ML problem solving in terms of their strengths and limitations;
- apply probabilistic modelling and reasoning methods to their own research work.