Course detail

Stochastic Modelling

FSI-S2M-A, Acad. year: 2026/2027

The course focuses on Markov Chain Monte Carlo (MCMC) algorithms.
The first part deals with the fundamentals of the theory of Markov chains with continuous (real-valued) state spaces and the existence of their stationary distributions.
Next, it describes the derivation of algorithms that implement these chains and analyzes their convergence.
The final part presents examples of MCMC applications in data analysis and machine learning.

Language of instruction

English

Number of ECTS credits

3

Mode of study

Not applicable.

Entry knowledge

Probability theory and mathematical statistics, mathematical and functional analysis.

Rules for evaluation and completion of the course

Preparation of a semester project and an oral examination.

Aims

To introduce students to the fundamentals of the theory of Markov chains with a continuous state variable and to their use for sample generation. Students will gain an overview of applications of this theory in Bayesian estimation and in typical examples from engineering practice.

Study aids

Not applicable.

Prerequisites and corequisites

Not applicable.

Basic literature

Hanada, M., Matsuura, S. MCMC from Scratch: A Practical Introduction to Markov Chain Monte Carlo. Springer Nature, 2022. (EN)
Geyer, C. J. Bayesian Inference via Markov Chain Monte Carlo (MCMC). University of Minnesota, 2025. (EN)
Geyer, C. J. Markov Chain Monte Carlo: Lecture Notes. University of Minnesota, 2005. (EN)

Recommended reading

Brooks, S., Gelman, A., Jones, G., and Meng, X.-L. Handbook of Markov Chain Monte Carlo. Chapman and Hall/CRC, 2011. (EN)
Gamerman, D., Lopes, H. F. Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference. Chapman & Hall/CRC Texts in Statistical Science, CRC Press, 2006. (EN)

Classification of course in study plans

  • Programme N-MAI-A, Master's, 1st year of study, winter semester, elective

Type of course unit


Exercise

26 hours, compulsory

Teacher / Lecturer

Syllabus

Probability measure, Bayesian estimation, motivation for using MCMC
Markov chains with discrete state space (ergodic and reversible chains)
Markov chains with continuous state space
Stationary distribution of a Markov chain
Metropolis and Metropolis-Hastings algorithms (an illustrative sketch follows this syllabus)
Effect of proposal density, rejection criterion, autoregressive function, Gibbs algorithm
Evaluation of MCMC algorithm results
Hamilton's equations, Hamiltonian Monte Carlo, parameter selection in HMC, the No-U-Turn Sampler (NUTS)
Bayesian regression, Bayesian neural networks
Natural language processing (Latent Dirichlet Allocation)
Bayesian inverse problem (parameter estimation in differential equations)
Graph problems, combinatorial problems, the traveling salesman problem
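
To illustrate the kind of algorithm covered in the syllabus, the following is a minimal sketch of a random-walk Metropolis sampler in Python. It is not course material: the Gaussian-mixture target density, the step size, and the burn-in length are illustrative assumptions chosen only to show the accept/reject mechanism and the effect of the proposal density.

```python
import numpy as np

def log_target(x):
    # Unnormalised log-density of an illustrative target:
    # an equal-weight mixture of N(-2, 1) and N(2, 1).
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def metropolis(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis with a symmetric Gaussian proposal."""
    rng = np.random.default_rng(seed)
    x = x0
    log_p = log_target(x)
    samples = np.empty(n_samples)
    accepted = 0
    for i in range(n_samples):
        # Propose a move from the symmetric proposal density.
        x_new = x + step * rng.normal()
        log_p_new = log_target(x_new)
        # Accept with probability min(1, pi(x_new)/pi(x));
        # the proposal ratio cancels because the proposal is symmetric.
        if np.log(rng.uniform()) < log_p_new - log_p:
            x, log_p = x_new, log_p_new
            accepted += 1
        samples[i] = x
    return samples, accepted / n_samples

if __name__ == "__main__":
    chain, acc_rate = metropolis(log_target, x0=0.0, n_samples=50_000, step=2.5)
    burn_in = 5_000  # discard early samples before the chain reaches stationarity
    print(f"acceptance rate: {acc_rate:.2f}")
    print(f"mean of the stationary distribution (estimate): {chain[burn_in:].mean():.3f}")
```

Varying the `step` parameter shows the trade-off discussed in the syllabus: small steps give high acceptance but slowly mixing chains, large steps give frequent rejections; the Metropolis-Hastings generalisation replaces the symmetric proposal with an arbitrary proposal density and corrects the acceptance ratio accordingly.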