Course detail

Machine Learning

FSI-VSC
Acad. year: 2026/2027

The course provides an introduction to the theory and methods of machine learning, focusing on their application in solving classification, regression, and clustering tasks.

Language of instruction

Czech

Number of ECTS credits

5

Mode of study

Not applicable.

Entry knowledge

Basic knowledge of statistics, optimization, and programming is expected.

Rules for evaluation and completion of the course

Knowledge and skills are assessed by means of a course credit and an examination. Credit requirements: completion of the assigned tasks. Attendance at lectures is recommended; attendance at practical sessions is compulsory. Practical sessions that a student cannot attend in the regular term may be made up in a substitute term. The examination is oral and covers the entire course material.

Aims

The aim of the course is to familiarize students with machine learning methods and their applications in classification, regression, and clustering. Students will learn about both parametric and non-parametric classification and regression models, as well as key concepts such as error metrics, regularization, cross-validation, gradient descent, and modern approaches, including boosting and Gaussian mixture models. The course bridges theory and practice, focusing on the design and implementation of machine learning models.
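To make concepts such as regularization, cross-validation, and error metrics concrete, the following is a minimal sketch, assuming Python with NumPy and scikit-learn (the course does not prescribe a specific toolchain), that compares regularization strengths of a ridge regression model by cross-validated mean squared error on synthetic data.

```python
# Minimal illustrative sketch (assumption: Python + scikit-learn; the course
# does not prescribe a toolchain): regularization and cross-validation on a
# synthetic regression task.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                            # 200 samples, 5 features
y = X @ np.array([1.5, -2.0, 0.0, 0.5, 3.0]) + rng.normal(scale=0.5, size=200)

for alpha in (0.01, 1.0, 100.0):                         # regularization strength
    model = Ridge(alpha=alpha)
    scores = cross_val_score(model, X, y, cv=5,          # 5-fold cross-validation
                             scoring="neg_mean_squared_error")
    print(f"alpha={alpha:6.2f}  mean CV MSE={-scores.mean():.3f}")
```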

Study aids

Not applicable.

Prerequisites and corequisites

Not applicable.

Basic literature

ALPAYDIN, Ethem. Introduction to machine learning. Third edition. Adaptive computation and machine learning. Cambridge: The MIT Press, [2014]. ISBN 978-0-262-02818-9. (EN)
BISHOP, Christopher M. Pattern recognition and machine learning. Information science and statistics. New York: Springer, c2006. ISBN 978-0-387-31073-2. (EN)
ŠÍMA, J. and NERUDA, R. Theoretical Questions of Neural Networks. MATFYZPRESS, 1996. ISBN 80-85863-18-9. (CS)

Recommended reading

KOSKO, Bart. Neural Networks and Fuzzy Systems. Prentice Hall, 1992. (EN)
BISHOP, Christopher M. Pattern Recognition and Machine Learning. Springer Science+Business Media, 2006. ISBN 0-387-31073-8. (EN)

Classification of course in study plans

  • Programme N-AIŘ-P Master's, 1st year of study, summer semester, compulsory

Type of course unit

 

Lecture

26 hours, optional

Teacher / Lecturer

Syllabus

  1. Introduction to Machine Learning: the machine learning model life cycle, learning paradigms and task types, model generalization, cross-validation.
  2. Data Preprocessing: missing values, outliers, normalization/standardization, categorical encoding, variable transformations, basic feature selection.
  3. Regression: linear and polynomial regression, the least squares method, gradient descent, loss functions, performance metrics, redundant variables (multicollinearity), regularization (a gradient-descent sketch follows this list).
  4. Linear Regression-Based Classification, Logistic and Regularized Logistic Regression: basic loss functions and performance metrics for classifiers, the impact of class imbalance on training, classifier performance and evaluation.
  5. Support Vector Machines for Classification and Regression, Kernel Functions.
  6. Perceptron, Multilayer Perceptron, Extreme Learning Machine, Forward and Backpropagation, Gradient Descent Variants, Methods for Reducing Overfitting.
  7. Tree-Based Methods for Classification and Regression: decision trees, splitting criteria, pruning, ensemble methods.
  8. Memory-Based Methods for Classification and Regression: k-nearest neighbors, distance metrics, choice of k and weighting.
  9. Introduction to Probability Theory, Bayesian Classifier, Gaussian Discriminant Analysis, Naive Bayes Classifier.
  10. Gaussian Mixture Models, EM Algorithm.
  11. Clustering: k-means clustering, Gaussian mixture–based clustering, density-based clustering.
  12. Dimensionality Reduction, Boosting.
  13. Review.
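As a companion to lecture 3, the following is a minimal sketch, assuming Python with NumPy (not prescribed by the course), of batch gradient descent for L2-regularized (ridge) linear regression on synthetic data.

```python
# Minimal sketch (assumption: Python + NumPy) of batch gradient descent for
# L2-regularized linear regression, as discussed in lecture 3.
import numpy as np

def ridge_gradient_descent(X, y, lam=0.1, lr=0.05, n_iter=500):
    """Minimize (1/2n)||Xw - y||^2 + (lam/2)||w||^2 by gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n + lam * w   # gradient of the regularized loss
        w -= lr * grad                           # descent step
    return w

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])  # bias column + 2 features
y = X @ np.array([0.5, 2.0, -1.0]) + rng.normal(scale=0.1, size=100)
# Estimates are close to [0.5, 2.0, -1.0], shrunk slightly toward zero by the L2 penalty.
print(ridge_gradient_descent(X, y))
```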

Computer-assisted exercise

26 hours, compulsory

Teacher / Lecturer

Syllabus

  1. Introduction to the programming environment.
  2. Data preprocessing.
  3. Linear and polynomial regression using least squares and regularized least squares.
  4. Classification of linearly separable data.
  5. Classification of nonlinearly separable data.
  6. Using multilayer perceptrons for classification and regression tasks.
  7. Using decision trees for classification and regression tasks.
  8. Using the k-nearest neighbors method for classification and regression tasks (a minimal sketch follows this list).
  9. Bayesian classifier, Gaussian discriminant analysis, and Naive Bayes classifier.
  10. Using Gaussian mixture models for classification and regression tasks.
  11. Data clustering.
  12. Dimensionality reduction and boosting.
  13. Final assessment.
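As a companion to exercise 8, here is a minimal sketch, assuming Python with NumPy (the course does not prescribe a toolchain), of a k-nearest-neighbors classifier using Euclidean distances and majority voting.

```python
# Minimal sketch (assumption: Python + NumPy) of a k-nearest-neighbors
# classifier, as used in exercise 8.
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Classify each query point by majority vote among its k nearest training points."""
    preds = []
    for x in X_query:
        dists = np.linalg.norm(X_train - x, axis=1)        # Euclidean distances
        nearest = np.argsort(dists)[:k]                     # indices of the k closest points
        labels, counts = np.unique(y_train[nearest], return_counts=True)
        preds.append(labels[np.argmax(counts)])             # majority vote
    return np.array(preds)

# Toy usage: two well-separated Gaussian blobs.
rng = np.random.default_rng(2)
X_train = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)
X_query = np.array([[0.5, 0.5], [4.8, 5.2]])
print(knn_predict(X_train, y_train, X_query, k=5))          # expected: [0 1]
```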