Original title in Czech: Informační technologie
Faculty: FIT
Abbreviation: DIT
Acad. year: 2026/2027
Type of study programme: Doctoral
Study programme code: P0613D140028
Degree awarded: Ph.D.
Language of instruction: Czech
Accreditation: 8.12.2020 - 8.12.2030
Profile of the programme
Academically oriented
Mode of study
Combined study
Standard study length
4 years
Programme supervisor
prof. Ing. Lukáš Sekanina, Ph.D.
Doctoral Board
Chairman: prof. Ing. Lukáš Sekanina, Ph.D.
Internal councillors: prof. Dr. Ing. Pavel Zemčík, dr. h. c.; prof. Dr. Ing. Zbyněk Raida; prof. RNDr. Josef Šlapal, CSc.; prof. Ing. Pavel Václavek, Ph.D.; prof. Ing. Tomáš Hruška, CSc.; prof. RNDr. Alexandr Meduna, CSc.; doc. Dr. Ing. Petr Hanáček; prof. Dr. Ing. Jan Černocký; prof. Ing. Adam Herout, Ph.D.; doc. Ing. Jan Kořenek, Ph.D.; prof. Ing. Tomáš Vojnar, Ph.D.
External councillor: prof. RNDr. Jiří Barnat, Ph.D.
Fields of education
Study aims
The goal of the doctoral degree programme is to provide outstanding graduates of the master degree programme with a specialised university education of the highest level in selected fields of computer science and information technology, especially information systems, computer-based systems and computer networks, computer graphics and multimedia, and intelligent systems. The education obtained within this degree programme also comprises training and certification for scientific work.
Graduate profile
Profession characteristics
FIT graduates in general, and FIT doctoral graduates in particular, have no problem finding employment in scientific, pedagogical, or management positions, both in the Czech Republic and abroad.
Fulfilment criteria
The requirements that doctoral students have to fulfil are given by their individual study plans, which specify the courses they have to complete, their expected study visits and active participation in scientific conferences, and their minimum pedagogical activities within the bachelor and master degree programmes of the faculty. A successful completion of the doctoral studies is conditional on the following:
Study plan creation
The rules are determined by the dean's directives for preparing the individual study plan of a doctoral student. The plan is based on the topic of the student's future dissertation thesis and must be approved by the board of the branch.
Availability for the disabled
Brno University of Technology provides studies for persons with health disabilities according to Section 21(1)(e) of Act No. 111/1998 Coll., on Higher Education Institutions and on Amendments and Supplements to Other Acts (the Higher Education Act), as amended. In accordance with the requirements arising from Government Regulation No. 274/2016 Coll., on standards for accreditation in higher education, it provides services for study applicants and students with specific needs within the scope and in the form corresponding to the specification stated in Annex III to the Rules for the allocation of financial contributions and funding to public universities by the Ministry of Education, Youth and Sports, which specifies the financing of additional costs of studies for students with specific needs.

Services for students with specific needs at BUT are provided through a specialized workplace, the Alfons counselling centre, which is part of the BUT Lifelong Learning Institute, Student counselling section.

The counselling centre's activities and the rules for making studies accessible are guaranteed by the university through the valid Rector's Directive 11/2017 on the status of study applicants and students with specific needs at BUT. This internal standard guarantees minimal standards of the provided services.

The services of the counselling centre are offered to all study applicants and students with any type of health disability stated in the Methodological Standard of the Ministry of Education, Youth and Sports.
What degree programme types may have preceded
The study programme builds on both the ongoing follow-up Master's programme in Information Technology and the new follow-up Master's programme in Information Technology and Artificial Intelligence.

Students can also, according to their needs and outside their formalized studies, take courses and training related to the methodology of scientific work, publishing and citation skills, ethics, pedagogy, and soft skills organized by BUT or other institutions.
Issued topics of Doctoral Study Program
Practical decision-making in BDI systems is traditionally based on executing individual intentions separately, which often leads to redundant action execution even in cases where different intentions share common parts.
The goal of this work is to extend the practical decision-making of a BDI agent with the ability to identify and exploit shared parts of intentions, such that non-exclusive (shared) actions, which only need to be executed once, can simultaneously contribute to the achievement of multiple intentions.
The work will build on the AgentSpeak(L) interpretation framework and on an approach based on late binding of variables, proposed and experimentally validated in prior research. In this approach, during plan interpretation the agent maintains a context in the form of a set of possible variable substitutions accumulated so far, while the final binding of variables is deferred as long as possible.
This mechanism enables unification of parts of different intentions based on a shared substitution context and the execution of their common action prefixes through a single execution. On this basis, the objective of the work is to design and implement an extended decision-making mechanism that identifies and executes shared parts of intentions while preserving the correctness of BDI execution.
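The prefix-sharing idea described above can be illustrated with a minimal sketch. This is not AgentSpeak(L) code; intentions are simplified to plain action sequences and all names (`shared_prefix`, `execute`, the `goto`/`pick` actions) are hypothetical, chosen only to show how a shared, non-exclusive action can be executed once while contributing to several intentions.

```python
# Minimal sketch: two intentions that share a common action prefix.
# A shared (non-exclusive) action is executed only once, but counts
# towards the achievement of every intention that contains it.

def shared_prefix(intention_a, intention_b):
    """Return the longest common prefix of two action sequences."""
    prefix = []
    for a, b in zip(intention_a, intention_b):
        if a != b:
            break
        prefix.append(a)
    return prefix

def execute(intentions):
    """Execute intentions, performing each shared action only once."""
    executed = []   # log of actually executed actions
    done = set()    # shared actions already performed
    for intention in intentions:
        for action in intention:
            if action in done:
                continue  # already executed for another intention
            executed.append(action)
            done.add(action)
    return executed

i1 = ["goto(kitchen)", "pick(cup)", "fill(cup)"]
i2 = ["goto(kitchen)", "pick(cup)", "wash(cup)"]
print(shared_prefix(i1, i2))  # ['goto(kitchen)', 'pick(cup)']
print(execute([i1, i2]))      # shared prefix runs once
```

A real extension would, of course, have to perform this merging under the late-binding substitution context rather than on ground actions, and verify that the merged actions are indeed non-exclusive.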
At the same time, the integration of contemporary artificial intelligence methods, in particular attention mechanisms and reinforcement learning, is envisaged for adaptive control over the selection of intention combinations and their shared parts.
The outcome of the work is expected to be an experimental BDI system that reduces redundant action execution and exhibits higher efficiency in practical decision-making compared to traditional BDI interpretations.
Supervisor: Zbořil František, doc. Ing., Ph.D.
Research in Agentic AI for chip design and Electronic Design Automation (EDA) is rapidly shifting from passive, single-point tool optimisation to autonomous, goal-driven workflows. Agentic AI systems use reasoning agents to plan, orchestrate EDA tools, learn from outcomes, and collaborate to manage multi-step tasks throughout the semiconductor design lifecycle.
The goal of the research is to bring AI-native autonomy to various stages of the frontend and backend components of the IC design, including RTL, verification and physical design. An initial step will be to adapt and use specialised models trained on Verilog, netlists, and layout data to understand the semantic structure of electronic circuits, translate requirements into synthesizable code, and iterate automatically using compiler feedback in the context of the RISC-V Automotive Processor explored in the Chips JU project Rigoletto.
Supervisor: Smrž Pavel, doc. RNDr., Ph.D.
Software projects that employ dynamic data structures in combination with pointers are highly prone to memory-related errors (such as null pointer dereferencing, memory leaks, etc.). At the same time, such code is often system-level code used in operating system kernels, shared libraries, or interpreters of higher-level programming languages.
The aim of this dissertation is to build upon existing techniques for the analysis of programs with dynamic memory, in particular those based on Separation Logic and bi-abduction. The work may focus on the C/C++ programming language or another suitable language. It will build on prior work by members of the VeriFIT research group and by Dr. F. Zuleger from TU Wien.
An alternative direction of the dissertation is to build upon current techniques for automated complexity analysis, with the goal of analyzing open code and/or code with complex data structures. This direction will also build on the work of the VeriFIT group, as well as Dr. F. Zuleger from TU Wien and Dr. M. Sinn from the University of Applied Sciences St. Pölten.
The dissertation will be carried out in collaboration with the VeriFIT team at the Faculty of Information Technology, Brno University of Technology, which focuses on program verification techniques for programs with complex data structures (in particular Dr. L. Holík, Prof. T. Vojnar, Ing. T. Dacík, Ing. V. Šoková). With a responsible approach and high-quality results, there is an opportunity to participate in research grant projects, including international ones. There is also the possibility of close collaboration with various international partners of VeriFIT: TU Wien, Austria (Dr. F. Zuleger); Verimag, Grenoble, France (Dr. R. Iosif); IRIF, Paris, France (Prof. A. Bouajjani, Dr. M. Sighireanu).
Within this topic, the student may also actively participate in various grant projects conducted within the VeriFIT group.
Supervisor: Rogalewicz Adam, doc. Mgr., Ph.D.
This dissertation will focus on the development of advanced methods for forensic analysis of visual data in situations with extremely limited availability of training samples. In the field of criminalistics and security agencies, it is common to work with unique, rare, or difficult-to-obtain types of evidence for which extensive annotated datasets do not exist, significantly limiting the applicability of standard deep learning methods. The rapid development of approaches such as few-shot and zero-shot learning, together with the potential of knowledge transfer from large pre-trained models, opens new possibilities for efficient processing of such data and increases the potential for automated support in forensic decision-making.

The research will address the analysis of visual data under conditions of data scarcity with a focus on practical applicability, specifically:
(1) the design of methods based on few-shot and zero-shot approaches for identification and classification of forensic evidence with minimal training samples;
(2) the use of knowledge transfer from large pre-trained models and their adaptation to specific forensic tasks; and
(3) the development of methods for synthetic data generation and augmentation to improve the robustness and generalization capabilities of models in real-world forensic practice.
Supervisor: Hanáček Petr, doc. Dr. Ing.
Evaluation of the completeness of test suites with respect to software requirements represents a fundamental challenge in the verification of critical systems, particularly in regulated domains. Current practice is largely based on manual assessment, which is time-consuming and error-prone. This dissertation will focus on the research of formal and automated methods for analyzing requirement coverage by test cases, with an emphasis on the semantic aspects of requirements and their implicit behavior.
The goal of the dissertation is to propose a formal framework for representing requirements and test cases, and to define objective coverage criteria derived from the semantics of requirements (e.g., combinations of conditions, temporal constraints, sequences of actions, or state transitions). On this basis, methods for computing the coverage level of a test suite and for identifying uncovered semantic cases will be investigated.
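As a toy illustration of one such criterion, consider a requirement whose semantics distinguishes all combinations of a few boolean conditions; coverage is then the fraction of those combinations exercised by the suite. This is a hedged sketch of the general idea only, with purely illustrative names, not the formal framework the dissertation is to propose.

```python
from itertools import product

def semantic_cases(n_conditions):
    """All truth-value combinations of n boolean conditions."""
    return set(product([False, True], repeat=n_conditions))

def coverage(test_suite, n_conditions):
    """Fraction of semantic cases hit by at least one test case,
    plus the set of uncovered cases."""
    required = semantic_cases(n_conditions)
    exercised = {tuple(t) for t in test_suite} & required
    return len(exercised) / len(required), required - exercised

# Requirement over two conditions (e.g. "door_closed", "speed_zero");
# this suite misses the (False, False) combination.
suite = [(True, True), (True, False), (False, True)]
cov, missing = coverage(suite, 2)
print(cov)      # 0.75
print(missing)  # {(False, False)}
```

Real criteria derived from temporal constraints or state transitions would replace the simple combination enumeration with traversal of a richer semantic model.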
The research will focus in particular on the following areas:
The dissertation will also include the design and implementation of prototype tools supporting selected aspects of the proposed approach, along with their validation on examples from embedded and avionics systems. Emphasis will be placed on integrating formal methods, software testing, and automated analysis techniques, with the aim of contributing to systematic and repeatable evaluation of test suite quality.
The objective of this thesis is to investigate the cognitive load and stress levels experienced by workers during direct collaboration with robotic systems. The research utilises Mixed Reality (MR) technologies and Digital Twins to perform an in-depth analysis of factors affecting the human psyche in automated environments. The primary task is to identify the causes of stress across various industrial scenarios and propose methods to prevent or mitigate these negative effects through an adaptive interface.
The work bridges empirical testing on real robotic workstations with their digital twins, allowing for the safe simulation and optimisation of critical situations. To quantify workload, biometric signals (e.g., heart rate, eye-tracking, or brain activity) will be used to provide feedback for dynamically adjusting the MR environment. By leveraging augmented reality for navigation and diminished reality to filter out environmental distractions, the research aims to create a workspace that adapts in real-time to the operator's current needs and cognitive capacity.
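The feedback loop described above can be caricatured in a few lines: a biometric signal drives the level of detail presented in the MR interface. Everything here is hypothetical (the heart-rate baseline, the thresholds, the detail levels); a real system would estimate workload from multiple fused signals.

```python
# Hedged sketch: a single biometric signal (heart rate) mapped to an
# MR interface detail level. Baseline and thresholds are illustrative.

def adapt_interface(heart_rate, baseline=70):
    """Return an MR detail level from a crude stress estimate."""
    stress = (heart_rate - baseline) / baseline
    if stress > 0.3:
        return "minimal"    # diminished reality: core cues only
    if stress > 0.1:
        return "reduced"    # hide secondary overlays
    return "full"           # full augmented-reality guidance

print([adapt_interface(hr) for hr in (68, 80, 95)])
# ['full', 'reduced', 'minimal']
```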
The contribution of this research lies in defining standards for ergonomic and safe human-robot interaction in the spirit of Industry 5.0. The results will provide the industry with procedures for deploying advanced technologies that increase production efficiency while also protecting employees' mental health and long-term well-being. This approach is essential for the future integration of complex robotics into standard workflows without risking cognitive overload.
Supervisor: Beran Vítězslav, doc. Ing., Ph.D.
Large language models (LLMs) are currently driving much of the progress in the area of artificial intelligence, especially due to their ability to perform a wide range of tasks zero-shot – based on a natural language description of the task – and due to the large amount of world knowledge they possess.
However, the current generation of LLMs still struggles in ways that bar them from becoming truly capable at a wide range of challenging tasks, for example:
There is also the limited ability to efficiently learn from experience on the job (which is a component in how humans achieve true competence) and many other struggles. While this is – to a large extent – true of all current models, the issues are typically especially pronounced for small models. This, of course, has serious implications for a lot of applications that require compact models, which can be run locally.
The aim of this research is to:
Relevant publications:
The research will be performed at the Kempelen Institute of Intelligent Technologies (KInIT, https://kinit.sk) in Bratislava in cooperation with industrial partners or researchers from highly respected research units from abroad. A combined (external) form of study and full employment at KInIT is expected.
Supervisor: Gregor Michal, doc. Ing., Ph.D.
Investigate computational musicology and the computational models used in this scientific area, including grammars and automata. Study the properties of these models, such as their power and descriptional complexity, concentrating on their power. Publish the results of this study at the highest possible level, including prestigious international journals. Design new methods of making computer music based upon these models. Apply the achieved methods in music to classify or create selected music passages. Implement the achieved applications, evaluate them, and compare them against existing implementations. Publish all achieved results at the highest possible level, including prestigious international journals.
Supervisor: Meduna Alexandr, prof. RNDr., CSc.
The aim of this dissertation is to conduct research in the field of generative artificial intelligence—whether it involves diffusion and adversarial models for video generation, generative text models for story creation, the automatic generation of computer code and music, the representation of knowledge in physics and chemistry, the promotion of scientific creativity, or a combination of all these approaches. The work will focus on addressing issues related to human interaction with generated intermediate results, the natural labeling of individual parts and concepts so that it is possible to build upon ongoing results, and the development of methods for modifying datasets and learning procedures to address societal issues associated with the creative models being developed—questions of model fairness, bias, and the incorporation of concepts of so-called responsible artificial intelligence.
Starting from the entire body of knowledge concerning formal models of discontinuous computation, this project will proceed towards the development of new versions of these models, which reflect current discontinuous computation in a more sophisticated and adequate way. Properties of these newly developed systems will be studied in detail. Their applications will be discussed and implemented in all scientific areas that involve discontinuous computation, such as bioinformatics.
The first results will be published in Acta Informatica in 2025.
Artificial convolutional and transformer neural networks (NNs) have recently been widely used in computationally intensive tasks such as classification, prediction, and recognition. The aim of this dissertation will be to propose methods for optimizing NN inference on embedded devices. The task will be to reduce the energy consumption of the resulting implementation using advanced techniques, such as finding suitable data and weight representations, network architectures, optimizing computation scheduling on existing platforms, or implementing parts of the NN computation in-house. This research falls within the scope of topics addressed by the EvoAI Hardware research group.
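One of the representation changes mentioned above, replacing float weights with narrow integers, can be sketched in a few lines. This is a generic, hedged illustration of uniform per-tensor 8-bit quantization (not a method of the EvoAI Hardware group); in real flows it reduces memory traffic, a major contributor to inference energy.

```python
# Hedged sketch: uniform 8-bit quantization of weights, one of the
# representation changes that can reduce memory traffic and energy.
# Pure-Python illustration; real flows operate on whole tensors.

def quantize_int8(weights):
    """Map float weights to signed 8-bit integers with one scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.5, -1.0, 0.25, 0.9]
q, s = quantize_int8(w)
err = max(abs(a - b) for a, b in zip(dequantize(q, s), w))
print(q)           # e.g. [64, -127, 32, 114]
print(err < 0.01)  # True
```

The dissertation would, of course, study far richer representations (per-channel scales, sub-8-bit formats, non-uniform codebooks) and their interaction with the target platform.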
Supervisor: Mrázek Vojtěch, doc. Ing., Ph.D.
Different types of logics and automata are among the most fundamental objects studied and applied in computer science for decades. Nevertheless, there are many unsatisfactorily solved problems in this area, and new, exciting problems related to ever new applications of logics and automata are constantly emerging (e.g., in the formal verification of finite and infinite state systems with various advanced control or data structures, in decision procedures, in program or hardware synthesis, in quantum program verification, or even in methods for efficient search in various types of data or network traffic).
The subject of the dissertation will primarily be the development of the state of the art in efficient work with various logics (e.g. over pointer structures, strings, various arithmetic, temporal logics, etc.). To this end, approaches based on different types of decision diagrams, automata, but also e.g. approaches based on the existence of a model of bounded size or on efficient reductions between different types of logical theories will be investigated. In connection with this, methods for efficient work with decision diagrams and different types of automata (automata over words, trees, infinite words, automata with counters, registers, etc.) will be developed. The work will include theoretical research as well as prototype implementation of the proposed techniques and their experimental evaluation.
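As a tiny taste of the kind of automaton machinery the topic builds upon, the sketch below runs a word through a nondeterministic finite automaton by on-the-fly subset computation. It is a generic textbook construction under illustrative names, not a technique specific to this dissertation.

```python
# Illustrative sketch: NFA membership via on-the-fly subset
# computation, a basic building block of automata-based methods.

def nfa_accepts(delta, start, accepting, word):
    """delta maps (state, symbol) -> set of successor states."""
    current = {start}
    for symbol in word:
        current = set().union(
            *[delta.get((q, symbol), set()) for q in current])
    return bool(current & accepting)

# NFA accepting words over {a, b} that end in "ab".
delta = {
    (0, "a"): {0, 1},
    (0, "b"): {0},
    (1, "b"): {2},
}
print(nfa_accepts(delta, 0, {2}, "aab"))  # True
print(nfa_accepts(delta, 0, {2}, "aba"))  # False
```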
The work will be solved in collaboration with the VeriFIT team working on the development of techniques for working with logics and automata and their applications at FIT BUT. There is also the possibility of close cooperation with various foreign partners of VeriFIT: Academia Sinica, Taiwan (Prof. Y. Chen); TU Vienna, Austria (Assoc. F. Zuleger); LSV, ENS Paris-Saclay (Assoc. M. Sighireanu); IRIF, Paris, France (Assoc. A. Bouajjani, Assoc. P. Habermehl, Assoc. C. Enea), Verimag, Grenoble, France (Assoc. R. Iosif); Uppsala University, Sweden (prof. P.A. Abdulla, prof. B. Jonsson); or RPTU, Kaiserslautern, Germany (prof. A.W. Lin).
Supervisor: Lengál Ondřej, doc. Ing., Ph.D.
The aim of this dissertation is to investigate models of embedded intelligence that explicitly account for the energy consumption of specific operations and optimise their performance based on specific constraints imposed by individual devices or the entire system. It will also include implementing selected models on suitable hardware for international projects in which the supervisor is involved.
Supervisor: Sekanina Lukáš, prof. Ing., Ph.D.
Explainable AI (XAI) centres on enhancing the trust, interpretability, and usability of complex models. Key areas include creating human-centred explanations, establishing objective evaluation metrics, and balancing accuracy with interpretability. Recently, the research has shifted toward interactive explanations, evaluating how AI models handle data and bias, transparently communicating result uncertainty, and evaluating the quality of the generated explanations.
The research will focus on human-centred and cognitive aspects: aligning AI explanations with human psychology and cognitive abilities, highlighting differences between human-human and AI-human explanations, developing interactive, dialogue-based explanations that allow users to ask "what-if" questions and explore model behaviours, and objectively measuring the quality, understandability, and effectiveness of an explanation for a user. The initial application domain will be fact-checking in the context of the fight against disinformation.
Development in the field of quantum computers is moving forward unstoppably. However, for the effective use of quantum hardware, it is necessary to have the right support at the software level, i.e., quantum algorithms for solving problems, as well as tools for compiling, optimizing and synthesizing quantum programs, mapping them to a specific quantum technology, mapping them to a fault tolerant protocol, analyzing, simulating, and debugging quantum programs, etc.
The subject of the dissertation will primarily be the development of current knowledge in the field of formal methods in quantum program design. The main focus here will be on the use of automata theory, automatic reasoning, and decision diagrams. In connection with this, methods for effective work with decision diagrams and various types of automata (automata over words, trees, infinite words, automata with counters, registers, etc.) will also be developed. The work will include both theoretical research and prototype implementation of the proposed techniques and their experimental evaluation.
The work will be carried out in collaboration with the VeriFIT team at FIT BUT, which develops techniques for working with logics and automata and their applications. There is also the possibility of close cooperation with various foreign VeriFIT partners: Academia Sinica, Taiwan (prof. Y. Chen); Uppsala University, Sweden (prof. P.A. Abdulla, prof. R. S. Thinniyam); or Leiden Institute for Advanced Computer Science, Leiden, the Netherlands (prof. A. Laarman).
Starting from the entire body of knowledge concerning formal systems of distributed computation, this project will proceed towards the development of new versions of these systems, which reflect current distributed computation in a more sophisticated and adequate way. Properties of these newly developed systems will be studied in detail. Their applications will be primarily discussed and implemented in bioinformatics and language translators.
Starting from the entire body of knowledge concerning formal systems of parallel computation, this project will proceed towards the development of new versions of these systems, which reflect current parallel computation in a more sophisticated and adequate way. Properties of these newly developed systems will be studied in detail. Their applications will be primarily discussed and implemented in bioinformatics and language translators.
This PhD topic is driven by the needs of users for whom data privacy is crucial, and concentrates on improving AI (especially speech recognition) systems in cases where the user is able to identify and correct system errors. The topic includes proper evaluation of human-in-the-loop (HIL) systems, selection of data to be proposed for correction, and actual fine-tuning/adaptation techniques working with large pre-trained models.
Supervisor: Černocký Jan, prof. Dr. Ing.
Large language models (LLMs) are increasingly being used for a wide range of tasks and in various contexts in an interactive (chat-like) manner. The challenge is to make this interaction aligned with human expectations and values while avoiding reinforcing biases or negative social behavior of a model such as sycophancy, manipulativeness, etc. To do this, we need to be able to measure the extent of a positive or a negative behavior of the model and (ideally) have large datasets of human-LLM interactions together with human preferences that can be used to tune the models. For the latter, synthetic data generated by models trained on actual users’ data can sometimes be used instead.
The main goal is to improve the human-LLM interaction by using human inputs (such as their preferences) to fine-tune LLMs’ conversational capabilities while controlling for unwanted behavior (such as biases, sycophancy, manipulativeness, etc.). Alternatively, the topic can focus on measurement of the negative or positive social behavior of LLMs in various situations and contexts, or on improving safety of the models while taking the context and user preferences into account.
The application domains are diverse, including (but not limited to) auditing of LLMs’ safety with respect to various user groups (including children and young adults), misuse of LLMs for disinformation generation, or credibility signals detection and assessment.
The research will be performed at the Kempelen Institute of Intelligent Technologies (KInIT, https://kinit.sk) in Bratislava in collaboration with industrial partners or researchers from highly respected research units involved in international projects. A combined (external) form of study and full employment at KInIT is expected.
Supervisor: Móro Róbert, Ing., Ph.D.
The rapid progress of large language models (LLMs) and other foundation models has transformed natural language processing (NLP), enabling powerful applications such as conversational agents, code assistants, and advanced information access systems. These technologies already affect everyday life and are reshaping how organisations work with text and knowledge.
Despite their success, LLMs still face important open challenges. Many models behave as black boxes, can hallucinate content, reproduce social biases, or fail when moved to new domains, tasks, or low-resource languages. At the same time, there is growing demand for models that are trustworthy, efficient, and easy to adapt, while respecting data, safety, and regulatory constraints.
This PhD project will investigate methods for understanding, adapting, and controlling large language models, with a particular focus on trustworthiness and low-resource / domain-specific NLP. The work can combine theory, modelling, and practical experimentation on real-world datasets.
Illustrative research challenges include:
The topic offers substantial flexibility and can be tailored to the candidate’s interests, ranging from more theoretical work on model behaviour to application-driven research in collaboration with academic or industrial partners.
Supervisor: Šimko Marián, doc. Ing., Ph.D.
Despite the proliferation of large language models (LLMs) with their proclaimed universal applicability, many tasks of automatic text processing remain insufficiently solved. Tasks in low-resource languages are solved with lower success, and tasks that require sensitivity to details, context, or fresh information also remain problematic. Examples at the intersection of both these dimensions include processing data from social media for the purpose of detecting narratives, analyzing discourse, detecting toxic content, detecting organized influence campaigns, revealing manipulation, supporting fact-checking, or auditing social media recommenders.
Since the above problem can be understood as a lack of data for specific tasks, one of the solutions can be to use the capabilities of LLMs, not directly as task solvers, but rather for generation or augmentation of data samples. Using these, specialized (smaller) models could be created for specific tasks. The advantage of such an approach is a higher degree of control over the final model (we can control the generated data), as well as the lower cost and footprint of the final model (it is smaller). On the other hand, the disadvantage of such an approach is that it is less straightforward and methodologically ambiguous. That ambiguity, however, offers an opportunity for research.
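The "LLM as data generator, small model as task solver" pipeline can be caricatured as follows. The `paraphrase` function is a stand-in for an LLM call and is purely hypothetical, as are the labels and examples; the point is only the shape of the loop: expand a small labeled seed set, then train a compact model on the expanded data.

```python
# Sketch of the augmentation loop. `paraphrase` is a stub standing in
# for an LLM-based generator; here it just produces noisy word-order
# variants so the example is self-contained and deterministic in size.

import random

def paraphrase(text, n=3):
    """Stub for an LLM augmenter: produce n noisy variants."""
    words = text.split()
    variants = []
    for _ in range(n):
        shuffled = words[:]
        random.shuffle(shuffled)
        variants.append(" ".join(shuffled))
    return variants

def augment(seed_data):
    """Expand a small labeled seed set with generated samples."""
    out = list(seed_data)
    for text, label in seed_data:
        out.extend((v, label) for v in paraphrase(text))
    return out

seeds = [("vaccines cause outrage online", "narrative"),
         ("local bakery opens new branch", "other")]
data = augment(seeds)
print(len(data))  # 8 = 2 seeds + 2 * 3 generated variants
```

A specialized classifier would then be trained on `data`; the research questions lie in controlling the generator so that the synthetic samples actually reflect the target domain.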
The topic of the dissertation envisions research of methods and methodologies for creating automatic approaches for classification (or other automatic processing) of texts in heterogeneous and unstable domains. These are characterized by a lack of resources, primarily labeled data, but secondarily also limited computing power.
The research will be performed at the Kempelen Institute of Intelligent Technologies (KInIT, https://kinit.sk) in Bratislava in cooperation with industrial partners or researchers from highly respected research units. A combined (external) form of study and full employment at KInIT is expected.
Supervisor: Šimko Jakub, doc. Ing., PhD.
The multilingual capabilities of large language models (LLMs) enable us to generate texts in many languages, or even to analyze existing multilingual texts. However, standard methods and metrics to measure the quality of multilingual texts are missing. Analysis, evaluation, and annotation of such texts by humans is costly and time-consuming, while the replicability of the annotations is questionable (due to the difficulty of ensuring annotator consistency). In some languages, it is quite difficult to obtain human annotations, while also balancing the level of expertise and demographic diversity of annotators across languages. For some time, researchers have been experimenting with using LLMs themselves to evaluate various aspects of text quality (e.g., coherence, grammatical correctness, linguistic acceptability) in multiple languages, whether the texts are generated by language models or written by humans. However, even LLM annotations can be biased, due to internal biases of the meta-evaluation models caused by their training data.
It is important to continually increase the robustness of meta-evaluation (to limit biases and increase objectivity), whether by increasing the number of evaluation LLMs or by ensuring diversity via LLM (persona-like) behavior modifications. The feasibility of the meta-evaluation method must be considered by assessing its computational efficiency and overall practical usability. It is also required to measure the correlation between meta-evaluation and human judgement, and to evaluate for which aspects of text quality and in which languages such meta-evaluation is a suitable and usable alternative.
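Measuring the agreement between LLM meta-evaluation and human judgement typically comes down to a rank correlation. The sketch below computes Spearman's coefficient in pure Python (no tie handling); the score lists are hypothetical, chosen only to show the computation.

```python
# Sketch: correlating LLM-judge scores with human ratings using
# Spearman's rank correlation (simplified: assumes no tied scores).

def ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

llm_scores   = [4.5, 3.0, 5.0, 2.0, 4.0]  # hypothetical LLM-judge scores
human_scores = [4.0, 3.5, 5.0, 1.0, 4.5]  # hypothetical human ratings
print(spearman(llm_scores, human_scores))  # 0.9
```

In practice one would handle ties (or use an existing statistics library) and report the correlation per language and per quality aspect.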
The research will be performed at the Kempelen Institute of Intelligent Technologies (KInIT, https://kinit.sk) in Bratislava in cooperation with industrial partners or researchers from highly respected research units involved in international projects. A combined (external) form of study and full employment at KInIT is expected.
Supervisor: Macko Dominik, doc. Ing., Ph.D.
Artificial intelligence methods, particularly large language models, are currently developing rapidly, enabling advanced processing and interpretation of extensive unstructured data. However, efficient knowledge mining from heterogeneous data sources and its transformation into a form suitable for training and improving AI systems remains an open problem. The quality and structure of input knowledge fundamentally influence the resulting accuracy, robustness, and interpretability of these systems.
With the increasing complexity of data and the requirements for automated decision-making, there is a growing need for methods that allow systematic extraction, representation, and utilization of knowledge across various domains. This problem offers both a significant theoretical contribution and broad application potential in the field of intelligent agents and decision support systems. The aim of the doctoral thesis is to design, implement, and experimentally verify new approaches to mining, representing, and utilizing knowledge from heterogeneous data sources in order to improve the performance of AI systems based on language models.
At the same time, the thesis will be conceived generally so that the results are transferable to other areas where working with knowledge and data is key, such as enterprise information systems, decision support systems, autonomous AI agents, or applications in industry, healthcare, or public administration.
Supervisor: Burget Radek, doc. Ing., Ph.D.
This dissertation focuses on developing means for multi-level modeling and action coordination in the context of holonic systems, using the Robot Operating System (ROS). Holonic systems, composed of autonomous entities called holons, offer a flexible approach to organizing autonomous elements within higher organizational levels. The holon concept finds application in a wide range of areas, including industrial robotics, intelligent transportation, autonomous vehicles, sensor systems, and, more generally, cyber-physical systems (CPS), Industry 4.0, and the Internet of Things (IoT). The Robot Operating System (ROS) is a framework for developing distributed intelligent systems and robots that enables communication and cooperation among autonomous entities. The goal of this work is to provide new methods and tools leading to an effective integration of holonic concepts into the ROS environment. The work is also expected to bring innovative approaches to coordinating actions among autonomous entities at different levels of system organization. The proposed means will be verified experimentally in both real and simulated environments, with emphasis on possible applications in industry and emerging technologies.
Supervisor: Janoušek Vladimír, doc. Ing., Ph.D.
Problem Statement: The importance of mental health has increased significantly over the past decade. However, methods for assessing mental health issues at early stages are still in their infancy compared to the methods available for early assessment of physical health issues. Hence, research is needed to develop methods for the early assessment of abnormalities leading to mental health problems.
Issues with Current Solutions: Unlike physical health parameters, mental health is assessed through a number of subjective parameters. Hence, there is a lack of objective and quantitative methods for mental health assessment. In addition, patients tend to seek help only when their mental health problem is at an advanced stage, so continuous monitoring for mental health issues is lacking.
Challenges: Many of the abnormalities related to mental health issues are subtle in nature and manifest as changes in behavior, facial expressions, speech, and handwriting. In addition, there are changes in cortisol levels, skin conductance, heart rate variability, and breathing rate. Hence, multiple modalities should be included for measuring and quantifying any abnormalities related to mental health.
Solution: Every modality has its pros and cons. For example, in neuroimaging, functional magnetic resonance imaging has high spatial resolution (in mm) but low temporal resolution (in seconds), while the electroencephalogram has low spatial resolution (in cm) but high temporal resolution (in milliseconds). Combining the two can yield both high spatial and high temporal resolution. This research deals with the assessment of abnormalities leading to mental health problems by utilizing a multimodal approach. The modalities may include, but are not limited to, electroencephalogram (EEG) brain signals, facial videos, speech audio, handwriting, and text from social media. The physiological parameters derived from these modalities include, but are not limited to, heart rate, breathing rate, dominant emotion, fatigue, and stress. The dominant emotion can be classified as positive or negative and then sub-classified as sad, happy, angry, etc. Data mining and data fusion techniques will be developed for this multimodal analysis. The corresponding multimodal data is available for this project.
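As one illustration of what the data-fusion step might look like, the sketch below performs simple late fusion: each modality's classifier outputs an abnormality score in [0, 1], and the scores are combined with reliability weights. All names and numbers are hypothetical, not part of the project's actual method:

```python
def late_fusion(scores, weights):
    """Weighted late fusion of per-modality abnormality scores in [0, 1].

    scores  -- dict: modality -> score from that modality's classifier
    weights -- dict: modality -> reliability weight (need not sum to 1)
    """
    total = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total

# Hypothetical per-modality classifier outputs for one subject.
scores = {"eeg": 0.80, "speech": 0.60, "face": 0.40}
weights = {"eeg": 0.5, "speech": 0.3, "face": 0.2}
print(f"fused abnormality score: {late_fusion(scores, weights):.2f}")
```

More advanced fusion (e.g., feature-level fusion or learned fusion weights) follows the same pattern: modalities with higher reliability contribute more to the final assessment.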
Few Words About Supervision: I have extensive experience in the field of neuro-signal and neuroimage processing, and I currently head a research group in this area. This is a multidisciplinary project that will involve working with clinicians; however, the core of the project is IT-related, namely the development of a new method. Please feel free to contact me at malik@fit.vutbr.cz
Supervisor: Malik Aamir Saeed, prof., Ph.D.
Different ways of representing 3D data (point clouds, polygonal meshes, volumetric data, depth maps, 2D views) capture the same geometric object from different perspectives, varying in structure, topology, and informational capacity. Current deep learning methods are typically designed for a single specific representation and cannot effectively transfer knowledge of 3D shape across modalities. The dissertation will focus on researching models and mechanisms for multimodal fusion that overcome this limitation.
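To illustrate how the same shape moves between representations, here is a minimal sketch (illustrative only) that converts a point cloud into a set of occupied voxels, one of the simplest cross-representation mappings:

```python
def voxelize(points, resolution):
    """Convert a 3D point cloud into the set of occupied voxel indices
    of a resolution^3 grid spanning the cloud's bounding box."""
    xs, ys, zs = zip(*points)
    mins = (min(xs), min(ys), min(zs))
    # Avoid division by zero for degenerate (flat) clouds.
    spans = [(max(c) - m) or 1.0 for c, m in zip((xs, ys, zs), mins)]
    occupied = set()
    for p in points:
        idx = tuple(
            min(int((p[d] - mins[d]) / spans[d] * resolution), resolution - 1)
            for d in range(3)
        )
        occupied.add(idx)
    return occupied

# Toy cloud: three points in the unit cube, voxelized into a 2x2x2 grid.
cloud = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (0.9, 0.9, 0.9)]
print(sorted(voxelize(cloud, 2)))  # [(0, 0, 0), (1, 1, 1)]
```

The conversion is lossy (two nearby points collapse into one voxel), which is exactly the kind of difference in informational capacity between representations that multimodal fusion models must cope with.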
The research objectives and tasks relevant to the dissertation topic include:
The dissertation will also include the application of the investigated methods within projects or contract research in which the supervisor is involved.
Supervisor: Španěl Michal, doc. Ing., Ph.D.
The work will start with getting familiar with the basics of voice deepfake detection (DFD): its terminology, available techniques, data, and challenges (AVSpoof, WildSpoof); with the history and state-of-the-art techniques and tools for speaker recognition (the wespeaker toolkit); and with state-of-the-art techniques and tools for personalized text-to-speech (pTTS) synthesis and voice conversion. The topic will then concentrate on evaluating human and machine performance in DFD, with particular attention to motivating human subjects so as to simulate true attacks on people. The PhD will then progress along both (1) technical lines (advances in deepfake generation and detection technology) and (2) human aspects. The work counts on collaboration with specialists in psychology and sociology.
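Machine performance in DFD is commonly summarized by the equal error rate (EER), the operating point where the false acceptance rate (spoofed audio accepted) equals the false rejection rate (bona fide audio rejected). A minimal sketch with made-up detector scores (higher = more likely bona fide):

```python
def equal_error_rate(bonafide, spoof):
    """Approximate EER by sweeping a decision threshold over all
    observed scores; returns the error rate where FAR and FRR meet."""
    best_gap, eer = float("inf"), 1.0
    for t in sorted(set(bonafide + spoof)):
        far = sum(s >= t for s in spoof) / len(spoof)       # false acceptance
        frr = sum(s < t for s in bonafide) / len(bonafide)  # false rejection
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Hypothetical detector scores for bona fide and spoofed utterances.
bonafide = [0.9, 0.8, 0.7, 0.4]
spoof = [0.5, 0.3, 0.2, 0.1]
print(equal_error_rate(bonafide, spoof))  # 0.25
```

The same metric can be computed over human judgements (treating each listener's confidence as a score), which makes human and machine DFD performance directly comparable.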
Investigate philosophical, logical, and mathematical studies by crucially important thinkers whose works are central to the philosophical foundations of computer science. Pay special attention to the works of Rudolf Carnap, Kurt Gödel, Bertrand Russell, Ludwig Wittgenstein, Alan Turing, Otto Neurath, Herbert Feigl, Philipp Frank, Friedrich Waismann, Hans Hahn, Hans Reichenbach, Carl Gustav Hempel, Alfred Tarski, Willard Van Orman Quine, and Alfred Ayer. Build a systematic and compact body of knowledge that explains the philosophical and logical fundamentals of computer science in detail. Publish the achieved results in prestigious international journals, and publish a summary of all these results as a monograph with Springer.
Recommender systems are an integral part of almost every modern Web application. Personalized, or at least adaptive, services have become a standard that users expect in almost every domain (e.g., news, chatbots, social media, or search).
Obviously, personalization has a great impact on the everyday life of hundreds of millions of users across many domains and applications. This results in a major challenge: to propose methods that are not only accurate but also trustworthy and fair. Such a goal offers plenty of research opportunities in many directions:
There are several application domains where these research problems can be addressed, e.g., search, e-commerce, social networks, news, and many others.
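As an example of the fairness direction, a simple provider-side probe is to measure how top-k recommendation exposure is distributed across item groups. A minimal sketch with hypothetical items and groups (the names are illustrative only):

```python
def exposure_by_group(rankings, group_of, k):
    """Share of top-k recommendation slots given to each item group,
    a simple provider-side fairness probe for recommendation lists."""
    counts, total = {}, 0
    for ranked in rankings:
        for item in ranked[:k]:
            g = group_of[item]
            counts[g] = counts.get(g, 0) + 1
            total += 1
    return {g: c / total for g, c in counts.items()}

# Hypothetical top-3 lists served to two users; items belong to two
# provider groups, A and B.
rankings = [["i1", "i2", "i3"], ["i1", "i4", "i2"]]
group_of = {"i1": "A", "i2": "A", "i3": "B", "i4": "B"}
print(exposure_by_group(rankings, group_of, k=3))
```

A large gap between a group's exposure share and, for example, its share of the catalog or of user relevance signals a potential fairness issue worth deeper analysis.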
Supervisor: Kompan Michal, doc. Ing., PhD.
The goal of the doctoral program will be to develop the newly proposed technique, Malleable Glyphs, in the fields of human-computer interaction, visualization, and computer vision. The researcher will develop machine learning and computer vision algorithms for glyphs. The researcher will also co-organize The Malleable Glyph Challenge, design new glyph modalities, evaluate glyphs submitted by students and individuals from third-party institutions, and create a glyph taxonomy.
Supervisor: Herout Adam, prof. Ing., Ph.D.
This PhD topic is focused on speech recognition and data processing for ATC-pilot communication in aviation. It will cover all components of automatic speech recognition (ASR), i.e., data processing, the acoustic model, the vocabulary (including special aviation terminology), the language model, as well as interaction with sources of meta-data (radar information, airport). Special attention will be paid to the use of large pre-trained models, code-switching, reliable language identification from short segments, and adaptation to the conditions of a given airport using crawled data.
The proliferation of AI-driven recommendation systems on social media has intensified the need for methods that can anticipate how users engage with presented content. While current “digital twin” approaches simulate the behaviour of specific individuals, this PhD project is motivated by the need to predict the next interaction of user archetypes, i.e., personas defined by demographic and interest attributes. Despite the rapid evolution of generative AI models, including LLMs and multimodal systems, their ability to forecast user actions such as clicking, liking, skipping, or dwelling on social-media content remains unexplored.
The core objective of the PhD research is to investigate and compare AI-based approaches for predicting the next user interaction in social media environments, focusing on user archetypes rather than individuals. To do this, the work will explore generative modelling strategies (e.g., LLM-based predictors), multimodal encoders, and hybrid architectures capable of processing and reacting to streamed online content. A significant part of the thesis will address real-time content annotation, identifying which text/image/video attributes best predict behavioural responses, and how Parameter-Efficient Finetuning Techniques (PEFTs) can be used to adapt models rapidly and cost-effectively. The project will also analyse evaluation methodologies for sequential user-interaction forecasts, including simulation-based benchmarks that reflect realistic platform dynamics.
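To illustrate the idea behind LoRA-style PEFT mentioned above, the sketch below keeps a weight matrix frozen and adds a trainable low-rank update scaled by alpha/r. The tiny pure-Python matrices are purely illustrative, not tied to any specific model:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_forward(x, W, A, B, alpha, r):
    """y = x @ (W + (alpha / r) * (A @ B)): the frozen weight W plus a
    trainable low-rank update A @ B, as in LoRA-style PEFT.
    Only A and B (few parameters) would be trained; W stays frozen."""
    delta = matmul(A, B)  # low-rank update of rank r
    scale = alpha / r
    W_eff = [[w + scale * d for w, d in zip(wr, dr)]
             for wr, dr in zip(W, delta)]
    return matmul(x, W_eff)

# Toy example: 2x2 frozen weight, rank-1 adapter (A: 2x1, B: 1x2).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0], [0.0]]
B = [[0.0, 2.0]]
print(lora_forward([[1.0, 1.0]], W, A, B, alpha=1.0, r=1))  # [[1.0, 3.0]]
```

Because only the small factors A and B are updated, an archetype-specific adapter can be trained and swapped cheaply, which is what makes PEFT attractive for rapid, cost-effective model adaptation.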
The domain of application will be algorithmic auditing, a research practice that evaluates how algorithmic systems operate and whether they comply with legal, ethical, or performance expectations. More accurate next-interaction prediction models can make these auditing approaches substantially more authentic, enabling simulated agents to behave in a more organic, human-like manner when interacting with recommender systems. Such improvements can strengthen audit validity, allow testing at scale, and produce a deeper understanding of the impact that recommender design has on different user archetypes.
Supervisor: Srba Ivan, Ing., Ph.D.
The goal of the doctoral study is to analyze the characteristics of modern and popular user interfaces, with a focus on cognitive load, addiction-inducing techniques, the perception of AI-generated content, the perception of advertising, and other significant contemporary phenomena. The student will prototype user interface elements and design and conduct user tests that will enable a deeper understanding of the phenomena under study, highlight existing risks, and lead to the design of user interface elements that will represent alternatives beneficial to the user.
Challenges that the energy sector faces today include maintaining stability of the energy grid, aggregation of flexibility, operation of energy communities, smart charging of electric vehicles, and efficient use of battery storage. Renewable resources depend strongly on the weather, so weather prediction is also a focus of interest.
The ENVIRO team at KInIT addresses the aforementioned tasks, with proposed solutions based on machine learning and artificial intelligence. We investigate and experiment with various methods, such as incorporating physical laws into machine learning models (physics-informed machine learning), transfer and federated learning, as well as reinforcement learning, a currently popular approach to optimizing decision-making in dynamic environments. Foundation models and their comparison with traditional approaches also offer an interesting research direction. We solve tasks that process time series as well as image data using computer vision methods.
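As a toy illustration of the physics-informed idea, the sketch below augments an ordinary data-fit loss with a penalty on the violation of a known balance law. The battery state-of-charge example and all names and numbers are hypothetical, not part of the team's actual models:

```python
def physics_informed_loss(pred, target, residual, lam):
    """Total loss = data fit (MSE) + lam * mean squared physics residual.

    pred, target -- model predictions and observed values
    residual     -- per-sample violation of a known physical law
                    (zero when the prediction is physically consistent)
    lam          -- weight of the physics penalty
    """
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    phys = sum(r ** 2 for r in residual) / len(residual)
    return mse + lam * phys

# Hypothetical example: predicted battery state of charge should satisfy
# soc[t+1] - soc[t] = power[t] * dt / capacity (a simple balance law).
soc_pred = [0.50, 0.58, 0.70]
soc_obs = [0.50, 0.60, 0.70]
power, dt, capacity = [1.0, 1.0], 0.1, 1.0
residual = [soc_pred[t + 1] - soc_pred[t] - power[t] * dt / capacity
            for t in range(len(power))]
print(physics_informed_loss(soc_pred, soc_obs, residual, lam=0.5))
```

Minimizing such a combined loss pushes the model toward predictions that both fit the data and respect the physics, which typically improves extrapolation when training data are scarce.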
The aim of the PhD thesis will be to propose a solution to a selected energy or environmental problem (for example, flexibility aggregation or climate modeling/weather forecasting) by applying advanced artificial intelligence approaches. The selection of appropriate machine learning methods will depend on the specifics of the problem being solved and the choice of the doctoral candidate.
Supervisor: Rozinajová Věra, Doc., Ph.D.
This thesis is intended to explore possible applications of zero-knowledge (ZK) constructs in the context of system security and/or public blockchains. ZK constructs provide public verification of the correctness of a certain computation or operation without revealing any private data related to that computation or operation. In this way, it is possible to implement, for example, public voting or auction protocols that preserve the privacy of data publicly produced by distributed participants. The most common ZK constructs are often instantiated by schemes that provide homomorphic encryption, such as ElGamal encryption or arithmetic over integers modulo N. However, the feasibility of these constructs in the public blockchain domain may vary due to potentially high costs or security aspects. The goal of this thesis is to analyze and quantify the existing options and to implement the most meaningful (and novel) applications in system security and/or decentralized blockchains.
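To illustrate the homomorphic property mentioned above, the sketch below shows textbook ElGamal over a small prime group: multiplying two ciphertexts component-wise yields an encryption of the product of the plaintexts, so a third party can aggregate values without learning them. The tiny parameters are for illustration only; real deployments use large primes or elliptic curves:

```python
import random

def elgamal_keygen(p, g):
    """Generate a toy ElGamal key pair over the group Z_p^*."""
    x = random.randrange(2, p - 1)   # private key
    return x, pow(g, x, p)           # (private, public)

def elgamal_encrypt(m, pub, p, g):
    k = random.randrange(2, p - 1)   # fresh randomness per ciphertext
    return pow(g, k, p), (m * pow(pub, k, p)) % p

def elgamal_decrypt(ct, x, p):
    c1, c2 = ct
    # c2 / c1^x mod p, with the inverse computed via Fermat's little theorem.
    return (c2 * pow(c1, p - 1 - x, p)) % p

# Multiplicative homomorphism: E(a) * E(b) decrypts to a * b (mod p),
# without the multiplying party learning a or b.
p, g = 467, 2                        # tiny textbook parameters
x, pub = elgamal_keygen(p, g)
ca = elgamal_encrypt(5, pub, p, g)
cb = elgamal_encrypt(7, pub, p, g)
product = (ca[0] * cb[0] % p, ca[1] * cb[1] % p)
print(elgamal_decrypt(product, x, p))  # 35
```

This is exactly the building block that privacy-preserving voting or auction protocols exploit: encrypted contributions can be aggregated publicly and only the final tally is decrypted.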
Supervisor: Homoliak Ivan, doc. Ing., Ph.D.
Responsibility: Ing. Jiří Dressler