Course detail
Applications of Parallel Computers
FIT-PDD, Acad. year: 2021/2022
The course gives an overview of available parallel platforms and programming models, mainly shared-memory programming (OpenMP), message passing (MPI) and data-parallel programming (CUDA, OpenCL). The parallelization methodology is complemented by performance studies and applied to a particular problem. The emphasis is on practical aspects and implementation.
Language of instruction
Mode of study
Guarantor
Department
Learning outcomes of the course unit
Parallel architectures with distributed and shared memory, programming in C/C++ with MPI and OpenMP, GPGPU, parallelization of basic numerical methods.
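As an illustration of the shared-memory part of these outcomes, the following is a minimal OpenMP sketch (not taken from the course materials) that parallelizes a basic numerical method, the midpoint-rule approximation of pi; the file name and compile command are only examples.

```c
/* Minimal illustrative sketch: OpenMP reduction over a midpoint-rule
 * integration of 4/(1+x^2), which approximates pi.
 * Compile with e.g.:  gcc -fopenmp pi.c -o pi   (example command only) */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    const long n = 100000000;       /* number of sub-intervals */
    const double h = 1.0 / n;       /* width of one sub-interval */
    double sum = 0.0;

    /* Each thread accumulates a private partial sum; OpenMP combines them. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++) {
        double x = (i + 0.5) * h;   /* midpoint of the i-th sub-interval */
        sum += 4.0 / (1.0 + x * x);
    }

    printf("pi ~ %.12f (max threads: %d)\n", h * sum, omp_get_max_threads());
    return 0;
}
```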
Prerequisites
Co-requisites
Planned learning activities and teaching methods
Assessment methods and criteria linked to learning outcomes
Course curriculum
Work placements
Aims
Topics for the state doctoral exam (SDZ):
- Indicators and laws of parallel processing, the isoefficiency function, scalability.
- Parallel processing in OpenMP, SPMD, loops, sections, tasks and synchronization primitives.
- Architectures with shared memory, UMA, NUMA, cache coherence protocols.
- Blocking and non-blocking pair-wise communications in MPI (see the sketch after this list).
- Collective communications in MPI, parallel input-output.
- Architecture of superscalar processors, algorithms for out-of-order instruction execution.
- Data parallelism SIMD and SIMT, HW implementation and SW support.
- Architecture of graphics processing units, differences from superscalar CPUs.
- Programming language CUDA, thread and memory models.
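The two MPI topics above can be illustrated by the following minimal C sketch (not taken from the exam or course materials): non-blocking pair-wise communication around a ring, followed by a collective reduction. File names and the launch command are examples only.

```c
/* Minimal illustrative sketch of non-blocking pair-wise and collective MPI
 * communication. Compile/run with e.g.:
 *   mpicc ring.c -o ring && mpirun -np 4 ./ring   (example commands only) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Non-blocking pair-wise communication: pass a value around a ring. */
    int send_val = rank, recv_val = -1;
    int right = (rank + 1) % size;
    int left  = (rank + size - 1) % size;

    MPI_Request reqs[2];
    MPI_Irecv(&recv_val, 1, MPI_INT, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&send_val, 1, MPI_INT, right, 0, MPI_COMM_WORLD, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);  /* communication completes here */

    /* Collective communication: sum the received values over all ranks. */
    int global_sum = 0;
    MPI_Allreduce(&recv_val, &global_sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks = %d\n", global_sum);

    MPI_Finalize();
    return 0;
}
```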
Specification of controlled education, way of implementation and compensation for absences
Recommended optional programme components
Prerequisites and corequisites
- recommended prerequisite: Practical Parallel Programming
Basic literature
Recommended reading
Patterson, D., and Hennessy, J.: Computer Organization and Design, MIPS Edition, Morgan Kaufmann, 2013, 800 pp., ISBN: 978-0-12-407726-3
Kirk, D., and Hwu, W.: Programming Massively Parallel Processors: A Hands-on Approach, Elsevier, 2017, 540 pp., ISBN: 978-0-12-811986-0
Eijkhout, V.: Introduction to High Performance Scientific Computing, 2015, ISBN: 978-1257992546
Classification of course in study plans
- Programme DIT Doctoral 0 year of study, winter semester, compulsory-optional
- Programme CSE-PHD-4 Doctoral
branch DVI4, 0 year of study, winter semester, elective
- Programme DIT-EN Doctoral 0 year of study, winter semester, compulsory-optional
Type of course unit
Lecture
Teacher / Lecturer
Syllabus
- Parallel computer architectures, performance measures and their prediction
- Patterns for parallel programming
- Synchronization and communication techniques
- Shared variable programming with OpenMP
- Message-passing programming with MPI
- Data parallel programming with CUDA/OpenCL (see the sketch after this list)
- Examples of task parallelization and parallel applications
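As a companion to the data-parallel lecture topic, the following is a minimal CUDA sketch (not part of the official syllabus materials) showing the grid/block/thread indexing and a bounds guard in an element-wise vector addition; unified memory is used only to keep the sketch short, and the file name and compile command are examples.

```cuda
/* Minimal illustrative CUDA sketch: element-wise vector addition.
 * Compile with e.g.:  nvcc vadd.cu -o vadd   (example command only) */
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void vadd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  /* global thread index */
    if (i < n)                                      /* guard against overrun */
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);   /* unified memory: host and device share pointers */
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    int block = 256;
    int grid  = (n + block - 1) / block;            /* enough blocks to cover n */
    vadd<<<grid, block>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);                    /* expect 3.0 */
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```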
Guided consultation in combined form of studies
Teacher / Lecturer