Course detail
Applications of Parallel Computers
FIT-PDD, Acad. year: 2018/2019
The course gives an overview of current parallel platforms and programming models, namely shared-memory programming (OpenMP), message passing (MPI), and data-parallel programming (CUDA, OpenCL). The parallelization methodology is complemented by performance studies and applied to a particular problem. Emphasis is put on practical aspects and implementation.
Language of instruction
Mode of study
Guarantor
Department
Learning outcomes of the course unit
Parallel architectures with distributed and shared memory, programming in C/C++ with MPI and OpenMP, GPGPU, parallelization of basic numerical methods.
Prerequisites
Co-requisites
Planned learning activities and teaching methods
Assessment methods and criteria linked to learning outcomes
Course curriculum
Work placements
Aims
Specification of controlled education, way of implementation and compensation for absences
Recommended optional programme components
Prerequisites and corequisites
- recommended prerequisite
Practical Parallel Programming
Basic literature
Recommended reading
Kirk, D., and Hwu, W.: Programming Massively Parallel Processors: A Hands-on Approach. Elsevier, 2010, 256 pp., ISBN 978-0-12-381472-2
Pacheco, P.: An Introduction to Parallel Programming. Morgan Kaufmann Publishers, 2011, 392 pp., ISBN 978-0-12-374260-5
Classification of course in study plans
- Programme CSE-PHD-4 Doctoral
branch DVI4 , 0 year of study, winter semester, elective
Type of course unit
Lecture
Teacher / Lecturer
Syllabus
- Parallel computer architectures, performance measures and their prediction
- Patterns for parallel programming
- Synchronization and communication techniques
- Shared variable programming with OpenMP
- Message-passing programming with MPI
- Data parallel programming with CUDA/OpenCL
- Examples of task parallelization and parallel applications
Guided consultation in combined form of studies
Teacher / Lecturer