Course detail
Parallel System Architecture and Programming
FIT-ARC
Acad. year: 2017/2018
The course covers the architecture and programming of parallel systems with functional and data parallelism. It begins with parallel system theory and program parallelization. Programming for shared-memory systems in OpenMP follows, together with a description of the most widespread multi-core multiprocessors (SMP) and of advanced DSM NUMA systems. The course then turns to message-passing programming in the standardized MPI interface. Interconnection networks are covered separately, followed by their role in clusters, many-core chips, and the most powerful systems of today.
Language of instruction
Number of ECTS credits
Mode of study
Guarantor
Department
Learning outcomes of the course unit
Knowledge of the capabilities and limitations of parallel processing and the ability to estimate the performance of parallel applications. Command of language means for process/thread communication and synchronization. Competence in hardware-software platforms for high-performance computing and simulation.
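The performance-estimation outcome above can be illustrated with Amdahl's law, the standard first-order bound on parallel speedup (a general result quoted here for context, not taken from the course materials): if a fraction f of a program is inherently serial, the speedup on p processors is

```latex
% Amdahl's law: speedup with serial fraction f on p processors
S(p) = \frac{1}{f + \frac{1 - f}{p}},
\qquad
\lim_{p \to \infty} S(p) = \frac{1}{f}.
```

For example, with f = 0.05 the speedup can never exceed 20, regardless of the number of processors.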
Prerequisites
Co-requisites
Planned learning activities and teaching methods
Assessment methods and criteria linked to learning outcomes
Course curriculum
Syllabus of lectures:
- Introduction to parallel processing.
- Patterns for parallel programming.
- Shared memory programming: introduction to OpenMP.
- Synchronization and performance awareness in OpenMP.
- Shared memory and cache coherency.
- Components of symmetric multiprocessors.
- CC-NUMA DSM architectures.
- Message Passing Interface (MPI).
- Collective communications, communicators, and disk operations.
- Interconnection networks: topology and routing algorithms.
- Interconnection networks: switching, flow control, message processing, and performance.
- Message-passing architectures and current supercomputer systems. Distributed file systems.
- Data-parallel architectures and programming.

Syllabus of computer exercises:
- Introduction to the Anselm and Salomon supercomputers.
- OpenMP: loops and sections (sketch 1 below).
- OpenMP: tasks and synchronization (sketch 2 below).
- MPI: point-to-point communications (sketch 3 below).
- MPI: collective communications (sketch 4 below).
- MPI: I/O, debuggers, profilers, and traces.

Syllabus - others, projects and individual work of students:
- Development of an application in OpenMP on a shared-memory (SMP) NUMA node.
- A parallel program in MPI on the supercomputer.
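Sketch 1, loops and sections: a minimal, self-contained illustration of the OpenMP work-sharing constructs named in the exercise list above. It is not course material; the array size and printed messages are arbitrary. Compile with, e.g., gcc -fopenmp.

```c
#include <stdio.h>
#include <omp.h>

#define N 1000000   /* arbitrary problem size for the illustration */

int main(void) {
    static double a[N];
    double sum = 0.0;

    /* Work-sharing loop: iterations are divided among the threads;
       reduction(+:sum) merges the per-thread partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = 0.5 * i;
        sum += a[i];
    }

    /* Sections: each block executes exactly once, on some thread. */
    #pragma omp parallel sections
    {
        #pragma omp section
        printf("section A ran on thread %d\n", omp_get_thread_num());
        #pragma omp section
        printf("section B ran on thread %d\n", omp_get_thread_num());
    }

    printf("sum = %.1f\n", sum);
    return 0;
}
```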
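Sketch 2, tasks and synchronization: OpenMP tasks are usually demonstrated on a recursive problem; naive Fibonacci is used here only because it is short, not because the course uses it. The taskwait directive is the synchronization point that waits for the two child tasks.

```c
#include <stdio.h>

/* Each recursive call becomes a task; a production code would stop
   creating tasks below a cutoff to limit the task-creation overhead. */
static long fib(int n) {
    long x, y;
    if (n < 2)
        return n;
    #pragma omp task shared(x)
    x = fib(n - 1);
    #pragma omp task shared(y)
    y = fib(n - 2);
    #pragma omp taskwait        /* wait for both child tasks */
    return x + y;
}

int main(void) {
    long result;
    #pragma omp parallel
    {
        #pragma omp single      /* a single thread spawns the task tree */
        result = fib(20);
    }
    printf("fib(20) = %ld\n", result);
    return 0;
}
```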
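Sketch 3, point-to-point communication: the canonical blocking MPI_Send/MPI_Recv pair between two ranks. The payload value is a placeholder. Run with at least two processes, e.g. mpirun -np 2 ./a.out.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* blocking send of one int to rank 1 with message tag 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* matching blocking receive from rank 0 */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```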
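Sketch 4, collective communication: a broadcast from rank 0 followed by a sum reduction back to rank 0, with every process contributing its own rank. The broadcast value is an arbitrary placeholder chosen for brevity.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size, root_data = 0, total = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0)
        root_data = 123;   /* arbitrary value distributed by the root */

    /* every process receives root_data from rank 0 */
    MPI_Bcast(&root_data, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* sum of all ranks, delivered to rank 0 */
    MPI_Reduce(&rank, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("broadcast value %d, sum of ranks 0..%d = %d\n",
               root_data, size - 1, total);

    MPI_Finalize();
    return 0;
}
```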
Work placements
Aims
Specification of controlled education, way of implementation and compensation for absences
- Missed labs can be made up on alternative dates (Monday or Friday).
- A make-up slot for missed labs will also be available in the last week of the semester.
Recommended optional programme components
Prerequisites and corequisites
Basic literature
Recommended reading
Classification of course in study plans
- Programme IT-MSC-2 Master's:
  - branch MBI, 0 year of study, summer semester, compulsory-optional
  - branch MSK, 1st year of study, summer semester, compulsory
  - branch MMM, 0 year of study, summer semester, elective
  - branch MBS, 0 year of study, summer semester, elective
  - branch MPV, 1st year of study, summer semester, compulsory
  - branch MIS, 0 year of study, summer semester, elective
  - branch MIN, 0 year of study, summer semester, elective
  - branch MGM, 0 year of study, summer semester, compulsory-optional
Type of course unit
Lecture
Teacher / Lecturer
Exercise in computer lab
Teacher / Lecturer