
Introduction to OpenMP Programming for Shared Memory Parallel Architecture

Participants in this course will learn multicore (shared memory) CPU programming with the OpenMP programming model, covering parallel regions, environment routines, and data sharing. They will also study the multicore shared memory architecture and how parallel threads are used to parallelise computational tasks. Because work is divided among many threads, proper work sharing and synchronisation of parallel execution are covered in detail. Finally, participants will learn to use the OpenMP programming model to accelerate linear algebra routines and iterative solvers on a multicore CPU. The theory is presented first; participants then implement the OpenMP programming model with mentors' guidance in the hands-on tutorial part.
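As a first taste of these ingredients, the sketch below shows a minimal OpenMP parallel region in C with explicit data-sharing clauses and two standard runtime (environment) routines, omp_get_thread_num and omp_get_num_threads. It is illustrative only and not part of the course material; compile it with an OpenMP-enabled compiler (e.g. gcc -fopenmp).

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        int n_threads = 0;                  /* shared between all threads */

        /* Parallel region: every thread executes the enclosed block.   */
        /* n_threads is shared; tid, declared inside, is private.       */
        #pragma omp parallel default(none) shared(n_threads)
        {
            int tid = omp_get_thread_num();     /* runtime routine */

            #pragma omp single                  /* one thread writes the count */
            n_threads = omp_get_num_threads();

            printf("Hello from thread %d\n", tid);
        }

        printf("The region was executed by %d threads\n", n_threads);
        return 0;
    }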

Learning outcomes

After this course, participants will be able to:
  • Understand the shared memory architecture
    • Uniform Memory Access (UMA) and Non-Uniform Memory Access (NUMA)
    • Hybrid distributed shared memory architecture
  • Implement OpenMP programming model
    • Parallel region
    • Environment routines
    • Data sharing
  • Efficient handling of OpenMP constructs
    • Work sharing
    • Synchronisation constructs
    • Single Instruction Multiple Data (SIMD) directive
  • Apply the OpenMP programming knowledge to parallelise examples from science and engineering (see the sketch after this list):
    • Iterative solvers
    • Vector addition, vector multiplication, etc.
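To illustrate the work-sharing, synchronisation, and SIMD topics listed above, the sketch below parallelises a simple vector addition in C with a combined parallel for simd construct and a reduction clause for the synchronised sum. The array size N and the printed checksum are illustrative choices under assumed defaults, not prescriptions from the course.

    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double a[N], b[N], c[N];
        double sum = 0.0;

        /* Work sharing: loop iterations are divided among the threads.   */
        /* simd asks each thread to vectorise its chunk; reduction(+:sum) */
        /* combines the per-thread partial sums safely at the end.        */
        #pragma omp parallel for simd reduction(+:sum)
        for (int i = 0; i < N; i++) {
            a[i] = (double)i;
            b[i] = 2.0 * (double)i;
            c[i] = a[i] + b[i];              /* vector addition */
            sum += c[i];
        }

        printf("sum of c = %.1f\n", sum);
        return 0;
    }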

Prerequisites

Priority will be given to participants with good experience in C/C++ and/or Fortran. No prior parallel programming experience is needed.

CPU Compute Resource

Participants attending the event will be given access to the MeluXina supercomputer during the session. To learn more about MeluXina, please consult the MeluXina overview and the MeluXina – Getting Started Guide.

