
Introduction to OpenACC for Heterogeneous Computing

In this course, participants will learn GPU programming with the OpenACC programming model, including its compute constructs, loop constructs, and data clauses. They will also study the GPU architecture and how parallel thread blocks are created and used to parallelise computational tasks. Because the GPU is an accelerator, a good understanding of memory management between the GPU and the CPU is essential, and this will also be discussed in detail. Finally, participants will learn to use OpenACC to accelerate linear algebra routines and iterative solvers on the GPU. The theory is covered first; participants then implement the OpenACC programming model with mentors' guidance in the hands-on tutorial part.

Learning outcomes

After this course, participants will be able to:
  • Understand the GPU architecture (and the differences between GPU and CPU)
    • Streaming architecture
    • Thread blocks
  • Implement the OpenACC programming model
    • Compute constructs
    • Loop constructs
    • Data clauses
  • Manage memory efficiently
    • Host to Device
    • Unified memory
  • Apply OpenACC programming knowledge to accelerate examples from science and engineering:
    • Iterative solvers
    • Vector addition, vector multiplication, etc.


Priority will be given to users with solid experience in C/C++ and/or Fortran. No GPU programming knowledge is required; however, familiarity with the OpenMP programming model is an advantage.

GPU Compute Resource

Participants attending the event will be given access to the MeluXina supercomputer during the session. To learn more about MeluXina, please consult the MeluXina overview and the MeluXina – Getting Started Guide.

Last update: January 31, 2024 09:18:25
Created: March 27, 2023 12:29:54