Preparation
1. How to log in to the MeluXina machine
2. Use your username to connect to MeluXina
- 2.1 For example, the examples below all use the user `u100490`.
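A minimal sketch of the connection step (assuming MeluXina's standard SSH endpoint, `login.lxp.lu` on port 8822; check your account e-mail for the exact connection details):

```
[user@laptop ~]$ ssh u100490@login.lxp.lu -p 8822
```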
3. Once you have logged in
- 3.1 After logging in, you land in your default home directory.
- 3.2 From there, go to the project directory, as sketched below.
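A minimal sketch, using the project path `/project/home/p200117` that appears in the listings later in this section:

```
[u100490@login03 ~]$ cd /project/home/p200117
[u100490@login03 p200117]$ pwd
/project/home/p200117
```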
4. Create your own working folder under the project directory
- 4.1 For example, here is the user `u100490`:
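A minimal sketch (the folder name simply matches the username, as in the later listings):

```
[u100490@login03 p200117]$ mkdir u100490
```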
5. Now move into your working directory under the project path
- 5.1 For example, for the user `u100490`:
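A minimal sketch, matching the prompt and path shown in the listing of the next step:

```
[u100490@login03 p200117]$ cd u100490
[u100490@login03 u100490]$ pwd
/project/home/p200117/u100490
```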
6. Copy the folder containing the examples and source files into your working directory
- 6.1 For example, for the user `u100490`:
```
[u100490@login03 u100490]$ cp -r /project/home/p200117/OpenACC .
[u100490@login03 u100490]$ cd OpenACC/
[u100490@login03 OpenACC]$ pwd
/project/home/p200117/u100490/OpenACC
[u100490@login03 OpenACC]$ ls -lthr
total 20K
-rw-r-----. 1 u100490 p200117   51 Mar 13 15:50 module.sh
drwxr-s---. 2 u100490 p200117 4.0K Mar 13 15:50 Vector-addition
drwxr-s---. 2 u100490 p200117 4.0K Mar 13 15:50 Unified-memory
...
...
```
7. Until now you have been on the login node; now it is time to do the dry-run test
- 7.1 Reserve an interactive node for running/testing OpenACC applications, and check that your reservation has been allocated:
```
[u100490@login03 ~]$ salloc -A p200117 --res p200117-openacc-1 --partition=gpu --qos default -N 1 -t 01:00:00
salloc: Pending job allocation 296848
salloc: job 296848 queued and waiting for resources
salloc: job 296848 has been allocated resources
salloc: Granted job allocation 296848
salloc: Waiting for resource configuration
salloc: Nodes mel2131 are ready for job
```
- 7.2 You can also check that you actually got the interactive node for your computations, for example here with the user `u100490`:
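A minimal sketch using the standard Slurm `squeue` command; the output below is illustrative, based on the job and node IDs from the allocation above:

```
[u100490@login03 ~]$ squeue -u u100490
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
            296848       gpu interact  u100490  R       0:12      1 mel2131
```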
8. Now check whether the simple OpenACC application works for you:
- 8.1 Go to the folder `Dry-run-test`, as sketched below.
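A minimal sketch, using the working-directory path from the earlier listings (the prompt assumes you are on the allocated node `mel2131`):

```
[u100490@mel2131 ~]$ cd /project/home/p200117/u100490/OpenACC/Dry-run-test
```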
9. Finally, load the compiler to test the GPU OpenACC codes
- 9.1 We need the NVIDIA HPC SDK compiler for compiling and testing OpenACC code; load the module and check that it is loaded properly:
```
[u100490@mel2131 ~]$ module load OpenMPI/4.1.4-NVHPC-22.7-CUDA-11.7.0
[u100490@mel2131 ~]$ module list

Currently Loaded Modules:
  1) env/release/2022.1 (S)
  2) lxp-tools/myquota/0.3.1 (S)
  3) GCCcore/11.3.0
  4) zlib/1.2.12-GCCcore-11.3.0
  5) binutils/2.38-GCCcore-11.3.0
  6) numactl/2.0.14-GCCcore-11.3.0
  7) CUDA/11.7.0
  8) NVHPC/22.7-CUDA-11.7.0
  9) XZ/5.2.5-GCCcore-11.3.0
 10) libxml2/2.9.13-GCCcore-11.3.0
 11) libpciaccess/0.16-GCCcore-11.3.0
 12) hwloc/2.7.1-GCCcore-11.3.0
 13) OpenSSL/1.1
 14) libevent/2.1.12-GCCcore-11.3.0
 15) UCX/1.13.1-GCCcore-11.3.0
 16) GDRCopy/2.3-GCCcore-11.3.0
 17) UCX-CUDA/1.13.1-GCCcore-11.3.0-CUDA-11.7.0
 18) libfabric/1.15.1-GCCcore-11.3.0
 19) PMIx/4.2.2-GCCcore-11.3.0
 20) xpmem/2.6.5-36-GCCcore-11.3.0
 21) knem/1.1.4.90-GCCcore-11.3.0
 22) OpenMPI/4.1.4-NVHPC-22.7-CUDA-11.7.0

  Where:
   S:  Module is Sticky, requires --force to unload or purge
```
10. Please compile and test your CUDA application
- 10.1 For example, the `Dry-run-test` case, as sketched below.
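A minimal sketch, mirroring the compile-and-run pattern of the Hello-world example later in this section; the source file name `dry-run-test.c` is hypothetical, so substitute whatever file the `Dry-run-test` folder actually contains:

```
# compile for the GPU with the NVIDIA HPC SDK compiler (cc80 targets the A100)
[u100490@mel2131 Dry-run-test]$ nvc -fast -acc=gpu -gpu=cc80 -Minfo=accel dry-run-test.c -o dry-run-test
# run on the reserved GPU node
[u100490@mel2131 Dry-run-test]$ ./dry-run-test
```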
11. Similarly, for the hands-on session, we need to do the node reservation:
- 11.1 For example, make the reservation and check that it has been allocated:
```
[u100490@login03 ~]$ salloc -A p200117 --res p200117-openacc-2 --partition=gpu --qos default -N 1 -t 02:30:00
salloc: Pending job allocation 296848
salloc: job 296848 queued and waiting for resources
salloc: job 296848 has been allocated resources
salloc: Granted job allocation 296848
salloc: Waiting for resource configuration
salloc: Nodes mel2131 are ready for job
```
12. We will continue with our hands-on exercise
- 12.1 For example, in the `Hello World` example, we do the following steps:

```
[u100490@mel2063 OpenACC]$ pwd
/project/home/p200117/u100490/OpenACC
[u100490@mel2063 OpenACC]$ ls
Dry-run-test  Matrix-multiplication  Profiling        Unified-memory
Hello-world   module.sh              Vector-addition
[u100490@mel2063 OpenACC]$ source module.sh
[u100490@mel2063 OpenACC]$ cd Hello-world

# compilation
[u100490@mel2063 Hello-world]$ nvc -fast -acc=gpu -gpu=cc80 -Minfo=accel Hello-world.c -o Hello-World-GPU

# execution
[u100490@mel2063 Hello-world]$ ./Hello-World-GPU

# output
Hello World from GPU
```
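For reference, here is a minimal sketch of what such an OpenACC hello-world source could look like; this is an assumed illustration rather than necessarily the course's actual `Hello-world.c` (the NVIDIA HPC compilers support `printf` inside OpenACC compute regions):

```c
#include <stdio.h>

int main(void)
{
    /* Offload a single gang to the GPU; the message is printed
       from device code. */
    #pragma acc parallel num_gangs(1)
    {
        printf("Hello World from GPU\n");
    }
    return 0;
}
```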