
Commit 5151ced
Merge pull request #627 from ARCHER2-HPC/aturner-epcc/gpu-fix-613
Updates example script. Closes #613
juanfrh authored Oct 8, 2024
2 parents a821cd0 + 287613d commit 5151ced
Showing 1 changed file with 10 additions and 10 deletions.
docs/user-guide/gpu.md (20 changes: 10 additions & 10 deletions)
@@ -547,9 +547,6 @@ on the compute node architecture.
 #SBATCH --partition=gpu
 #SBATCH --qos=gpu-exc
 
-# Enable GPU-aware MPI
-export MPICH_GPU_SUPPORT_ENABLED=1
-
 # Check assigned GPU
 srun --ntasks=1 rocm-smi
 
@@ -559,6 +556,9 @@ srun --ntasks=4 --cpus-per-task=8 \
     --hint=nomultithread --distribution=block:block \
     xthi
 
+# Enable GPU-aware MPI
+export MPICH_GPU_SUPPORT_ENABLED=1
+
 srun --ntasks=4 --cpus-per-task=8 \
     --hint=nomultithread --distribution=block:block \
     ./my_gpu_program.x
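
For reference, here is a sketch of the single-node example script as it reads after the two hunks above, assembled from the new-side diff lines. The `#!/bin/bash` line and the `--nodes=1`/`--exclusive` options are assumptions (implied by the `gpu-exc` note at the end of this diff), not lines shown in the hunks:

```bash
#!/bin/bash
# Reconstruction of the post-change single-node script; the two #SBATCH
# lines marked "assumed" below do not appear in the diff hunks.
#SBATCH --nodes=1        # assumed, per the gpu-exc note below
#SBATCH --exclusive      # assumed, per the gpu-exc note below
#SBATCH --partition=gpu
#SBATCH --qos=gpu-exc

# Check assigned GPU
srun --ntasks=1 rocm-smi

# Check process/thread pinning
module load xthi
srun --ntasks=4 --cpus-per-task=8 \
    --hint=nomultithread --distribution=block:block \
    xthi

# Enable GPU-aware MPI (this commit moves the export here, just before
# the application launch, instead of before the rocm-smi check)
export MPICH_GPU_SUPPORT_ENABLED=1

srun --ntasks=4 --cpus-per-task=8 \
    --hint=nomultithread --distribution=block:block \
    ./my_gpu_program.x
```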
@@ -592,31 +592,31 @@ on the compute node architecture.
 #SBATCH --partition=gpu
 #SBATCH --qos=gpu-exc
 
-# Enable GPU-aware MPI
-export MPICH_GPU_SUPPORT_ENABLED=1
-
 # Check assigned GPU
 nodelist=$(scontrol show hostname $SLURM_JOB_NODELIST)
 for nodeid in $nodelist
 do
    echo $nodeid
-   srun --ntasks=1 --nodelist=$nodeid rocm-smi
+   srun --ntasks=1 --gpus=4 --nodes=1 --ntasks-per-node=1 --nodelist=$nodeid rocm-smi
 done
 
 # Check process/thread pinning
 module load xthi
-srun --ntasks=8 --cpus-per-task=8 \
+srun --ntasks-per-node=4 --cpus-per-task=8 \
     --hint=nomultithread --distribution=block:block \
     xthi
 
-srun --ntasks=8 --cpus-per-task=8 \
+# Enable GPU-aware MPI
+export MPICH_GPU_SUPPORT_ENABLED=1
+
+srun --ntasks-per-node=4 --cpus-per-task=8 \
     --hint=nomultithread --distribution=block:block \
     ./my_gpu_program.x
 ```
 
 !!! note
     When you use the `--qos=gpu-exc` QoS you must also add the `--exclusive` flag
-    and then specify the number of nodes you want with `--nodes=1`.
+    and then specify the number of nodes you want with, for example, `--nodes=2`.
 
 ### Interactive jobs
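And a matching sketch of the multi-node example script after this change, again assembled from the new-side diff lines; `--nodes=2` and `--exclusive` follow the updated note rather than appearing in the hunks themselves:

```bash
#!/bin/bash
# Reconstruction of the post-change multi-node script; --nodes and
# --exclusive are assumed from the updated gpu-exc note.
#SBATCH --nodes=2        # assumed, per the updated note
#SBATCH --exclusive      # assumed, per the updated note
#SBATCH --partition=gpu
#SBATCH --qos=gpu-exc

# Check the GPUs assigned on each node in the allocation
nodelist=$(scontrol show hostname $SLURM_JOB_NODELIST)
for nodeid in $nodelist
do
   echo $nodeid
   srun --ntasks=1 --gpus=4 --nodes=1 --ntasks-per-node=1 --nodelist=$nodeid rocm-smi
done

# Check process/thread pinning
module load xthi
srun --ntasks-per-node=4 --cpus-per-task=8 \
    --hint=nomultithread --distribution=block:block \
    xthi

# Enable GPU-aware MPI just before the application launch
export MPICH_GPU_SUPPORT_ENABLED=1

srun --ntasks-per-node=4 --cpus-per-task=8 \
    --hint=nomultithread --distribution=block:block \
    ./my_gpu_program.x
```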
