Updates example script. Closes #613
aturner-epcc authored Sep 12, 2024
1 parent bbb1676 commit 287613d
Showing 1 changed file with 10 additions and 10 deletions.
docs/user-guide/gpu.md
@@ -547,9 +547,6 @@ on the compute node architecture.
#SBATCH --partition=gpu
#SBATCH --qos=gpu-exc
-# Enable GPU-aware MPI
-export MPICH_GPU_SUPPORT_ENABLED=1
# Check assigned GPU
srun --ntasks=1 rocm-smi
@@ -559,6 +556,9 @@ srun --ntasks=4 --cpus-per-task=8 \
--hint=nomultithread --distribution=block:block \
xthi
+# Enable GPU-aware MPI
+export MPICH_GPU_SUPPORT_ENABLED=1
srun --ntasks=4 --cpus-per-task=8 \
--hint=nomultithread --distribution=block:block \
./my_gpu_program.x
@@ -592,31 +592,31 @@ on the compute node architecture.
#SBATCH --partition=gpu
#SBATCH --qos=gpu-exc
-# Enable GPU-aware MPI
-export MPICH_GPU_SUPPORT_ENABLED=1
# Check assigned GPU
nodelist=$(scontrol show hostname $SLURM_JOB_NODELIST)
for nodeid in $nodelist
do
echo $nodeid
-   srun --ntasks=1 --nodelist=$nodeid rocm-smi
+   srun --ntasks=1 --gpus=4 --nodes=1 --ntasks-per-node=1 --nodelist=$nodeid rocm-smi
done
# Check process/thread pinning
module load xthi
-srun --ntasks=8 --cpus-per-task=8 \
+srun --ntasks-per-node=4 --cpus-per-task=8 \
--hint=nomultithread --distribution=block:block \
xthi
-srun --ntasks=8 --cpus-per-task=8 \
+# Enable GPU-aware MPI
+export MPICH_GPU_SUPPORT_ENABLED=1
+srun --ntasks-per-node=4 --cpus-per-task=8 \
--hint=nomultithread --distribution=block:block \
./my_gpu_program.x
```

!!! note
When you use the `--qos=gpu-exc` QoS you must also add the `--exclusive` flag
-    and then specify the number of nodes you want with `--nodes=1`.
+    and then specify the number of nodes you want with, for example, `--nodes=2`.
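
For context on the note above, here is a minimal sketch of the header such an exclusive multi-node GPU job script might start with. The job name, walltime, and account values are placeholders (not taken from the diff), and any GPU request directives the full example in the guide may use are omitted here.

```bash
#!/bin/bash
#SBATCH --job-name=multi_gpu_job   # placeholder job name
#SBATCH --time=00:20:00            # placeholder walltime
#SBATCH --account=t01              # placeholder budget code
#SBATCH --partition=gpu
#SBATCH --qos=gpu-exc
#SBATCH --exclusive                # required when using the gpu-exc QoS
#SBATCH --nodes=2                  # number of GPU nodes, as in the updated note
```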

### Interactive jobs

