#!/bin/bash
# The queue that we want to submit our job to.
# In this case it is the V100 GPUs (with up to 32 GB of memory each),
# but there are also equivalent queues for the A100s, such as gpua100.
#BSUB -q gpuv100
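# Hedged note: the available queues and their limits can be inspected
# with the standard LSF `bqueues` command, e.g.:
#   bqueues
#   bqueues -l gpuv100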
# The name of the job. It is shown e.g. by the `bstat` command.
#BSUB -J p_m_s_n
# The number of cores that we want to allocate. This is the total
# number of cores for the whole job, so if we wish to run 32 threads
# on each of 4 nodes, this number needs to be 128 (see the commented
# sketch below).
#BSUB -n 4
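# A commented sketch of the 4-node example above (an assumption about
# the cluster layout, not part of this job): `span[ptile=32]` is the
# standard LSF way to place 32 cores per host, so with -n 128 the job
# spans 4 hosts. It would replace the `span[hosts=1]` directive below.
# #BSUB -n 128
# #BSUB -R "span[ptile=32]"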
# Specifics on the GPU allocation.
# In this case one GPU with exclusive access.
# The `num` can be changed up to the total number of GPUs on the node.
#BSUB -gpu "num=1:mode=exclusive_process"
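# Only `num` changes when requesting more GPUs; e.g. two GPUs with
# exclusive access (assuming the node has at least two):
# #BSUB -gpu "num=2:mode=exclusive_process"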
# Total walltime of the job, i.e. how long we expect it to run.
# When this time has passed the job WILL be killed if it has not
# completed.
#BSUB -W 01:00
# How much memory we want to allocate per allocated core.
# In this case we get 4 x 16 GB = 64 GB in total, as we have 4 cores.
#BSUB -R "rusage[mem=16GB]"
# How we want the cores distributed.
# `hosts=1` means that we want all cores on the same node.
#BSUB -R "span[hosts=1]"
# stdout and stderr files; `%J` expands to the numeric LSF job ID.
#BSUB -o "out/jobs/mobility_timesteps_nodet_%J.out"
#BSUB -e "out/jobs/mobility_timesteps_nodet_%J.err"
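# Make sure the log directory exists before anything is written to it.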
mkdir -p out/jobs
./out/mobility_timesteps_nodet bench > out/jobs/mobility_timesteps_nodet_${LSB_JOBID}.log
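
# Usage sketch (assuming this file is saved as `job`): submit the
# script by piping it to `bsub`, then monitor or cancel it with the
# standard LSF tooling mentioned above.
#   bsub < job
#   bstat            # show the status of our jobs
#   bkill <jobid>    # kill the job before the walltime is up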