Scheduling jobs
Overview
Teaching: 45 min
Exercises: 30 min
Questions
What is a scheduler and why are they used?
How do I launch a program to run on any one node in the cluster?
How do I capture the output of a program that is run on a node in the cluster?
Objectives
Run a simple Hello World style program on the cluster.
Submit a simple Hello World style script to the cluster.
Use the batch system command line tools to monitor the execution of your job.
Inspect the output and error files of your jobs.
Job scheduler
An HPC system might have thousands of nodes and thousands of users. How do we decide who gets what and when? How do we ensure that a task is run with the resources it needs? This job is handled by a special piece of software called the scheduler. On an HPC system, the scheduler manages which jobs run where and when.
The following illustration compares these tasks of a job scheduler to a waiter in a restaurant. If you can relate to an instance where you had to wait a while in a queue to get into a popular restaurant, then you may now understand why your jobs sometimes do not start instantly, as they would on your laptop.
Job scheduling roleplay (optional)
Your instructor will divide you into groups taking on different roles in the cluster (users, compute nodes and the scheduler). Follow their instructions as they lead you through this exercise. You will be emulating how a job scheduling system works on the cluster.
The scheduler used in this lesson is SLURM. Although SLURM is not used everywhere, running jobs is quite similar regardless of what software is being used. The exact syntax might change, but the concepts remain the same.
Running a batch job
The most basic use of the scheduler is to run a command non-interactively. Any command (or series of commands) that you want to run on the cluster is called a job, and the process of using a scheduler to run the job is called batch job submission.
In this case, the job we want to run is just a shell script. Let’s create a demo shell script to run as a test.
Creating our test job
Using your favorite text editor, create the following script and run it. Does it run on the cluster or just our login node?
#!/bin/bash
#SBATCH --account=nn9988k --mem-per-cpu=2G --time=10:00
echo 'This script is running on:'
hostname
sleep 120
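If you run this script directly with bash, it executes on the machine you are logged in to. A sketch of what that looks like (the hostname printed here is illustrative; yours will differ):
[yourUsername@login-2.SAGA ~]$ bash example-job.sh
This script is running on:
login-2.saga.sigma2.no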
If you completed the previous challenge successfully, you probably realise that there is a distinction between running the job through the scheduler and just “running it”. To submit this job to the scheduler, we use the sbatch command.
[yourUsername@login-2.SAGA ~]$ sbatch example-job.sh
Submitted batch job 36855
And that’s all we need to do to submit a job. Our work is done; now the scheduler takes over and tries to run the job for us. While the job is waiting to run, it goes into a list of jobs called the queue. To check on our job’s status, we check the queue using the command squeue -u $USER.
[yourUsername@login-2.SAGA ~]$ squeue -u $USER
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
546187 normal test.slu sabryr R 0:50 1 c5-59
We can see all the details of our job, most importantly that it is in the “R” or “RUNNING” state. Sometimes our jobs might need to wait in a queue (“PENDING”) or have an error. The best way to check our job’s status is with squeue. Of course, running squeue repeatedly to check on things can be a little tiresome. To see a real-time view of our jobs, we can use the watch command. watch reruns a given command at 2-second intervals. This is too frequent, and will likely upset your system administrator. You can change the interval to a more reasonable value, for example 60 seconds, with the -n 60 parameter. Let’s try using it to monitor another job.
[yourUsername@login-2.SAGA ~]$ sbatch example-job.sh
[yourUsername@login-2.SAGA ~]$ watch -n 60 squeue -u $USER
You should see an auto-updating display of your job’s status. When it finishes, it will disappear from the queue. Press Ctrl-C when you want to stop the watch command.
Customising a job
The job we just ran used all of the scheduler’s default options. In a real-world scenario, that’s probably not what we want. The default options represent a reasonable minimum. Chances are, we will need more cores, more memory, more time, among other special considerations. To get access to these resources we must customize our job script.
Comments in UNIX (denoted by #) are typically ignored. But there are exceptions. For instance, the special #! comment at the beginning of scripts specifies what program should be used to run it (typically /bin/bash). Schedulers like SLURM also have a special comment used to denote scheduler-specific options. Though these comments differ from scheduler to scheduler, SLURM’s special comment is #SBATCH. Anything following the #SBATCH comment is interpreted as an instruction to the scheduler.
Let’s illustrate this by example. By default, a job’s name is the name of the script, but the --job-name option can be used to change the name of a job. Submit the following job (sbatch example-job.sh):
#!/bin/bash
#SBATCH --account=nn9988k --mem-per-cpu=2G --time=10:00
#SBATCH --job-name new_name
echo 'This script is running on:'
hostname
sleep 120
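To confirm that the option took effect, check the queue again. A sketch of the expected output (the job ID, time and node shown are illustrative):
[yourUsername@login-2.SAGA ~]$ sbatch example-job.sh
Submitted batch job 38191
[yourUsername@login-2.SAGA ~]$ squeue -u $USER
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
38191 normal new_name yourUser R 0:02 1 c5-59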
Fantastic, we’ve successfully changed the name of our job!
Setting up email notifications
Jobs on an HPC system might run for days or even weeks. We probably have better things to do than constantly check on the status of our job with squeue. Looking at the man page for sbatch, can you set up our test job to send you an email when it finishes? For help, see the manual: https://slurm.schedmd.com/sbatch.html
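A minimal sketch of one solution, using sbatch’s --mail-type and --mail-user options (replace the address with your own):
#!/bin/bash
#SBATCH --account=nn9988k --mem-per-cpu=2G --time=10:00
#SBATCH --mail-type=END                # send an email when the job finishes
#SBATCH --mail-user=you@example.com    # replace with your own address
echo 'This script is running on:'
hostname
sleep 120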
Resource requests
But what about more important changes, such as the number of cores and memory for our jobs? One thing that is absolutely critical when working on an HPC system is specifying the resources required to run a job. This allows the scheduler to find the right time and place to schedule our job. If you do not specify requirements (such as the amount of time you need), you will likely be stuck with your site’s default resources, which is probably not what you want.
The following are several key resource requests (a combined example script follows the list):
--nodes=<number of nodes> - How many nodes does your job need? (e.g. --nodes=1)
--cpus-per-task=<number of cpus> - How many CPUs does each task need? (e.g. --cpus-per-task=4)
--ntasks=<number of tasks> - How many tasks (at most) to allocate; by default each task gets one CPU core. (e.g. --ntasks=8)
--partition=<partition name> - The partition to be used; possible values are normal, bigmem, accel, optimist and devel. (e.g. --partition=bigmem) See https://documentation.sigma2.no/jobs/job_types.html
--mem-per-cpu=<memory> - How much memory to allocate for each CPU core, in megabytes by default, or in gigabytes if you add a “G” suffix. (e.g. --mem-per-cpu=4096 or --mem-per-cpu=4G)
--mem=<memory> - How much memory on a node does your job need, in megabytes by default, or in gigabytes if you add a “G” suffix. (e.g. --mem=4096 or --mem=4G)
--time=<days-hours:minutes:seconds> - How much real-world time (walltime) will your job take to run? The <days> part can be omitted. (e.g. --time=1-01:01:01 for one day, one hour, one minute and one second)
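Putting several of these together, a sketch of a job script that requests specific resources (the values are illustrative; adjust them to your own needs):
#!/bin/bash
#SBATCH --account=nn9988k
#SBATCH --partition=normal
#SBATCH --nodes=1                # one node
#SBATCH --ntasks=4               # four tasks (CPU cores)
#SBATCH --mem-per-cpu=2G         # 2 gigabytes per core
#SBATCH --time=0-01:00:00        # one hour of walltime
echo 'This script is running on:'
hostname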
Note that just requesting these resources does not make your job run faster! We’ll talk more about how to make sure that you’re using resources effectively in a later episode of this lesson.
Submitting resource requests
Submit a job that will use 1 full node and 5 minutes of walltime.
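One possible solution, a sketch assuming a node with 40 CPU cores (the core count is an assumption; check your cluster’s documentation for the actual number):
#!/bin/bash
#SBATCH --account=nn9988k --mem-per-cpu=2G
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40     # assumed core count of a full node
#SBATCH --time=00:05:00          # 5 minutes of walltime
echo 'This script is running on:'
hostname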
Job environment variables
When SLURM runs a job, it sets a number of environment variables for the job. One of these will let us check our work from the last problem. The SLURM_CPUS_PER_TASK variable is set to the number of CPUs we requested with --cpus-per-task. Using the SLURM_CPUS_PER_TASK variable, modify your job so that it prints how many CPUs have been allocated.
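A minimal sketch of a solution:
#!/bin/bash
#SBATCH --account=nn9988k --mem-per-cpu=2G --time=10:00
#SBATCH --cpus-per-task=4
echo 'This script is running on:'
hostname
echo "This job was allocated $SLURM_CPUS_PER_TASK CPUs"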
Resource requests are typically binding. If you exceed them, your job will be killed. Let’s use walltime as an example. We will request 30 seconds of walltime, and attempt to run a job for two minutes.
#!/bin/bash
#SBATCH --account=nn9988k --mem-per-cpu=2G
#SBATCH --time=00:30
#SBATCH --job-name new_name
echo 'This script is running on:'
hostname
sleep 120
Submit the job and wait for it to finish. Once it has finished, check the log file.
[yourUsername@login-2.SAGA ~]$ sbatch example-job.sh
[yourUsername@login-2.SAGA ~]$ watch -n 60 squeue -u $USER
[yourUsername@login-2.SAGA ~]$ cat slurm-38193.out
This script is running on:
gra533
slurmstepd: error: *** JOB 38193 ON gra533 CANCELLED AT 2017-07-02T16:35:48 DUE TO TIME LIMIT ***
Our job was killed for exceeding the amount of resources it requested. Although this appears harsh, this is actually a feature. Strict adherence to resource requests allows the scheduler to find the best possible place for your jobs. Even more importantly, it ensures that another user cannot use more resources than they’ve been given. If another user messes up and accidentally attempts to use all of the cores or memory on a node, SLURM will either restrain their job to the requested resources or kill the job outright. Other jobs on the node will be unaffected. This means that one user cannot mess up the experience of others, the only jobs affected by a mistake in scheduling will be their own.
Cancelling a job
Sometimes we’ll make a mistake and need to cancel a job. This can be done with the scancel command. Let’s submit a job and then cancel it using its job number (remember to change the walltime so that it runs long enough for you to cancel it before it is killed!).
[yourUsername@login-2.SAGA ~]$ sbatch example-job.sh
Submitted batch job 38759
[yourUsername@login-2.SAGA ~]$ squeue -u $USER
JOBID USER ACCOUNT NAME ST REASON START_TIME TIME TIME_LEFT NODES CPUS
38759 yourUsername yourAccount example-job.sh PD Priority N/A 0:00 1:00 1 1
Now cancel the job with its job number. Absence of any job info indicates that the job has been successfully cancelled.
[yourUsername@login-2.SAGA ~]$ scancel 38759
... Note that it might take a minute for the job to disappear from the queue ...
[yourUsername@login-2.SAGA ~]$ squeue -u $USER
JOBID USER ACCOUNT NAME ST REASON START_TIME TIME TIME_LEFT NODES CPUS
Cancelling multiple jobs
We can also cancel all of our jobs at once using the -u option. This will delete all jobs for a specific user (in this case, us). Note that you can only delete your own jobs. Try submitting multiple jobs and then cancelling them all with scancel -u yourUsername.
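For example, a sketch of such a session (following the prompt convention used above):
[yourUsername@login-2.SAGA ~]$ sbatch example-job.sh
[yourUsername@login-2.SAGA ~]$ sbatch example-job.sh
[yourUsername@login-2.SAGA ~]$ sbatch example-job.sh
[yourUsername@login-2.SAGA ~]$ scancel -u yourUsername
[yourUsername@login-2.SAGA ~]$ squeue -u $USER
JOBID USER ACCOUNT NAME ST REASON START_TIME TIME TIME_LEFT NODES CPUS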
Other types of jobs
Up to this point, we’ve focused on running jobs in batch mode. SLURM also provides the ability to start an interactive session.
Very frequently, there are tasks that need to be done interactively. Creating an entire job script might be overkill, but the amount of resources required is too much for a login node to handle. A good example of this might be building a genome index for alignment with a tool like HISAT2. Fortunately, we can run these types of tasks as a one-off with srun.
srun runs a single command on the cluster and then exits. Let’s demonstrate this by running the hostname command with srun. (We can cancel an srun job with Ctrl-c.)
[yourUsername@login-2.SAGA ~]$ srun --account=nn9988k --mem-per-cpu=1G --time=10:00 hostname
c1-1
srun accepts all of the same options as sbatch. However, instead of specifying these in a script, these options are specified on the command line when starting a job. To submit a job that uses 2 CPUs, for instance, we could use the following command:
[yourUsername@login-2.SAGA ~]$ srun --account=nn9988k --mem-per-cpu=1G --time=1:00 --cpus-per-task=2 echo "This job will use 2 CPUs."
This job will use 2 CPUs.
Typically, the resulting shell environment will be the same as that for sbatch.
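We can verify this by asking srun to print one of the SLURM environment variables discussed earlier (a quick sketch; the exact set of variables may vary between installations):
[yourUsername@login-2.SAGA ~]$ srun --account=nn9988k --mem-per-cpu=1G --time=1:00 --cpus-per-task=2 env | grep SLURM_CPUS_PER_TASK
SLURM_CPUS_PER_TASK=2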
Interactive jobs
Sometimes, we will need access to compute nodes for interactive use. Perhaps it’s our first time running an analysis or we are attempting to debug something that went wrong with a previous job. Fortunately, SLURM makes it easy to start an interactive job with srun:
[yourUsername@login-2.SAGA ~]$ srun --account=nn9988k --mem-per-cpu=1G --time=30:00 --cpus-per-task=2 --pty bash
You should be presented with a bash prompt. Note that the prompt will likely change to reflect your new location, in this case the compute node we are logged on to. You can also verify this with hostname.
Creating remote graphics
To see graphical output inside your jobs, you need to use X11 forwarding. To connect with this feature enabled, use the -Y option when you log in with ssh, i.e. ssh -Y username@host.
To demonstrate what happens when you create a graphics window on the remote node, use the xeyes command. A relatively adorable pair of eyes should pop up (press Ctrl-c to stop). If you are using a Mac, you must have installed XQuartz (and restarted your computer) for this to work.
If your cluster has the slurm-spank-x11 plugin installed, you can ensure X11 forwarding within interactive jobs by using the --x11 option for srun, i.e. srun --x11 --pty bash.
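Putting it all together, a sketch of an end-to-end X11 session (the login address, account and node name are illustrative):
[yourLaptop ~]$ ssh -Y yourUsername@saga.sigma2.no
[yourUsername@login-2.SAGA ~]$ srun --account=nn9988k --mem-per-cpu=1G --time=10:00 --x11 --pty bash
[yourUsername@c1-1 ~]$ xeyes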
When you are done with the interactive job, type exit to quit your session.
Key Points
The scheduler handles how compute resources are shared between users.
Everything you do should be run through the scheduler.
A job is just a shell script.
If in doubt, request more resources than you will need.