# Scheduling Jobs

`klone` uses the Slurm job scheduler. When you first ssh into `klone` you land on one of the two login nodes (e.g., `klone-login01`). Login nodes are shared among all users for transferring data, navigating the file system, and requesting resource slices on which to perform heavy-duty computing. You should never use the login nodes themselves for heavy computing; automated mechanisms monitor the login nodes and act on violations. The tool used to notify users of violations is "arbiter2", and you will receive an email for each offending process (Gardner, Migacz, and Haymore 2019).
To keep the login nodes in stable working order and ensure fair usage of them as a community resource, Hyak uses job scheduling software that gives you access to other nodes (i.e., different computers that are part of the `klone` cluster). The job scheduler is called Slurm, and regular users of Hyak need to learn how to use Slurm to make effective and efficient use of Hyak as a resource for research computing.
Check out our tutorial focused on Slurm
If you are new to Hyak and its job scheduler, Slurm, you may find our Slurm tutorial helpful to walk you through basic and advanced usage. Click here to jump to the tutorial.
## Compute Resources

The Slurm scheduler has two high-level concepts you need to know: accounts and partitions.
### Accounts

With the `hyakalloc` command [source code here] you can see not only which accounts you are able to submit jobs to but also their current utilization. Resource limits are directly proportional to what was contributed by that group.
### Partitions

While you won't necessarily have access to all of them, it can be useful to see a list of Hyak's partitions. The `sinfo` command shows information about the servers or nodes that compose Hyak, and `sinfo -s` gives you a summary of this information, including the partitions and the hostnames that fall into each partition.

Each partition represents a class of node, from the standard `compute` partition to those with high memory or with different types of GPUs.
## Job Types

There are a few popular types of jobs you could submit:
- interactive jobs, where you can test out your workflows live,
- batch jobs, which run unattended (you get an email when they complete), and
- recurring or "CRON-like" jobs, processes that happen on a regular basis.
## Slurm Arguments

At a minimum, these are the common and recommended arguments to submit a job in any form.

important
If you are using an interactive node to run a parallel application such as Python multiprocessing, MPI, OpenMP, etc., then the number given for the `--ntasks-per-node` option must match the number of processes used by your application.
| Argument | Command Flags | Notes |
|---|---|---|
| Account | `-A` or `--account` | What lab are you part of? If you run the `groups` command you can see what groups (usually labs) you're a member of; these are associated with resource limits on the cluster. See the accounts section for additional information. |
| Partition | `-p` or `--partition` | What resource partition are you interested in using? This could be anything you see when you run `sinfo -s`, as each partition corresponds to a class of nodes (e.g., high memory, GPU). See the partitions section for additional information. |
| Nodes | `-N` or `--nodes` | How many nodes are these resources spread across? In the overwhelming majority of cases this is 1 (a single node), but more sophisticated multi-node jobs can be run if your code supports it. |
| Cores | `-c` or `--cpus-per-task` | How many compute cores do you need? Not all codes can make use of multiple cores, and when they do, performance does not always scale linearly with the resources requested. If in doubt, consider contacting the research computing team to assist in this optimization. |
| Memory | `--mem` | How much memory do you need for this job? This is in the format `size[units]`, where size is a number and units are `M`, `G`, or `T` for megabytes, gigabytes, and terabytes respectively. Megabytes are the default unit if none is provided. |
| Time | `-t` or `--time` | What's the maximum runtime for this job? Common acceptable time formats include `hours:minutes:seconds`, `days-hours`, and `minutes`. |
## Interactive Jobs (Single Node)

Resources for interactive jobs are obtained using `salloc`. To request a compute node from the Checkpoint all partition (`ckpt-all`) interactively, consider the example below.
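As a sketch, such a request might look like the following (the account name `mylab` is a placeholder; substitute your own group's account):

```bash
# Sketch of an interactive request: 1 node, 4 cores, 10G of memory, 2.5 hours,
# on the ckpt-all partition under the placeholder account "mylab".
# (You could instead pass your group's own partition, e.g. -p compute.)
salloc -A mylab -p ckpt-all -N 1 -c 4 --mem=10G --time=2:30:00
```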
In this case you are requesting a slice of the standard compute node class that your group `mylab` contributed to the cluster. You are asking for 4 compute cores with 10 GB of memory for 2 hours and 30 minutes, spread across 1 node (a single machine). The `salloc` command will automatically create an interactive shell session on an allocated node.
## Interactive Jobs (Multi Node)

Building upon the previous section, if `-N` or `--nodes` is >1 when running `salloc`, you are automatically placed into a shell on one of the allocated nodes. This shell is NOT part of a Slurm task. To view the names of the remainder of your allocated nodes, use `scontrol show hostnames`. The `srun` command can be used to execute a command on all of the allocated nodes, as shown in the example session below.
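A sketch of such a session, assuming placeholder account, partition, and resource values:

```bash
# Request 2 nodes interactively; salloc drops you into a shell on one of them.
salloc -A mylab -p ckpt-all -N 2 -c 4 --mem=10G --time=1:00:00

# From that shell, list the hostnames of all nodes in the allocation.
scontrol show hostnames

# Run a command once per allocated node (here, just `hostname`).
srun --ntasks-per-node=1 hostname
```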
## Interactive Node Partitions

If your group has an interactive node, use the option `-p <partition_name>-int` as in the example below. If you are unsure whether your group has an interactive node, run `hyakalloc` and it will appear if you have one.
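For instance, assuming a group named `mylab` whose interactive partition would therefore be `mylab-int` (both names are placeholders):

```bash
# Request a small interactive session on the group's interactive node partition.
salloc -A mylab -p mylab-int -N 1 -c 2 --mem=8G --time=1:00:00
```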
note
- If you are not allocated a session with the specified `--mem` value, try smaller memory values.

For more details, read the `salloc` man page.
## Slurm Environment Variables

When a job scheduled by Slurm begins, it needs to know how it was scheduled, what its working directory is, who submitted the job, the number of nodes and cores allocated to it, etc. This information is passed to the job by Slurm via environment variables. Additionally, these environment variables are used as default values by programs like `mpirun`. To view a node's Slurm environment variables, use `export | grep SLURM`.
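As a sketch, a job script might read a few of these standard variables, for example to size a threaded program to its allocation (the program name is a placeholder):

```bash
# Report where the job landed and match the thread count to the allocated cores.
echo "Job ${SLURM_JOB_ID} on node(s): ${SLURM_JOB_NODELIST}"
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
./my_threaded_app   # placeholder for your own program
```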
A comprehensive list of the environment variables Slurm sets for each job can be found at the end of the `sbatch` man page.
## Batch Jobs

### Single Node Batch Jobs

Below is a Slurm script template. Submit a batch job from the `klone` login node by calling `sbatch <script_name>.slurm`.
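A minimal sketch of such a template; every value (job name, account, partition, resources, email, and the program to run) is a placeholder to adjust for your own work:

```bash
#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --account=mylab
#SBATCH --partition=compute
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=10G
#SBATCH --time=2:30:00
#SBATCH --mail-type=END,FAIL
#SBATCH --mail-user=<net_id>@uw.edu

# Your commands go below, for example:
./my_program   # placeholder for your own program
```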
### Multiple Node Batch Jobs

If your batch job uses multiple nodes, your program should also know how to use all the nodes (e.g., your program is an MPI program).

The value given for `--nodes` should be less than or equal to the total number of nodes owned by your group, unless you are running in the `ckpt` partition.

The value given for `--ntasks-per-node` can be up to the number of CPUs your group has available, and CPUs exceeding the group's resources can be requested per job using the checkpoint partitions (`ckpt`, `ckpt-all`, or `ckpt-g2`). The `hyakalloc` command can be used to see the number of CPUs or GPUs that can be requested under your account(s). If you want to use an entire node, the number of CPUs or cores per node varies based on the hardware model, but some common examples are the `compute` partition, whose nodes have 40 cores, and the `cpu-g2` and `cpu-g2-mem2x` partitions, whose nodes have 192 cores. For example, the script header below would request 4 complete nodes from the `compute` partition.
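A sketch of the corresponding `#SBATCH` header (the job name, account, time limit, and MPI program are placeholders):

```bash
#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --account=mylab
#SBATCH --partition=compute
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=40
#SBATCH --time=4:00:00

# Launch one MPI rank per task across all 4 nodes (4 x 40 = 160 ranks).
srun ./my_mpi_program   # placeholder for your own MPI program
```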
## Common Slurm Error Messages

- `slurmstepd: error: Exceeded job memory limit`: your program uses more memory than you allotted during node creation and it has run out of memory. Get a node with more memory and try again.
- `(ReqNodeNotAvail, UnavailableNodes:n[<node numbers list>]`: your node will not expire (and might be running one of your jobs) before the next scheduled maintenance day. Either get a node with a shorter `--time` duration or wait until after the maintenance has been completed.
- `Unable to allocate resources: Invalid account or account/partition combination specified`: you used `-p <group_name> -A <group_name>` and you do not belong to that group.
## Utility Commands

With `<net_id>` as your UW NetID, `<group_name>` as your Hyak group partition name, and `<job_id>` as an individual job ID:

- `sinfo` is used to view information about `klone` nodes and partitions. Use `sinfo -p <group_name>` to view information about your group's partition or allocation. Use `sinfo -s` to see a list of all partitions.
- `squeue` is used to view information about jobs located in the scheduling queue. Use `squeue -p <group_name>` to view information about your group's nodes. Use `squeue -u <net_id>` to view your jobs.
- `scancel` is used to cancel jobs. Use `scancel <job_id>` to cancel a job with the given job ID, or use `scancel -u <net_id>` to cancel all of your jobs.
- `sstat` displays status information of a running job pertaining to CPU, Task, Node, Resident Set Size (RSS), and Virtual Memory (VM) statistics. Read the man page for a comprehensive list of format options.
- `sacct` displays information about completed jobs. Read the man page for a comprehensive list of format options.
- `sreport` generates reports about job usage and cluster utilization from Slurm accounting (`sacct`) data. For example, to get the historical usage of the group `<group_name>` in March 2020, use `sreport cluster UserUtilizationByAccount Start=2020-03-01 End=2020-03-31 Accounts=<group_name>`.
## Man Pages

All of these man pages can also be viewed on `klone` by running `man <command>`. Exit the `man` command with `q`.
## References

Gardner, Dylan, Robben Migacz, and Brian Haymore. "Arbiter: Dynamically Limiting Resource Consumption on Login Nodes." Proceedings of the Practice and Experience in Advanced Research Computing on Rise of the Machines (Learning). 2019. 1-7. [DOI: 10.1145/3332186.3333043] [Code: GitLab]