HPC::Runner::Command::submit_jobs::Utils::Scheduler::Directives
Command Line Options
#TODO Move this over to docs
module
Modules to load with the scheduler. Use the same names you would pass to 'module load'.
Example: 'R2' becomes 'module load R2'.
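On the command line this might look like (the module name is illustrative, and the option can typically be repeated for multiple modules):

--module R2

Each module is emitted as a 'module load' line in the generated submission script.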
conda_env
Anaconda environments to load. Environments are loaded with:
source activate /path/to/my/env
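On the command line (the path is illustrative):

--conda_env /path/to/my/env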
cpus_per_task
Slurm directive --cpus-per-task. Defaults to 1.
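For example, requesting 4 CPUs per task (an illustrative value):

--cpus_per_task 4

which is rendered in the submission script as:

#SBATCH --cpus-per-task=4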
ntasks
Slurm directive --ntasks. Defaults to 1.
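For example (an illustrative value):

--ntasks 2

rendered as:

#SBATCH --ntasks=2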
account
Slurm directive --account. The account the job is submitted under.
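For example, with a hypothetical account name:

--account my_lab

rendered as:

#SBATCH --account=my_lab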
ntasks_per_node
Slurm directive --ntasks-per-node. Defaults to 28.
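For example (an illustrative value):

--ntasks_per_node 16

rendered as:

#SBATCH --ntasks-per-node=16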
commands_per_node
Number of commands to run per node.
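For example, to batch 8 commands into each node's job (an illustrative value):

--commands_per_node 8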
nodes_count
Number of nodes to request for a job. This is generally only useful for MPI jobs.
PBS: #PBS -l nodes=nodes_count:ppn=16
Slurm: #SBATCH --nodes=nodes_count
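For example, requesting 4 nodes for an MPI job (an illustrative value):

--nodes_count 4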
partition
Specify the partition to submit to.
In PBS this is called the 'queue'.
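For example, with a hypothetical partition name:

--partition general

rendered under Slurm as:

#SBATCH --partition=general

and under PBS as:

#PBS -q general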
walltime
Define the scheduler walltime, the maximum time the job is allowed to run.
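For example, a 20-minute limit in HH:MM:SS form (an illustrative value; Slurm's flag for walltime is --time):

--walltime 00:20:00

rendered under Slurm as:

#SBATCH --time=00:20:00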
mem
Slurm directive --mem. Memory to allocate for the job.
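For example (an illustrative value; Slurm accepts K/M/G/T suffixes):

--mem 40G

rendered as:

#SBATCH --mem=40G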
user
The user running the script. Passed to Slurm for mail notifications.
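For example (a hypothetical username; whether and how this maps to Slurm's --mail-user directive depends on the template):

--user jdoe

A template may render it as:

#SBATCH --mail-user=jdoe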
procs
Total number of concurrently running tasks.
Analogous to GNU parallel's --jobs option.
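For example, limiting the run to 4 concurrently running tasks (an illustrative value):

--procs 4

This behaves like 'parallel --jobs 4'.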
template_file
The actual template file used to generate submission scripts.
One is generated for you, but you can always supply your own with --template_file /path/to/template.
The default template targets SLURM.
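As an illustration, a hand-rolled SLURM template is an ordinary batch script whose header carries the directives described above (values are shown filled in, since the placeholder syntax for job-specific values depends on the templating engine in use):

#!/bin/bash
#SBATCH --partition=general
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --time=00:20:00
#SBATCH --mem=40G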