Job submission via SLURM

Submitting jobs


Before submitting a job, the command launching the application has to be embedded in a script so that it can be correctly read by the queueing system, e.g.:

/home/users/user/submit_script.sh


Example:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --mem=4gb
#SBATCH --time=01:00:00

# We set up paths or load appropriate modules
module load plink/1.90

# Set the $TMPDIR variable
export TMPDIR=$HOME/grant_$SLURM_JOB_ACCOUNT/scratch/$USER/$SLURM_JOB_ID

# We set application variables
export SCR=${TMPDIR}

# We set the auxiliary variables
INPUT_DIR="input"
OUTPUT_DIR="output"
OUTPUT_FILE="OUTPUT"

# We create a temporary directory
mkdir -p ${TMPDIR}

# Copy the input data to the directory indicated by the $TMPDIR variable
cp ${SLURM_SUBMIT_DIR}/${INPUT_DIR}/* ${TMPDIR}

# We go to the $TMPDIR directory
cd $TMPDIR

# We run the calculations
plink --noweb --file hapmap1

# When the calculations are finished, we copy the contents of $TMPDIR
# to the output directory in the directory from which the job was submitted
mkdir $SLURM_SUBMIT_DIR/${OUTPUT_DIR}
cp -r $TMPDIR/* $SLURM_SUBMIT_DIR/${OUTPUT_DIR}/

# We clean up the working directory
rm -rf $TMPDIR

Example input:
The files that should be placed in the input directory are provided in the archive below.
File:Plink input.zip
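To prepare the data for this example, the archive can be unpacked into the input directory next to the submission script. A minimal sketch, assuming the archive was saved as Plink_input.zip and contains the standard PLINK tutorial files hapmap1.ped and hapmap1.map (adjust the names to the actual contents of the archive):

# Create the input directory expected by the script and unpack the data
cd /home/users/user
mkdir -p input
unzip Plink_input.zip -d input

# The plink command in the script expects files named hapmap1.*, e.g.:
# input/hapmap1.ped
# input/hapmap1.map
ls input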

The job can be submitted using the sbatch command:

sbatch /home/users/user/submit_script.sh
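On successful submission sbatch prints the ID of the newly created job. A short example session; since the script copies data from $SLURM_SUBMIT_DIR/input, it should be submitted from the directory that contains the input directory (the job ID below is only illustrative):

# Submit from the directory that holds the input directory
cd /home/users/user
sbatch submit_script.sh
# Expected response (the job ID is only an example):
# Submitted batch job 1234567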

Submitting interactive jobs

Interactive jobs can work in two modes:

  • text mode
  • graphic mode (with output redirection to X)

Interactive jobs in text mode


In this mode an interactive job can be submitted by executing the following command:


srun --pty /bin/bash


or

srun -u /bin/bash -i

The first command allocates a pseudo-terminal, which makes working on a remote console easier. In case of any problems, please use the second command.
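Resource requirements can be passed to srun in the same way as the #SBATCH options in a batch script. A minimal sketch, with illustrative values that should be adjusted to your needs:

# Request 4 cores on one node of the standard partition for one hour
# and start an interactive bash session on the allocated node
srun -p standard -N 1 -n 4 --time=01:00:00 --pty /bin/bash

# Leave the node and release the allocation when you are done
exit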

Interactive jobs in graphic mode


From the user's point of view it is sufficient to log in to the cluster with the -X option:

ssh -X eagle.man.poznan.pl

NOTE: On Windows you must have an X server installed (e.g. Xming) and X11 forwarding enabled in PuTTY.

After logging in to the machine with X forwarding, start an interactive job:

srun --x11 -n28 --pty /bin/bash 

Then run the sample program (it displays the queue state on the eagle machine):

sview & 
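To verify that graphical output is forwarded correctly inside the interactive job, the DISPLAY variable can be checked first; any other X application is started in the same way as sview (xterm below is only an illustrative example and assumes it is installed on the node):

# Should print a non-empty value, e.g. localhost:10.0
echo $DISPLAY

# Start any other graphical application in the background
xterm &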

Job submission using GPU cards

To submit a job to nodes equipped with GPU cards, it is required to use the tesla partition and add the following line to the submission script:

#SBATCH --gres=gpu:<number of GPU cards per node>

An example job using 2 cards should contain the following section:

#SBATCH --gres=gpu:2
#SBATCH --partition=tesla

A number of applications can use GPUs, either through built-in functionality or via a dedicated module, usually containing "CUDA" in its name, e.g.:

namd/2.10-ibverbs-smp-cuda    <- GPU supported version
namd/2.10-multicore(default)
namd/2.10-multicore-cuda      <- GPU supported version
namd/2.10-multicore-mic
namd/2.12-ibverbs
namd/2.12-ibverbs-smp
namd/2.12-ibverbs-smp-CUDA    <- GPU supported version
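Putting these elements together, a minimal GPU submission script could look like the sketch below; the module version, resource values and the nvidia-smi diagnostic are only illustrative and should be replaced by the actual requirements and command of your application:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --mem=8gb
#SBATCH --time=01:00:00
#SBATCH --partition=tesla
#SBATCH --gres=gpu:2

# Load a GPU-enabled module, e.g. one of the CUDA builds listed above
module load namd/2.12-ibverbs-smp-CUDA

# Show the GPU cards assigned to the job (diagnostic only)
nvidia-smi

# Run the GPU-enabled application here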

Checking status of the queue


To check what jobs have been submitted by a given user, please execute the command:

squeue -u username

To view a list of jobs in a given partition:

squeue -p standard

To view information about a specific job:

scontrol show job job_id
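A few other standard SLURM commands may also help when checking jobs; the examples below are only a sketch (sacct is available only if job accounting is enabled on the cluster):

# Show the estimated start time of a pending job
squeue --start -j job_id

# List only your currently running jobs
squeue -u username -t RUNNING

# After the job has finished, show its basic accounting record
sacct -j job_id --format=JobID,JobName,State,Elapsed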

Removing jobs


To remove a job, please use the scancel command with the job ID as a parameter. Both waiting and running jobs can be removed.

scancel job_id
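scancel also accepts filters, so several jobs can be removed with one command; a few examples (the job name below is only illustrative):

# Remove all jobs of a given user
scancel -u username

# Remove only the pending jobs of a given user
scancel -u username -t PENDING

# Remove jobs by name
scancel --name my_job_name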