Star HPC Cluster
SLURM Job Script Generator

This is the name by which you can find your job in the queue when you run the 'squeue' command.

Set the file path and name for saving standard output. You can use the job ID (%j) in the name for automatic naming.

Set the file path and name for saving the job's error messages. It can also use the job ID (%j) for automatic naming.
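The job name and the output/error paths correspond to #SBATCH directives in the generated script; a minimal sketch (the job and file names are illustrative):

```shell
#!/bin/bash
# Name shown by squeue, plus per-job log files.
#SBATCH --job-name=my_analysis        # illustrative name
#SBATCH --output=my_analysis_%j.out   # %j expands to the job ID
#SBATCH --error=my_analysis_%j.err
```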

Your job will be forcibly terminated once it reaches this time limit. The timer starts when the job leaves the queue and begins executing.
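The time limit becomes a single #SBATCH directive; a sketch assuming a 2-hour limit:

```shell
#SBATCH --time=02:00:00   # wall-clock limit in HH:MM:SS; the job is killed when it expires
```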

Choose the compute node(s) for your job. Options are 'cn01' with two A30 GPUs, and 'gpu1' and 'gpu2' with eight A100 GPUs each.

Allocate CPU cores for computation. Each node has 64 cores.

Set the memory allocation for your job. The 'cn01' node has 256GB of memory; 'gpu1' and 'gpu2' have []GB of memory each.
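The node, core, and memory selections above translate to directives like the following sketch (the values are illustrative, and the GPU request assumes the job needs one of the GPUs described above):

```shell
#SBATCH --nodes=1
#SBATCH --nodelist=cn01      # request the A30 node specifically
#SBATCH --gres=gpu:1         # one of the node's GPUs, if needed
#SBATCH --cpus-per-task=8    # 8 of the node's 64 cores
#SBATCH --mem=32G            # 32GB of cn01's 256GB
```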

Test jobs are given higher priority but are subject to constraints such as a shorter time limit.

Preemptable jobs are given higher execution priority but can be preempted (stopped) at any time by higher-priority jobs if additional resources are required. Preemptable jobs are useful for workloads that support checkpointing, so they can handle premature termination and restart from a checkpoint.
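Test and preemptable runs are typically selected through a SLURM partition; a sketch assuming partitions named 'test' and 'preempt' (the actual partition names on this cluster may differ):

```shell
#SBATCH --partition=test      # hypothetical test partition: higher priority, shorter limits
# For preemptable work, a partition such as the one below might be used instead:
# #SBATCH --partition=preempt # hypothetical preemptable partition
#SBATCH --requeue             # put the job back in the queue if it is preempted
```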

Define how many tasks to run per CPU core. This is useful for multithreaded applications.

Set the number of tasks to run on each node. This depends on the CPUs and memory allocated to the job.

Allocate CPUs per task for parallel processing. Each node supports up to 64 concurrent tasks.
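The three task-layout fields above map to these directives; the values below are an illustrative layout that fills one 64-core node:

```shell
#SBATCH --ntasks-per-core=1   # one task per physical core
#SBATCH --ntasks-per-node=4   # four tasks on each node
#SBATCH --cpus-per-task=16    # 16 CPUs per task (4 tasks x 16 CPUs = 64 cores)
```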

Load any necessary modules for the job. Modules add specific software or environment settings.

Enter the command-line calls to execute your job, such as running Python scripts or applications.
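Putting the pieces together, a generated script might look like the sketch below (the module versions, script name, and resource values are illustrative assumptions, not actual software on this cluster):

```shell
#!/bin/bash
#SBATCH --job-name=train_model
#SBATCH --output=train_%j.out
#SBATCH --error=train_%j.err
#SBATCH --time=04:00:00
#SBATCH --nodelist=gpu1       # one of the A100 nodes
#SBATCH --gres=gpu:1
#SBATCH --cpus-per-task=16
#SBATCH --mem=64G

# Load required software (module names are hypothetical).
module load python/3.11
module load cuda/12.2

# Command(s) to run.
python train.py --epochs 10
```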