Partitions / queues
In the Slurm queuing system, partitions (the functional equivalent of queues in other systems) are used mainly to manage access priorities to resources and to simplify project accounting; physically, nodes can be shared between different partitions. At the moment the most important partitions on the Eagle system are: fast, standard, bigmem and tesla.
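The current set of partitions, together with their time limits and node counts, can be checked directly with Slurm's standard sinfo command, for example:

    sinfo -o "%P %l %D"    # partition name, time limit, number of nodes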
Partition standard:
- includes all Eagle cluster nodes
- maximum job run time is 7 days (168 hours)
- this is the partition that should be used in virtually all cases, except for test jobs and jobs that require more than 128 GB of memory per node
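As an illustration, a minimal batch script for the standard partition might look as follows; the application name my_app and the concrete resource values are placeholders to adapt to your job:

    #!/bin/bash
    #SBATCH --partition=standard
    #SBATCH --time=168:00:00      # up to the 7-day limit of this partition
    #SBATCH --nodes=1
    #SBATCH --mem=120G            # stays below the 128 GB-per-node threshold
    srun ./my_app                 # placeholder for your application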
Partition fast:
- includes essentially all physical nodes of the Eagle cluster
- maximum job run time is 1 hour
- has higher priority than the standard partition
- this partition is designed for software testing, hence the high priority and the relatively short time limit
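For a quick test run, the partition and a short time limit can also be set directly on the sbatch command line instead of in the script itself (test_job.sh is a placeholder script name):

    sbatch --partition=fast --time=00:30:00 test_job.sh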
Partition bigmem:
- includes a small subset of nodes: only those equipped with 256 GB of memory. These nodes account for 5% of the total number of nodes on Eagle.
- maximum job run time is, as in the standard partition, 7 days (168 hours)
- has higher priority than the standard partition
- this partition should be used only for jobs that require more than 128 GB of memory per node; such jobs get preferential access to the large-memory nodes. If your jobs do not need such nodes, there is no point in submitting them to this partition: despite the increased priority, and due to the small number of nodes in the partition, they will most likely start later than they would in the 'standard' partition.
Of course, it may happen that jobs submitted to the "standard" partition run on nodes with 256 GB of memory. This can happen when all other physical servers (with 64/128 GB of RAM) are in use. However, once these nodes are freed, jobs submitted to the "bigmem" partition will have priority access to them.
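A sketch of a bigmem submission, assuming a single-node job that genuinely needs more than 128 GB (memory_heavy_app is a placeholder):

    #!/bin/bash
    #SBATCH --partition=bigmem
    #SBATCH --time=7-00:00:00     # same 7-day limit as the standard partition
    #SBATCH --nodes=1
    #SBATCH --mem=200G            # over 128 GB, so bigmem is the right choice
    srun ./memory_heavy_app       # placeholder for your application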
Partition tesla:
This partition contains nodes equipped with GPU cards. Each node has 2 Nvidia V100 cards with 32 GB of memory. For information on how to submit jobs to GPU cards, see this page.
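The page linked above describes GPU submission in detail; as a rough sketch, a job requesting both V100 cards of a tesla node could look like this (gpu_app is a placeholder):

    #!/bin/bash
    #SBATCH --partition=tesla
    #SBATCH --nodes=1
    #SBATCH --gres=gpu:2          # request both V100 cards on the node
    srun ./gpu_app                # placeholder for your GPU-enabled application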
There are also several other partitions on the Eagle system; although it is possible to submit jobs to them, please do not use these partitions. They do not give access to additional resources; we use them for accounting for projects such as Pl-Grid, for hardware and software testing, or for training purposes.