Where and how to use slk

file version: 08 July 2021

slk might use up to 4 GB of memory and considerable CPU time. Therefore, large slk archive and slk retrieve jobs should be performed on nodes with a sufficient amount of memory. Using shared nodes might slow down slk considerably. If several slk archive or slk retrieve commands are run on one node by one user, they might use up the whole memory available to that user, leading to random kills of the running slk processes.

The table below provides rough suggestions on how many slk commands should be run in parallel on one node (see the sketch after the table for how to launch several slk commands in parallel). Running more slk commands in parallel might lead to instabilities. If you experience that the limits are stricter than those listed here, please contact us.

Reasonable number of parallel slk archive and slk retrieve commands on different node types:

    node/partition            #slk per node and user   comment
    ------------------------  -----------------------  ---------------------------------------
    mlogin                    1                        archival/retrieval of small files only
    mistralpp                 1                        slow
    shared                    see comment              1 slk per job and user
    compute                   10
    compute, large memory     20
    compute, huge memory      40
    compute2                  10
    compute2, large memory    20
    compute2, huge memory     40
    prepost                   see comment              1 slk per job and user
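If you start several slk commands in parallel within one job, please stay within the limits above and check each exit code. The following is a minimal sketch in bash; the archive paths and the target directory are placeholders, not real paths:

    # start several slk retrieve commands in parallel (stay within the
    # per-node limits listed in the table above)
    for f in file1.nc file2.nc file3.nc; do
        slk retrieve "/arch/<project>/$f" "/scratch/<user>/" &
    done

    # wait for each background slk and collect a common exit status
    fail=0
    for pid in $(jobs -p); do
        wait "$pid" || fail=1
    done
    exit "$fail"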

How to use slk?

You can use slk non-interactively in batch jobs (e.g. on compute) or interactively (e.g. on mistralpp or via salloc on a compute node).
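For interactive use, you can first request an allocation via SLURM's salloc. The sketch below assumes the compute partition; the project account, memory value, time limit and file paths are placeholders to be adjusted to your needs:

    # request an interactive allocation on a compute node (adjust partition,
    # account, memory and time limit to your needs)
    salloc --partition=compute --account=<project> --mem=8G --time=02:00:00

    # once the allocation is granted, run slk on the allocated node, e.g.
    slk retrieve /arch/<project>/<file> /scratch/<user>/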

If you run slk in batch jobs, some slk commands (e.g. slk archive and slk retrieve) will not print any textual output. Therefore, please evaluate the exit codes of all slk commands used in a script in order to find out whether they terminated successfully or failed. You get the exit code of the previous shell command via $?.
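A minimal sketch of such a check in a batch script; the source file and the target namespace are placeholders, not real paths:

    # archive a file; slk archive prints no textual output in batch jobs
    slk archive /work/<project>/data.nc /arch/<project>/

    # evaluate the exit code of the previous shell command via $?
    if [ $? -ne 0 ]; then
        echo "slk archive failed" >&2
        exit 1
    fi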

If you want to run slk on large or huge memory compute nodes, please set --mem=... according to your needs.
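As a sketch, a batch script header requesting a specific amount of memory; the partition name, project account, memory value and paths below are placeholders that you should adjust to your needs:

    #!/bin/bash
    #SBATCH --partition=compute      # placeholder: choose a large/huge memory node type
    #SBATCH --account=<project>      # placeholder: your project account
    #SBATCH --mem=16G                # set --mem=... according to your needs
    #SBATCH --time=08:00:00

    # retrieve data and fail the job if slk retrieve fails
    slk retrieve /arch/<project>/data.nc /scratch/<user>/ || exit 1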