AMD CPU nodes
We currently host a set of newer AMD CPU nodes for benchmarking purposes.
| CPU | Specification | Node | Partition |
|---|---|---|---|
| Genoa | 96 cores, 768 GB | mad08 | cosma8-shm3 |
| Genoa | 96 cores, 768 GB | mad09 | cosma8-shm3 |
| Bergamo | 128 cores, 2 sockets, 1.5 TB | mad10 | cosma8-shm3 |
It is not possible to SSH directly into these machines; access has to be pre-booked through SLURM. Request a time allocation using the salloc command:
```
salloc -p cosma8-shm3 -w <NODE_ID> -A <ACCOUNT_GROUP> -t 01:00:00
```
Once the allocation is granted, open a bash session with the srun command:
```
srun -p cosma8-shm3 -A <ACCOUNT_GROUP> --pty /bin/bash
```
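Putting the two steps together, booking the mad08 node for one hour might look like the following sketch (the account group `dp004` is a placeholder; substitute your own project account):

```shell
# Reserve node mad08 in the cosma8-shm3 partition for one hour.
# "dp004" is a placeholder account group; use your own.
salloc -p cosma8-shm3 -w mad08 -A dp004 -t 01:00:00

# Once the allocation is granted, start an interactive bash session
# on the booked node:
srun -p cosma8-shm3 -A dp004 --pty /bin/bash
```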
The nodes are available within the cosma8-shm3 partition and have to be selected specifically within your SLURM script:
```
#SBATCH -p cosma8-shm3
#SBATCH -w mad08
```
Alternatively, you can use the `--nodelist` (`-w`) or `--exclude` (`-x`) options to pick the exact node.
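A minimal batch script targeting one of the test nodes could look like the sketch below; the account group `dp004`, the job name, the run time, and the executable name are all placeholders:

```shell
#!/bin/bash
#SBATCH -p cosma8-shm3    # partition containing the AMD test nodes
#SBATCH -w mad10          # request the Bergamo node explicitly
#SBATCH -A dp004          # placeholder account group; use your own
#SBATCH -t 00:30:00       # placeholder run time
#SBATCH -J amd-benchmark  # placeholder job name

./my_benchmark            # placeholder executable
```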
Several software environments are available and should work on these nodes. In principle, any code compiled on the Cosma8 login node should run directly on the testbeds. However, you may get better performance by logging into the nodes via SSH (book them with salloc first) and compiling your code there.
The Intel toolchain on the AMD nodes works, but the compiler has to be told about the architecture explicitly:
```
-O3 -fomit-frame-pointer -fstrict-aliasing -ffast-math -funroll-loops -axCOMMON-AVX512 -march=x86-64-v4 -mavx512vbmi
```
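As an illustration, compiling a single source file with these flags might look like the following; the `icc` invocation and the file names are assumptions, not part of the facility's documented setup:

```shell
# Compile benchmark.c with the Intel compiler, explicitly targeting
# the AVX-512 feature set of the AMD nodes (the "icc" command and
# the file names here are illustrative placeholders):
icc -O3 -fomit-frame-pointer -fstrict-aliasing -ffast-math \
    -funroll-loops -axCOMMON-AVX512 -march=x86-64-v4 -mavx512vbmi \
    -o benchmark benchmark.c
```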
Funding and acknowledgements
The AMD test nodes were installed in collaboration with, and as an addendum to, the DiRAC@Durham facility, managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). DiRAC equipment was funded by BEIS capital funding via STFC capital grants ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure.