The HPC is currently used by researchers in computational chemistry, DNA analysis, bioinformatics, and computational fluid dynamics (CFD).
Users have access to two types of disk area:
- Their home directory, where they can keep persistent data such as job definitions, source files for jobs, and scripts.
- A shared project area for their research group (a typical layout is sketched after this list). This space is for work in progress and results storage, and can also be used for analysis carried out on the HPC system to speed up iterative research workflows.
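As a rough illustration, a common pattern is to keep job scripts and source files in the home directory and write results to the shared project area. The paths in this sketch (`/projects/my-group` and the home directory location) are placeholders, not the actual mount points on this system, which are confirmed during onboarding.

```python
from pathlib import Path

# Hypothetical locations -- the real mount points are provided at onboarding.
home = Path.home()                    # home directory: job definitions, source files, scripts
project = Path("/projects/my-group")  # shared project area: work in progress and results

def prepare_run(run_name: str) -> Path:
    """Create a results directory for one run in the shared project area."""
    results_dir = project / "results" / run_name
    results_dir.mkdir(parents=True, exist_ok=True)
    return results_dir

if __name__ == "__main__":
    out = prepare_run("cfd-test-001")
    print(f"Job scripts live under {home}; results will be written to {out}")
```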
The HPC comprises six node types: three from the newly installed cluster and three legacy types. The new nodes are all connected via InfiniBand at 200Gb/s; each has an Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz with 64 CPU cores and one of the following memory and additional hardware configurations:
- Standard compute: 256GB Memory.
- High memory compute: 1TB Memory.
- GPU: 512GB Memory and 2x NVIDIA A100 with 40GB HBM2e memory each.
The legacy nodes are all connected via Omni-Path at 100Gb/s. Each legacy node features an Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz with 20 CPU cores and one of the following memory and additional hardware configurations (these can also be checked from a login node, as sketched after this list):
- Legacy compute: 128GB Memory.
- Legacy high memory compute: 512GB Memory.
- Legacy GPU: 128GB Memory and 2x NVIDIA Tesla K80 per node, each with 24GB GDDR5 ECC memory.
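A simple way to confirm these configurations is to query Slurm's node inventory from a login node. The sketch below shells out to the standard `sinfo` command; the chosen output columns (node, partition, CPUs, memory in MB, generic resources such as GPUs) are one reasonable selection, and the exact values reported depend on the site's Slurm configuration.

```python
import subprocess

# Ask Slurm for a per-node summary: node name, partition, CPU count,
# memory (MB) and any generic resources (e.g. GPUs).
result = subprocess.run(
    ["sinfo", "--Node", "--format", "%N %P %c %m %G"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```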
Our HPC uses the SLURM workload manager and has several queues, each tailored to a different purpose (an example job submission follows the list):
- cpu-standard - 12 Standard compute nodes.
- vhmem - 2 High memory compute nodes.
- gpu - 1 GPU Compute node.
- legacycompute - 40 Legacy compute nodes.
- legacyhmem - 2 Legacy high memory nodes.
- legacygpu - 5 Legacy GPU nodes.
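As an illustration, the sketch below writes a minimal batch script targeting one of the queues above and submits it with `sbatch`. The queue names come from the list above; the resource values, time limit, and the command run inside the job are placeholders to be adjusted for a real workload.

```python
import subprocess
from pathlib import Path

# A minimal batch script targeting the standard CPU queue. Change the
# partition (e.g. gpu, vhmem, legacycompute) and resources to suit the job.
job_script = """#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=cpu-standard
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00
#SBATCH --output=example_%j.out

echo "Running on $(hostname)"
"""

script_path = Path("example_job.sh")
script_path.write_text(job_script)

# Submit the script; sbatch prints the new job ID on success.
subprocess.run(["sbatch", str(script_path)], check=True)
```

For the gpu and legacygpu queues, GPUs are typically requested with an additional `--gres=gpu:<count>` directive, although the exact generic resource names are site-specific and will be confirmed during onboarding.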
During the onboarding process, you will be provided with applications that enable:
- Terminal access via SSH.
- File transfer via SCP (a usage sketch follows this list).
- VirtualGL-accelerated remote visualisation and analysis.
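For example, once SSH and SCP access are set up, files can be copied to the cluster and commands run on a login node from a local machine. The username, the hostname `hpc.example.ac.uk`, and the remote path below are placeholders; the real login address and account details are issued during onboarding.

```python
import subprocess

# Placeholder connection details -- replace with the address and username
# issued during onboarding.
USER = "jbloggs"
HOST = "hpc.example.ac.uk"

def upload(local_path: str, remote_path: str) -> None:
    """Copy a local file to the cluster over SCP."""
    subprocess.run(["scp", local_path, f"{USER}@{HOST}:{remote_path}"], check=True)

def run_remote(command: str) -> str:
    """Run a command on a login node over SSH and return its output."""
    result = subprocess.run(
        ["ssh", f"{USER}@{HOST}", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    upload("input_data.tar.gz", "~/jobs/")
    print(run_remote(f"squeue -u {USER}"))
```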