HUA High Performance GPU Cluster
HUA DIT HPC Cluster
This one was a nerve-racking, nail-biting, Star Wars-inspired and zero-stress (jk) wonder of a project. Along with Christos Diou and Angelos Charalambidis, we set up and configured an on-premises High Performance GPU Cluster for HUA.
HUA’s HPC cluster uses the Slurm resource management and job scheduling system to manage the departmental GPU cluster: a pool of NVIDIA GPUs serving CUDA-optimized deep/machine learning frameworks such as PyTorch and TensorFlow.
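To give a flavor of how such a setup is used in practice, here is a minimal sketch of a PyTorch script a user might submit to Slurm (e.g. via `sbatch` with a generic GPU request such as `--gres=gpu:1`). The script and the Slurm flags are illustrative assumptions, not the cluster's actual configuration; it simply checks that a CUDA device is visible on the allocated node and runs a tiny computation on it.

```python
import torch


def main() -> None:
    # Report whether PyTorch can see a CUDA device on the allocated node.
    if torch.cuda.is_available():
        device = torch.device("cuda")
        print(f"CUDA device: {torch.cuda.get_device_name(device)}")

        # Tiny matrix multiplication on the GPU as a sanity check.
        x = torch.randn(1024, 1024, device=device)
        y = x @ x
        print(f"Result tensor lives on: {y.device}")
    else:
        # Typically means the job did not request a GPU from Slurm.
        print("No CUDA device visible; check the job's GPU allocation.")


if __name__ == "__main__":
    main()
```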
If you are affiliated with HUA, check out the HPC Cluster Guide.