HPC RESOURCES


The priority of the NIIF Institute is to keep its systems built on the most advanced technologies available, in order to serve the widest possible range of the scientific community. This can only be achieved by joining projects that aim to integrate or further develop our systems.

The widest community can be reached with a heterogeneous hardware portfolio. NIIF therefore operates a variety of HPC technologies, often acting as the first adopter of the newest ones.

This page provides an overview of the HPC systems in operation, along with a brief description of each.

Please visit the HPC wiki for a more detailed description of the equipment and technologies used.

A summary table of all systems is given first, followed by a description of each machine.

HPC                                Type            CPU cores  Memory/node  GPU / Phi accelerators              Linpack (Rmax)  Compute nodes
Budapest cluster                   HP CP4000SL     768        66 GB        -                                   5 Tflops        32
Budapest 2 - Phi cluster           HP SL250s       280        63 GB        28x Intel Xeon Phi                  27 Tflops       14
Szeged cluster with GPU            HP CP4000BL     2400       132 GB       12x Nvidia M2070                    20 Tflops       50
Debrecen cluster                   SGI ICE8400EX   1536       47 GB        -                                   18 Tflops       128
Debrecen 2 (Leo) - GPU cluster     HP SL250s       1344       125 GB       204x Nvidia K20x + 48x Nvidia K40x  254 Tflops      84
Debrecen 3 (Apollo) - Phi cluster  HP Apollo 8000  1056       125 GB       88x Intel Xeon Phi                  106 Tflops      44
Pécs - UV machine                  SGI UV 1000     1152       6 TB         -                                   10 Tflops       1
Miskolc - UV machine               SGI UV 2000     352        1.4 TB       -                                   8 Tflops        1

Debrecen 2 (Leo) - GPU cluster


The HPC is named after the Hungarian physicist Leó Szilárd. It is the most powerful supercomputer in the country and appears on the TOP500 list of the world's fastest HPC systems. The machine is located in the city of Debrecen.

The cluster has 1344 Sandy Bridge CPU cores, accelerated by 252 Nvidia K20x and K40x GPGPUs, which add a further 3576 hardware cores.

The machine has more than 10 TB of RAM.
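As an illustration of how a job might inspect the GPUs of the node it landed on, here is a minimal C sketch using the standard CUDA runtime API (this assumes the CUDA toolkit is available on the compute nodes and the file is compiled with nvcc; it is not taken from the system's own documentation):

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void) {
        int count = 0;

        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            fprintf(stderr, "no CUDA-capable GPU visible on this node\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);   /* name, memory, etc. */
            printf("GPU %d: %s, %zu MB global memory\n",
                   i, prop.name, prop.totalGlobalMem / (1024 * 1024));
        }
        return 0;
    }

On a Leo node this would list the K20x or K40x cards assigned to the job.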


Budapest cluster


The machine has 768 Opteron CPU cores and more than 2 TB of RAM.


Budapest 2 - Phi cluster


The cluster has 280 Sandy Bridge CPU cores, accelerated by 28 Xeon Phi coprocessors, which add 1680 more cores (28 × 60) available for computations.

The machine has more than 2 TB of RAM.
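The extra Phi cores on a Knights Corner machine like this are typically reached through the Intel compiler's offload model. The following C sketch is only an assumed minimal example (icc with -qopenmp; the coprocessor index mic:0 is illustrative):

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double a[N], b[N];
        double sum = 0.0;

        for (int i = 0; i < N; ++i) {
            a[i] = i;
            b[i] = 2.0 * i;
        }

        /* Ship the arrays to the first coprocessor and run the loop on
           its ~60 cores; the Intel runtime falls back to the host if no
           MIC is present. */
        #pragma offload target(mic:0) in(a, b) inout(sum)
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; ++i)
            sum += a[i] * b[i];

        printf("dot product = %f\n", sum);
        return 0;
    }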


Debrecen cluster


The cluster has 1536 Intel Westmere CPU cores and more than 6 TB of RAM.


Debrecen 3 (Apollo) - Phi cluster


This HPC is an Apollo 8000 machine from HP, which represents the most advanced technology the vendor could provide to date, with a novel cooling solution that achieves a very high density of components.

The cluster has 1056 Sandy Bridge CPU cores, accelerated by 88 Xeon Phi coprocessors, which add 5368 more cores (88 × 61) available for computations.

The machine has nearly 6 TB of RAM.



Miskolc - UV machine


This is an SGI UV 2000 machine, which provides SMP-like shared memory through a ccNUMA architecture. The difference between the UVs and the clusters is that on a UV a single job can use all CPUs and the full memory capacity for the same computation, under one operating system instance.

The machine has 352 Sandy Bridge CPU cores and 1.4 TB of RAM (which can be fully utilized by a single job, if needed).
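Because every core and every byte of RAM sits in one ccNUMA address space under a single Linux instance, even a plain OpenMP program can span the whole machine. A minimal sketch in C (assuming gcc -fopenmp; the array size is chosen only for illustration):

    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* One process, one address space: on a UV this array could grow
           toward the full 1.4 TB, which no single cluster node could hold. */
        size_t n = 1UL << 30;                 /* 2^30 doubles, about 8 GB */
        double *data = malloc(n * sizeof *data);
        if (data == NULL)
            return 1;

        #pragma omp parallel for              /* threads across all cores */
        for (size_t i = 0; i < n; ++i)
            data[i] = 2.0 * (double)i;

        printf("ran with up to %d threads\n", omp_get_max_threads());
        free(data);
        return 0;
    }

On a cluster the same memory footprint would have to be split across nodes with MPI; on the UV the single allocation simply works.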


Szeged cluster with GPU


The machine has 2400 Opteron CPU cores, with two nodes accelerated by a total of 12 Nvidia M2070 GPGPUs.

The machine has 6 TB of RAM.


Pécs - UV machine


This is an SGI UV 1000 machine which, like the Miskolc UV, provides SMP-like shared memory through a ccNUMA architecture, so a single job can use all CPUs and the full memory capacity for the same computation, under one operating system instance.

The machine has 1152 Nehalem CPU cores and 6 TB of RAM (which can be fully utilized by a single job, if needed).