
HPC Clusters - High-Performance Computing Clusters for High Demands

High-Performance Computing cluster systems, or HPC clusters for short, are typically found in technical and scientific environments such as universities, research institutions, and corporate research departments. The tasks performed there, such as weather forecasts or financial projections, are often broken down into smaller subtasks in the HPC cluster and then distributed to the cluster nodes.

The cluster's internal network connection is important because the cluster nodes exchange many small pieces of information, which have to be transported from one cluster node to another as quickly as possible. This means the network latency must be kept to a minimum. Another important aspect is that the required and generated data can be read and written by all cluster nodes of an HPC cluster at the same time.
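
To illustrate how such a workload is split across the cluster nodes, here is a minimal sketch that distributes a partial-sum computation over MPI ranks. It assumes an MPI stack and the mpi4py package are available on the cluster; the problem size and launch command are placeholders.

    # Minimal sketch: split one large computation into subtasks across cluster nodes.
    # Assumes an MPI stack and mpi4py are installed; launch e.g. with
    #   mpirun -np 24 python partial_sums.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()          # index of this process in the cluster job
    size = comm.Get_size()          # total number of processes

    N = 10_000_000                  # hypothetical overall problem size
    chunk = N // size
    start = rank * chunk
    stop = N if rank == size - 1 else start + chunk

    # Each rank computes its own slice independently ...
    local = np.arange(start, stop, dtype=np.float64)
    partial = float(np.sqrt(local).sum())

    # ... and only the small partial results cross the low-latency network.
    total = comm.reduce(partial, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"result collected from {size} ranks: {total:.3f}")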

Here you'll find HPC - High Performance Computing

long delivery time
SBS-820H-420P | Supermicro Single Xeon 20 Node Blade Server

special highlight

8U Superblade Server System

Up to 20 high-performance DP blades, ready for PCI-E Gen 4

  • 8U Rackmount Server, up to 220W TDP
  • Dual Socket P+, Intel Xeon Scalable Processors 3rd Gen
  • Up to 4TB RAM 3DS ECC DDR4-3200MHz, 16x DIMM slots
  • 2x 2.5 hot-plug NVMe/SATA3 and 1x 2.5 hot-plug SATA3 bays
  • 1x 200G Mellanox IB HDR and 2x 25G Marvell onboard ethernet
  • Supports liquid cooling with up to 270W TDP
  • 8x 2200W Redundant Titanium Level Power Supplies
From €73,789.00 *
long delivery time
Details
Happyware Highlight
long delivery time
Supermicro AS-2124GQ-NART-LCC | 2U Dual AMD EPYC GPU Server

special highlight

Liquid Cooling GPU Server

  • 2U Rackmount Server, up to 280W TDP
  • Dual Socket SP3, AMD EPYC 7003 Series Processors
  • 32x DIMM slots, up to 8TB RAM DDR4-3200MHz
  • 4x NVIDIA GPU cards
  • 4x 2.5 hot-swap SATA/NVMe/SAS drive bays
  • 4x PCI-E 4.0 x16 LP slots
  • 2x 2200W redundant power supplies (Platinum Level)
on request
Details
Happyware Highlight
long delivery time
Supermicro SYS-220HE-FTNR-US | Dual Xeon Hyper-E 2U Rack Server

special highlight

Up to 4 GPUs in Edge Server

32 DIMMs, up to 8TB RAM

  • 2U Rackmount Server, up to 270W TDP
  • Dual Socket P+, 3rd Gen Intel Xeon Scalable Processors
  • 32x DIMM slots, up to 12TB RAM DDR4-3200MHz
  • 6x 2.5 hot-swap NVMe/SATA drive bays
  • 4x PCI-E 4.0 x16 slots with GPU/Accelerator support
  • 6x heavy duty hot-swap fans
  • 2000W redundant AC Power Supplies with PMBus
on request
Details
Happyware Highlight
long delivery time
Supermicro SYS-420GP-TNAR+-US | Dual Xeon 4U GPU Server

special highlight

8x HGX A100 GPU SXM4

NVIDIA® NVLink™ with NVSwitch™

 
  • 4U Rackmount Server, up to 270W TDP
  • Dual Socket P+, 3rd Gen Intel Xeon Scalable Processors
  • 32x DIMM slots, up to 12TB RAM DDR4-3200MHz
  • 8x NVIDIA HGX A100 GPU and 6x NVIDIA NVSwitch
  • 6x 2.5 Inch hot-swap NVMe/SATA/SAS drive bays
  • 10x PCI-E 4.0 x16 LP slots
  • 4x 3000W redundant Power supplies (Titanium Level)
on request
Details
Happyware Highlight
long delivery time
Gigabyte G593-ZX1 | Dual AMD EPYC 5U Mainstream HPC/AI Server

special highlight

Supports AMD Instinct™ MI300X Accelerators

 
  • 5U Rackmount Server, up to 300W cTDP
  • Dual Socket SP5, AMD EPYC 9004 Series processors
  • 24x DIMM slots, up to 6TB RAM DDR5-4800MHz
  • 8x 2.5 hot-swap drive bays
  • 12x PCI-E 5.0 Expansion slots & 2x M.2
  • Supports 8x AMD Instinct™ MI300X OAM GPUs, 2x 10G LAN
  • 6x 3000W redundant power supplies (Titanium Level)
coming soon
Details
Happyware Highlight
long delivery time
Gigabyte G593-ZX2 | Dual AMD EPYC 5U Mainstream HPC/AI Server

special highlight

Supports AMD Instinct™ MI300X Accelerators

  • 5U Rackmount Server, up to 300W cTDP
  • Dual Socket SP5, AMD EPYC 9004 Series processors
  • 24x DIMM slots, up to 6TB RAM DDR5-4800MHz
  • 8x 2.5 hot-swap drive bays
  • 12x PCI-E 5.0 Expansion slots & 2x M.2
  • Supports 8x AMD Instinct™ MI300X OAM GPUs, 2x 10G LAN
  • 6x 3000W redundant power supplies (Titanium Level)
coming soon
Details
Happyware Highlight
long delivery time
Gigabyte G383-R80 | AMD Instinct MI300A APU 3U HPC/AI Server

special highlight

Integrated APU, CPU + GPU + Memory

  • 3U Rackmount Server, up to 550W TDP
  • Socket SH5, 4x AMD Instinct™ MI300A APUs
  • 128GB HBM3 unified memory per APU
  • 8x 2.5 hot-swap drive bays
  • 12x PCI-E 5.0 Expansion slots & 1x M.2
  • 4x GPUs, 2x 10G LAN
  • 4x 2200W redundant power supplies (Titanium Level)
coming soon
Details

Do you need help?

Simply call us or use our inquiry form.

Structure of a High-Performance Computing Cluster

  • The cluster server, called the master or frontend, manages access and provides the programmes and the home data area.
  • The cluster nodes do the computing.
  • A TCP network is used to exchange information in the HPC cluster.
  • A high-performance network is required to enable data transmissions with very low latency.
  • The High Performance Storage (Parallel File System) enables the simultaneous write access of all cluster nodes.
  • The BMC interface (IPMI interface) is the administrator's access point for managing the hardware (see the sketch below).
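
As a hedged illustration of that BMC/IPMI access point, the sketch below polls the power state of every node's BMC from the master. It assumes ipmitool is installed and the BMCs are reachable on the management network; the host naming scheme and credentials are placeholders.

    # Hedged sketch: query the power state of each node's BMC over IPMI from the master.
    # Assumes ipmitool is installed and the BMCs sit on the management network;
    # the host naming scheme and credentials below are placeholders.
    import subprocess

    BMC_HOSTS = [f"node{i:02d}-bmc" for i in range(1, 13)]   # hypothetical node names

    for host in BMC_HOSTS:
        result = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", host,
             "-U", "admin", "-P", "secret", "power", "status"],
            capture_output=True, text=True,
        )
        print(f"{host}: {result.stdout.strip() or result.stderr.strip()}")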

All cluster nodes of an HPC cluster are equipped with the same processor type from the selected manufacturer; different manufacturers and processor types are usually not combined in one HPC cluster. A mixed configuration of main memory and other resources is possible, but it should be taken into account when configuring the job control software.

HPC Clusters

Where should HPC be used?

HPC clusters are most effective when used for computations that can be subdivided into smaller subtasks. An HPC cluster can also handle a number of smaller tasks in parallel, and it can make a single application available to several users at the same time, saving cost and time through simultaneous work.

Depending on your budget and requirements, HAPPYWARE will be happy to build an HPC cluster for you – with variously configured cluster nodes, a high-speed network, and a parallel file system. For HPC cluster management, we rely on the well-known OpenHPC solution to ensure effective and intuitive cluster administration.
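
OpenHPC typically provides a job control system such as Slurm (listed further below). As a hedged illustration of how work is handed to the cluster nodes, the sketch below writes a minimal Slurm batch script and submits it with sbatch; the resource counts and the application name are placeholders.

    # Hedged sketch: submit a small MPI job through Slurm, a job control system
    # commonly deployed with OpenHPC. Node/task counts and the application name
    # are placeholders for illustration only.
    import subprocess
    from pathlib import Path

    lines = [
        "#!/bin/bash",
        "#SBATCH --job-name=demo",
        "#SBATCH --nodes=2",
        "#SBATCH --ntasks-per-node=16",
        "#SBATCH --time=00:10:00",
        "srun ./my_app   # hypothetical MPI application",
    ]
    Path("demo.sbatch").write_text("\n".join(lines) + "\n")

    # sbatch prints the job id on success, e.g. "Submitted batch job 4242"
    result = subprocess.run(["sbatch", "demo.sbatch"], capture_output=True, text=True)
    print(result.stdout.strip() or result.stderr.strip())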

If you would like to learn more about possible application scenarios for HPC clusters, our IT and HPC Cluster expert Jürgen Kabelitz will be happy to help you. He is the head of our cluster department and is available to answer your questions on +49 4181 23577 79.

HPC Cluster Solutions from HAPPYWARE

Below we have compiled a number of possible HPC cluster configurations for you:

Frontend or Master Server

  • 4U cluster server with 24 3.5'' drive bays
  • SSD for the operating system
  • Dual Port 10 Gb/s network adaptor
  • FDR Infiniband adaptor

Cluster nodes

  • 12 cluster nodes with dual CPU and 64 GB RAM
  • 12 cluster nodes with dual CPU and 256 GB RAM
  • 6 GPU computing systems, each with 4 Tesla V100 SXM2 and 512 GB RAM
  • FDR Infiniband and 10 Gb/s TCP network

High-performance storage

  • 1 storage system with 32 x NF1 SSD with 16 TB capacity each
  • 2 storage systems with 45 hot-swap bays each
  • Network connection: 10 Gb/s TCP/IP and FDR Infiniband

HPC Cluster Management with OpenHPC and xCAT
Managing HPC clusters and their data requires powerful software. To meet this need, we offer two proven solutions: OpenHPC and xCAT.

HPC Cluster with OpenHPC
OpenHPC enables basic HPC cluster management based on Linux and open-source software.

Scope of services

  • Forwarding of system logs possible
  • Nagios & Ganglia monitoring - open-source solutions for infrastructure monitoring and scalable system monitoring of HPC clusters and grids
  • ClusterShell - event-based Python library for parallel execution of commands on the cluster (see the sketch after this list)
  • Genders - static cluster configuration database
  • ConMan - Serial Console Management
  • NHC - node health check
  • Developer software including EasyBuild, hwloc, Spack, and Valgrind
  • Compilers such as the GNU Compiler Collection (GCC) and LLVM
  • MPI Stacks
  • Job control system such as PBS Professional or Slurm
  • Infiniband support & Omni-Path support for x86_64 architectures
  • BeeGFS support for mounting BeeGFS file systems
  • Lustre Client support for mounting Lustre file systems
  • GEOPM Global Extensible Power Manager
  • Support for Intel Parallel Studio XE software
  • Support for locally installed software via the Modules system
  • Support for nodes with stateful or stateless configuration
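
As a hedged illustration of the ClusterShell library mentioned in the list above, the sketch below runs one command on a group of nodes in parallel and groups identical output; the node range is a placeholder.

    # Hedged sketch: run a command in parallel on a set of cluster nodes with
    # ClusterShell's Python API. The node range "node[01-12]" is a placeholder.
    from ClusterShell.Task import task_self
    from ClusterShell.NodeSet import NodeSet

    task = task_self()
    task.run("uname -r", nodes="node[01-12]")

    # Nodes that returned identical output are grouped and reported once.
    for buf, nodes in task.iter_buffers():
        print(NodeSet.fromlist(nodes), "->", buf)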

Supported operating systems

  • CentOS 7.5
  • SUSE Linux Enterprise Server 12 SP3

Supported hardware architectures

  • x86_64
  • aarch64

HPC Cluster with xCAT
xCAT, the "Extreme Cloud Administration Toolkit", enables comprehensive HPC cluster management.

Suitable for the following applications

  • Clouds
  • Clusters
  • High-performance clusters
  • Grids
  • Data centres
  • Render farms
  • Online gaming infrastructure
  • Any other conceivable system configuration

Scope of services

  • Detection of servers in the network
  • Remote system management
  • Provisioning of operating systems on physical or virtual servers
  • Diskful (stateful) or diskless (stateless) installation
  • Installation and configuration of user software
  • Parallel system management
  • Cloud integration

Supported operating systems

  • RHEL
  • SLES
  • Ubuntu
  • Debian
  • CentOS
  • Fedora
  • Scientific Linux
  • Oracle Linux
  • Windows
  • ESXi
  • and many others

Supported hardware architectures

  • IBM Power
  • IBM Power LE
  • x86_64

Supported virtualisation infrastructure

  • IBM PowerKVM
  • IBM zVM
  • ESXi
  • Xen

Performance values for potential processors
The performance values are taken from SPEC.org; only the results for SPECrate 2017 Integer and SPECrate 2017 Floating Point are compared:

Manufacturer | Model     | Processor        | Clock rate | # CPUs | # Cores | # Threads | Base Integer | Peak Integer | Base Floating Point | Peak Floating Point
Gigabyte     | R181-791  | AMD EPYC 7601    | 2.2 GHz    | 2      | 64      | 128       | 281          | 309          | 265                 | 275
Supermicro   | 6029U-TR4 | Xeon Silver 4110 | 2.1 GHz    | 2      | 16      | 32        | 74.1         | 78.8         | 87.2                | 84.8
Supermicro   | 6029U-TR4 | Xeon Gold 5120   | 2.2 GHz    | 2      | 28      | 56        | 146          | 137          | 143                 | 140
Supermicro   | 6029U-TR4 | Xeon Gold 6140   | 2.3 GHz    | 2      | 36      | 72        | 203          | 192          | 186                 | 183
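
Because the systems above differ widely in core count, the small sketch below normalises the Base Integer results from the table to a per-core value; this is only a rough comparison aid, not an official SPEC metric.

    # Normalise the SPECrate 2017 Integer (base) results from the table above
    # to a per-core value; a rough comparison aid, not an official SPEC metric.
    systems = [
        ("AMD EPYC 7601",    64, 281.0),
        ("Xeon Silver 4110", 16,  74.1),
        ("Xeon Gold 5120",   28, 146.0),
        ("Xeon Gold 6140",   36, 203.0),
    ]

    for cpu, cores, base_int in systems:
        print(f"{cpu:<17} {base_int / cores:5.2f} SPECrate Integer (base) per core")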

NVIDIA Tesla V100 SXM2: 7.8 TFLOPS (FP64), 15.7 TFLOPS (FP32), and 125 TFLOPS for tensor operations.

HPC Clusters and more from HAPPYWARE - your partner for powerful cluster solutions

HAPPYWARE is your specialist for individually configured, high-performance cluster solutions – whether it's GPU clusters, HPC clusters, or other setups. We would be happy to build a system that meets your company's needs at a competitive price.

We are able to offer special discounts for scientific and educational organisations. If you would like to know more about our discounts, please contact us.

If you would like to know more about the hardware in our HPC clusters, or if you need an individually designed cluster solution, please contact our cluster specialist Jürgen Kabelitz on +49 4181 23577 79. We will be happy to help.