HPC Clusters - High-Performance Computing Clusters for Demanding Workloads
High-Performance Computing cluster systems, or HPC clusters for short, are typically found in technical and scientific environments such as universities, research institutions, and corporate research departments. The tasks performed there, such as weather forecasting or financial projections, are often broken down into smaller subtasks in the HPC cluster and then distributed to the cluster nodes.
The cluster's internal network connection is important because the cluster nodes exchange many small pieces of information, which must be transported from one cluster node to another as quickly as possible. This means the latency of the network must be kept to a minimum. Another important aspect is that the required and generated data can be read or written by all cluster nodes of an HPC cluster at the same time.
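To make the latency point concrete, here is a minimal Python sketch that times small-message round trips over TCP. It is illustrative only: a local echo server stands in for a remote cluster node, and in a real cluster you would measure against another machine over the interconnect.

```python
import socket
import threading
import time

def echo_server(sock: socket.socket) -> None:
    """Accept one connection and echo every message back."""
    conn, _ = sock.accept()
    with conn:
        while data := conn.recv(64):
            conn.sendall(data)

# A local echo server stands in for a remote cluster node.
server = socket.socket()
server.bind(("127.0.0.1", 0))          # any free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # send small messages immediately

# Time the round trip of many small messages, the pattern an
# HPC latency benchmark uses between two cluster nodes.
n = 1000
start = time.perf_counter()
for _ in range(n):
    client.sendall(b"x" * 8)           # small 8-byte payload
    client.recv(64)
elapsed = time.perf_counter() - start
print(f"mean round-trip latency: {elapsed / n * 1e6:.1f} us")
client.close()
```

On loopback this reports microseconds; over a cluster interconnect the same measurement is what distinguishes a low-latency fabric from ordinary Ethernet.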
High Performance Computers
Configure and buy high-performance computers for HPC, e.g. for parallel computing with many cores per CPU!
High-performance servers or HPC workstations, individually configured for your own compute cluster.
High Performance Storage
Buy HPC storage for your critical data, e.g. as all-flash storage with NVMe SSDs.
Storage for stable and scalable file systems such as Lustre, GlusterFS, or Red Hat GFS. Our HPC engineers are happy to help you.
High Performance Networks
We offer high-performance network equipment, e.g. InfiniBand and high-speed Ethernet.
Buy High Performance Networking (HPN) Equipment online or let our HPC Engineers advise you!
Cluster Computing
Buy server clusters or cluster nodes for your high-performance cluster, consisting of server hardware and cluster software.
We deliver turnkey cluster solutions including management software, complete failover clusters, or just the hardware, e.g. for your Hadoop cluster.
GPU Computing
High performance GPU solutions
We offer GPU server systems, GPU workstations and HPC clusters with NVIDIA Tesla GPUs.
Here you'll find our HPC (High Performance Computing) systems:
- 2U Rackmount Server, up to 60 Cores
- Single Socket E, 4th Gen Intel Xeon Scalable CPU
- 16x DIMM slots, up to 4TB RAM DDR5-4800MHz
- 12x 3.5/2.5 Inch hot-swap drive bays
- 1x 1GbE RJ45 LAN port
- 3x PCI-E 5.0 x16 Expansion slots, 2x OCP 3.0
- 2x 2400W redundant power supplies (Platinum Level)
special highlight
Dust-protected fanless GPU workstation
- Fanless GPU Workstation, 16 Cores/24 Threads
- Single Socket V (LGA-1700), Intel Core i7 13700T Processor
- 32GB RAM
- 3TB Storage (1TB for SSD 1, 2TB for SSD 2)
- 2x RJ45 (1x 1G & 1x 2.5G) LAN
- 2x HDMI, 2x Displayport, 4x USB 3.2 gen2 + 2x USB 3.0, 2x Audio
- fanless RTX A2000
special highlight
Up to 8TB RAM
- 2U Rackmount Server, up to 280W cTDP
- Dual Socket SP3, AMD EPYC 7003 Series Processors
- 32x DIMM slots, up to 8TB RAM DDR4-3200MHz
- 26x 2.5 Inch hot-swap drive bays
- 2x 10GbE LAN ports and 1x IPMI
- 4x PCI-E Gen4 x16 slots and 1x PCI-E 4.0 x8 slot
- 2x 1600W Redundant Power Supplies (Platinum Level)
special highlight
1x Ampere Altra M128-30 included
- 2U Rackmount Server, up to 128 Cores
- Single Ampere Altra Max CPU, 128 Arm v8.2+ 64-bit cores @ 3.0 GHz
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 4x 2.5 Inch NVMe hot-swap drive bays
- 5x PCI-E 4.0 x16 slots, 1x PCI-E Gen4 AIOM
- 1x Ultra-Fast M.2
- 2x 1600W redundant Power supplies (Titanium Level)
special highlight
2x Ampere Altra M128-30 included
- 2U Rackmount Server, up to 128 Cores per CPU
- Dual Ampere Altra Max CPU, 128 Arm v8.2+ 64-bit cores @ 3.0 GHz
- 32x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 28x 2.5 Inch hot-swap drive bays
- 4x PCI-E 4.0 x16 slots, 2x OCP 3.0 Gen4 x16
- 1x Ultra-Fast M.2 slot
- 2x 1600W Power Supplies (Platinum Level)
- 1U Rackmount Server, up to 280W TDP
- Single Socket sTRX4, 3rd Gen AMD Ryzen Threadripper Processor
- 8x DIMM slots, up to 256GB RAM DDR4-3200MHz
- 2x 2.5 Inch hot-swap drive bays
- 2x 10GbE RJ45 LAN ports
- 2x PCI-E 4.0 x16 Expansion slots for GPU cards, 1x M.2
- 2x 1600W redundant power supplies (Platinum Level)
- 4U Rackmount Server, up to 300W cTDP
- Dual Socket SP5, AMD EPYC™ 9004 Series CPU
- 48x DIMM slots, up to 12TB RAM DDR5-4800MHz
- 12x 2.5 Inch hot-swap drive bays
- 2x 1GbE RJ45 LAN ports
- 18x PCI-E Gen5 Expansion slots (8x FHFL GPUs, 10x LP)
- 4x 3000W redundant power supplies (Titanium Level)
- 2U Rackmount Server, up to 128 Cores
- Single Socket LGA-4926, Ampere Altra Processor
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 8x 2.5 Inch U.2 hot-swap drive bays
- 4x PCI-E Gen4 Expansion slots and 1x OCP 2.0
- 2x 1GbE LAN ports via Intel® I350-AM2
- 2x 1300W redundant power supplies (Platinum Level)
- 2U Rackmount Server, up to 225W TDP
- Dual Socket SP3, AMD EPYC 7003 CPU Series
- 32x DIMM slots, up to 8TB RAM DDR4-3200MHz
- 12x 3.5 and 2x 2.5 hot-swap SAS/SATA drive bays
- 8x PCI-E Gen4 x16/x8 and 2x OCP Mezzanine slots
- 2x 1Gb/s LAN ports via Intel® I350-AM2
- 2x 1200W redundant power supplies (Platinum Level)
- 1U Rackmount Server, up to 270W TDP
- Single Socket P+, 3rd Gen Intel Xeon Scalable Processor
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 2x 2.5 Inch hot-swap drive bays
- 3x PCI-E expansion slots (1x FHFL, 2x LP), 1x OCP 3.0
- 3x Ultra-Fast M.2 with PCI-E Gen4/3 x4 Bandwidth
- 2x 800W redundant power supplies (Platinum Level)
- 2U Rackmount Server, up to 270W cTDP
- Dual Socket P+, 3rd Gen Intel Xeon Scalable Processors
- 32x DIMM slots, up to 8TB RAM DDR4-3200MHz
- 12x 3.5 SATA/SAS/NVMe hot-swap drive bays
- 8x PCI-E 4.0 x16 Expansion slots and 2x OCP
- 2x 1GbE LAN ports via Intel I350-AM2
- 2x 1600W redundant power supplies (Platinum Level)
- 2U Rackmount Server, up to 80 Cores
- Single Ampere Altra CPU, 80 Arm v8.2+ 64-bit cores @ 3.0 GHz
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 4x 3.5 SATA/SAS hot-swap drive bays
- 4x PCI-E 4.0 x16 for GPU cards, 3x PCI-E 4.0 x8 LP slots
- 2x Ultra-Fast M.2 with PCI-E 4.0 x4 interface
- 2x 1600W redundant Power Supplies (Platinum Level)
- 4U Rackmount Server, up to 300W cTDP
- Dual Socket SP5, AMD EPYC 9004 Series CPU
- 48x DIMM slots, up to 12TB RAM DDR5-4800MHz
- 12x 2.5 Inch hot-swap drive bays
- 2x 1GbE RJ45 LAN ports
- 12x PCI-E Gen5 Expansion slots (10x FHFL GPUs, 2x LP)
- 4x 3000W redundant power supplies (Titanium Level)
- 2U Rackmount Server, up to 270W cTDP
- Dual Socket P+, 3rd Gen Intel Xeon Scalable Processors
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 8x 3.5 Inch hot-swap drive bays
- up to 10Gbit, 2x RJ45 LAN ports
- 6x PCI-E low-profile expansion slots
- 2x 1000W Redundant Power Supplies (Titanium Level)
- 1U Rackmount Server, up to 95W TDP
- Single Socket H5, Intel Xeon E-2300/10th Gen Pentium CPU
- 4x DIMM slots, up to 128GB RAM DDR4-3200MHz
- 4x 3.5 SATA drive bays and 2x 2.5 peripheral
- 1x PCI-E 4.0 x16, 1x PCI-E 4.0 x8 & 1x Internal HBA slots
- 2x 1GbE RJ45 Intel® Ethernet Controller i210
- 2x 400W redundant power supplies (Platinum Level)
- 2U Rackmount Server, up to 270W cTDP
- Dual Socket P+, 3rd Gen Intel Xeon Scalable CPU
- 32x DIMM slots, up to 8TB RAM DDR4-3200MHz
- 26x 2.5 hot-swap drive bays
- 8x PCI-E Gen4 x16/x8 Expansion slots and 2x OCP
- Supports Dual ROM technology, Intel C621A Express Chipset
- 2x 1600W redundant power supplies (Platinum Level)
special highlight
up to 1024 cores on only 2U
- 2U Rackmount Server, up to 128 Cores per CPU
- Dual Ampere Altra Max CPU, 4 Nodes
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 6x 2.5 SATA hot-swap drive bays
- 2x PCI-E 4.0 x16 Expansion slots, 1x OCP 3.0
- 1x M.2 PCI-E 4.0 x4 slot
- 2x 2200W Redundant Power Supplies (Platinum Level)
- 2U Rackmount Server, up to 225W cTDP
- Dual Socket E, 5th/4th Gen Intel Xeon Scalable CPU
- 24x DIMM slots, up to 6TB RAM DDR5-5600MHz
- 8x 2.5 Inch hot-swap drive bays
- 2x 10GbE RJ45 LAN ports
- 10x PCI-E Gen5 Expansion slots (8x FHFL GPUs & 2x LP)
- 2x 3000W redundant power supplies (Titanium Level)
- 4U Rackmount Server, up to 385W cTDP
- Dual Socket E, 5th/4th Gen Intel Xeon Scalable Processors
- 32x DIMM slots, up to 8TB RAM DDR5-4400MHz
- 16x 2.5 hot-swap drive bays
- 13x PCI-E 5.0 x16 Expansion slots
- 2x 10GbE RJ45 LAN ports
- 4x 2700W redundant power supplies (Titanium Level)
- Mid-Tower Workstation, up to 235W cTDP
- Single Socket V, 14th Gen Intel Core-i Processors
- 4x DIMM slots, up to 128GB RAM DDR5-4400MHz
- 6x 3.5 fixed SATA Drive bays
- 2x PCIe 5.0 x8 + 2x PCIe 3.0 x1 Expansion slots
- 1x 1GbE + 1x 10GbE RJ45 LAN ports
- 1x 750W Single Power Supply (Gold Level)
special highlight
8 GPUs in 2U
- 2U Rackmount Server, up to 270W cTDP
- Dual Socket P+, 3rd Gen Intel Xeon Scalable Processors
- 24x DIMM slots, up to 6TB RAM DDR4-3200MHz
- 8x 2.5 Inch hot-swap drive bays
- 8x PCI-E 4.0 x16 Expansion slots for GPU and 2x LP
- 2x 10Gb/s Base-T LAN ports
- 2x 3200W redundant power supplies (Platinum Level)
- 2U Rackmount Server, up to 300W cTDP
- Single Socket SP5, AMD EPYC 9004 Series CPU
- 12x DIMM slots, up to 3TB RAM DDR5-4800MHz
- 8x 2.5 Inch hot-swap drive bays
- 2x 10GbE RJ45 LAN ports
- 10x PCI-E Gen4/Gen5 Expansion Slots (8x FHFL GPUs & 2x LP)
- 2x 3000W redundant power supplies (Titanium Level)
special highlight
with NVIDIA HGX A100 8-GPU
- 4U Rackmount Server, up to 270W CPU TDP
- Dual Socket P+, 3rd Gen Intel Xeon Scalable Processors
- 32x DIMM slots, up to 12TB RAM DDR4-3200MHz
- 6x 2.5 Inch NVMe/SATA/SAS hot-swap drive bays
- 10x PCI-E Gen 4.0 X16 LP slots
- 4x Heavy duty fans with optimal fan speed control
- 4x 3000W Redundant Titanium Level Power Supplies
- 2U Rackmount Server, up to 240W cTDP
- Dual Socket SP3, AMD EPYC 7003 Series processors
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 8x 2.5 hot-swap drive bays
- 2x 10GbE RJ45 LAN ports
- 10x PCI-E 4.0 x16 Expansion slots (8x GPUs & 2x LP)
- 2x 2200W redundant power supplies (Platinum Level)
Do you need help?
Simply call us or use our inquiry form.
Structure of a High-Performance Computing Cluster
- The cluster server, called the master or frontend, manages access and provides programs and the home data area.
- The cluster nodes do the computing.
- A TCP network is used to exchange information in the HPC cluster.
- A high-performance network is required to enable data transmissions with very low latency.
- The High Performance Storage (Parallel File System) enables the simultaneous write access of all cluster nodes.
- The BMC Interface (IPMI Interface) is the access point for the administrator to manage the hardware.
All cluster nodes of an HPC cluster are normally equipped with the same processor type from the selected manufacturer; different manufacturers and types are usually not combined in one HPC cluster. A mixed configuration of main memory and other resources is possible, however, and should be taken into account when configuring the job control software.
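To sketch why mixed resources matter to the job control software: the scheduler must match each job's memory requirement against what a node actually has. A minimal, hypothetical illustration (node names and memory sizes are invented; real systems such as Slurm do this with a full resource database):

```python
# Hypothetical node inventory: same CPU type, mixed main memory (in GB).
nodes = {
    "node01": {"ram_gb": 64},
    "node02": {"ram_gb": 64},
    "node03": {"ram_gb": 256},
    "node04": {"ram_gb": 256},
}

def eligible_nodes(mem_required_gb: int) -> list[str]:
    """Return the nodes that satisfy a job's memory requirement,
    as a job control system does internally when placing jobs."""
    return sorted(name for name, spec in nodes.items()
                  if spec["ram_gb"] >= mem_required_gb)

print(eligible_nodes(128))  # only the large-memory nodes qualify
```

A job asking for 128 GB can only be placed on the 256 GB nodes, which is exactly the kind of constraint the job control software must be told about in a mixed configuration.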
Where should HPC be used?
HPC clusters are most effective when used for computations that can be subdivided into separate subtasks. An HPC cluster can also handle a number of smaller tasks in parallel, and it can make a single application available to several users at the same time, saving cost and time through simultaneous work.
Depending on your budget and requirements, HAPPYWARE will be happy to build an HPC cluster for you – with various configured cluster nodes, a high speed network, and a parallel file system. For the HPC cluster management, we rely on the well-known OpenHPC solution to ensure effective and intuitive cluster management.
If you would like to learn more about possible application scenarios for HPC clusters, our IT and HPC Cluster expert Jürgen Kabelitz will be happy to help you. He is the head of our cluster department and is available to answer your questions on +49 4181 23577 79.
HPC Cluster Solutions from HAPPYWARE
Below we have compiled a number of possible HPC cluster configurations for you:
Frontend or Master Server
- 4U cluster server with 24 3.5'' drive bays
- SSD for the operating system
- Dual Port 10 Gb/s network adaptor
- FDR Infiniband adaptor
Cluster nodes
- 12 cluster nodes with dual CPU and 64 GB RAM
- 12 cluster nodes with dual CPU and 256 GB RAM
- 6 GPU computing systems, each with 4 Tesla V100 SXM2 and 512 GB RAM
- FDR Infiniband and 10 Gb/s TCP network
High-performance storage
- 1 storage system with 32 x NF1 SSD with 16 TB capacity each
- 2 storage systems with 45 hot-swap bays each
- Network connection: 10 Gb/s TCP/IP and FDR Infiniband
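For orientation, the raw capacity of the first storage system above is straightforward to compute (raw capacity only, before any RAID or file-system overhead):

```python
# Storage system 1: 32x NF1 SSDs with 16 TB capacity each.
nf1_count = 32
nf1_tb = 16
raw_tb = nf1_count * nf1_tb   # raw capacity before RAID/FS overhead
print(f"{raw_tb} TB raw")     # 512 TB raw
```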
HPC Cluster Management - with OpenHPC and xCAT
Managing HPC clusters and their data requires powerful software. To meet this need, we offer two proven solutions: OpenHPC and xCAT.
HPC Cluster with OpenHPC
OpenHPC enables basic HPC cluster management based on Linux and OpenSource software.
Scope of services
- Forwarding of system logs
- Nagios Monitoring & Ganglia Monitoring - Open source solution for infrastructures and scalable system monitoring for HPC clusters and grids
- ClusterShell event-based Python library for parallel execution of commands on the cluster
- Genders - static cluster configuration database
- ConMan - Serial Console Management
- NHC - node health check
- Developer software including EasyBuild, hwloc, Spack, and Valgrind
- Compilers such as GNU Compiler, LLVM Compiler
- MPI Stacks
- Job control system such as PBS Professional or Slurm
- Infiniband support & Omni-Path support for x86_64 architectures
- BeeGFS support for mounting BeeGFS file systems
- Lustre Client support for mounting Lustre file systems
- GEOPM Global Extensible Power Manager
- Support of INTEL Parallel Studio XE Software
- Support of local software with the Modules software
- Support of nodes with stateful or stateless configuration
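Tools in this stack, ClusterShell in particular, address groups of nodes with a compact range syntax like node[01-12]. A pure-Python sketch of that expansion (the real ClusterShell NodeSet class is far more capable, handling unions, steps, and multi-dimensional sets):

```python
import re

def expand_nodeset(pattern: str) -> list[str]:
    """Expand a simple 'prefix[start-end]' node set, e.g. 'node[01-03]'.
    Sketch only: real nodeset syntax supports much more than this."""
    m = re.fullmatch(r"(\w+)\[(\d+)-(\d+)\]", pattern)
    if not m:
        return [pattern]                    # plain single host name
    prefix, start, end = m.group(1), m.group(2), m.group(3)
    width = len(start)                      # preserve zero padding
    return [f"{prefix}{i:0{width}d}"
            for i in range(int(start), int(end) + 1)]

print(expand_nodeset("node[01-03]"))  # ['node01', 'node02', 'node03']
```

This is the notation you will see throughout OpenHPC configuration files and parallel-shell commands when targeting sets of cluster nodes.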
Supported operating systems
- CentOS 7.5
- SUSE Linux Enterprise Server 12 SP3
Supported hardware architectures
- x86_64
- aarch64
HPC Cluster with xCAT
xCAT, the "Extreme Cloud Administration Toolkit", enables comprehensive HPC cluster management.
Suitable for the following applications
- Clouds
- Clusters
- High-performance clusters
- Grids
- Data Centre
- Renderfarms
- Online Gaming Infrastructure
- Any other conceivable system configuration
Scope of services
- Detection of servers in the network
- Running remote system management
- Provisioning of operating systems on physical or virtual servers
- Diskful (stateful) or diskless (stateless) installation
- Installation and configuration of user software
- Parallel system management
- Cloud integration
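Parallel system management, the last point above, boils down to running the same command on many machines at once. A minimal sketch of that pattern with a thread pool; to keep the example self-contained it echoes locally, where a real tool would invoke ssh or talk to the BMC (the host names are invented):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_on(host: str, command: str) -> str:
    """Run a command 'on' a host. Sketch only: we echo locally so the
    example is runnable; real tools use ["ssh", host, command]."""
    result = subprocess.run(
        ["echo", f"{host}: {command}"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

hosts = [f"node{i:02d}" for i in range(1, 5)]   # hypothetical node names
with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
    outputs = list(pool.map(lambda h: run_on(h, "uptime"), hosts))
for line in outputs:
    print(line)
```

Fanning the command out concurrently rather than serially is what makes managing hundreds of nodes practical.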
Supported operating systems
- RHEL
- SLES
- Ubuntu
- Debian
- CentOS
- Fedora
- Scientific Linux
- Oracle Linux
- Windows
- ESXi
- and many others
Supported hardware architectures
- IBM Power
- IBM Power LE
- x86_64
Supported virtualisation infrastructure
- IBM PowerKVM
- IBM zVM
- ESXi
- XEN
Performance values for potential processors
The performance values are those published by SPEC.org; only the SPECrate 2017 Integer and SPECrate 2017 Floating Point results are compared:
Manufacturer | Model | Processor | Clock rate | # CPUs | # Cores | # Threads | Base Integer | Peak Integer | Base Floating Point | Peak Floating Point |
---|---|---|---|---|---|---|---|---|---|---|
Gigabyte | R181-791 | AMD EPYC 7601 | 2.2 GHz | 2 | 64 | 128 | 281 | 309 | 265 | 275 |
Supermicro | 6029U-TR4 | Xeon Silver 4110 | 2.1 GHz | 2 | 16 | 32 | 74.1 | 78.8 | 87.2 | 84.8 |
Supermicro | 6029U-TR4 | Xeon Gold 5120 | 2.2 GHz | 2 | 28 | 56 | 146 | 137 | 143 | 140 |
Supermicro | 6029U-TR4 | Xeon Gold 6140 | 2.3 GHz | 2 | 36 | 72 | 203 | 192 | 186 | 183 |
NVIDIA Tesla V100 SXM2: 7.8 TFLOPS (FP64), 15.7 TFLOPS (FP32), 125 TFLOPS for tensor operations.
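The SPECrate figures above are throughput numbers across all cores, so dividing by the core count makes the per-core comparison explicit. A small sketch using the Base Integer column from the table:

```python
# (model, SPECrate 2017 Integer base, total cores) taken from the table above
results = [
    ("AMD EPYC 7601",    281,  64),
    ("Xeon Silver 4110", 74.1, 16),
    ("Xeon Gold 5120",   146,  28),
    ("Xeon Gold 6140",   203,  36),
]
for model, base_int, cores in results:
    # Per-core throughput: total SPECrate divided by total core count.
    print(f"{model:18s} {base_int / cores:.2f} base integer per core")
```

This shows why raw SPECrate alone can mislead: the EPYC 7601 leads in total throughput thanks to its 64 cores, while the Xeon Gold parts deliver more throughput per core.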
HPC Clusters and more from HAPPYWARE - Your answer for powerful cluster solutions
HAPPYWARE is your specialist for individually configured, high-performance cluster solutions – whether it's GPU clusters, HPC clusters, or other setups. We will be happy to build a system that meets your company's needs at a competitive price.
We are able to offer special discounts for scientific and educational organisations. If you would like to know more about our discounts, please contact us.
If you would like to know more about the equipment of our HPC clusters or you need an individually designed cluster solution, please contact our cluster specialist Jürgen Kabelitz on +49 4181 23577 79. We will be happy to help.