- 1U Rackmount Server, up to 280W TDP
- Single Socket sTRX4, 3rd Gen AMD Ryzen Threadripper Processor
- 8x DIMM slots, up to 256GB RAM DDR4-3200MHz
- 2x 2.5 Inch hot-swap drive bays
- 2x 10GbE RJ45 LAN ports
- 2x PCI-E 4.0 x16 Expansion slots for GPU cards, 1x M.2
- 2x 1600W redundant power supplies (Platinum Level)
special highlight
up to 1024 cores in only 2U
- 2U Rackmount Server, up to 128 Cores
- Dual Ampere Altra Max CPU, 4 Nodes
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 6x 2.5 SATA hot-swap drive bays
- 2x PCI-E 4.0 x16 Expansion slots, 1x OCP 3.0
- 1x M.2 PCI-E 4.0 x4 slot
- 2x 2200W Redundant Power Supplies (Platinum Level)
- 2U Rack Server, up to 270W CPU TDP
- Dual Socket P+, Intel Xeon Scalable processor 3rd Gen
- 20x DIMM slots (16 DRAM + 4 PMem), up to 6TB RAM DDR4-3200MHz
- 12x 2.5 hot-swap NVMe/SATA Bays (12x 2.5 NVMe hybrid)
- 1x RJ45 Dedicated BMC LAN port
- 1x PCI-E 4.0 x16 and 2 PCI-E 4.0 x8 slots (LP)
- 2x 2200W Redundant Power supplies (Titanium Level)
special highlight
24x 3.5" drive bays
- 2U Rackmount Server, up to 205W cTDP
- Dual Socket P+, 3rd Gen Intel Xeon Scalable Processors
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 24x 3.5 SATA3/SAS3 hot-swap drive bays
- 3x PCI-E 4.0 x16 Expansion slots (2x LP & 1x AIOM)
- 5x heavy duty 8cm fans
- 2x 1600W redundant power supplies (Titanium Level)
- 1U Rack Server, up to 185W TDP
- Dual Socket P+, Intel Xeon Scalable 3rd Gen.
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 4x 2.5 Hot-swap SATA Drive bays
- 2x PCI-E 4.0 x16 LP slots
- 6x heavy-duty fans
- 1000W Redundant Power supplies (Titanium Level)
special highlight
1U 10-Bay Gen4 NVMe
- 1U Rackmount Server, up to 270W cTDP
- Dual Socket P+, 3rd Gen Intel Xeon Scalable Processors
- 32x DIMM slots, up to 8TB RAM DDR4-3200MHz
- 10x 2.5 hot-swap SATA/SAS/NVMe drive bays
- 2x PCI-E 4.0 x16 Expansion slots and 2x OCP Mezzanine slots
- 2x 1Gb/s LAN ports
- 2x 1300W Redundant Power Supplies (Platinum Level)
- 2U Rack Server, up to 270W TDP
- Dual Socket P+, 3rd Gen Intel® Xeon® Scalable processors
- 20x DIMM slots (16 DRAM + 4 PMem), up to 6TB RAM DDR4-3200MHz
- 12x 2.5 hot-swap NVMe/SATA/SAS Drive bays (12x 2.5 NVMe hybrid)
- 1x PCI-E 4.0 x16 and 1x PCI-E 4.0 x8 (LP) slots
- Network connectivity via AIOM (OCP 3.0 compliant)
- 2x 2200W redundant Power supplies (Titanium Level)
- 2U Rackmount Server, up to 280W cTDP
- Single Socket SP3, AMD EPYC 7003 Series Processor
- 8x DIMM slots, up to 2TB RAM DDR4-3200MHz
- 6x 2.5 SATA & 2x 2.5 NVMe/SATA hot-swap drive bays
- 8x PCI-E Gen3 slots for GPU, 2x PCI-E Gen4 LP slots
- 2x 10Gb/s SFP+ LAN ports
- 2x 2200W redundant Power supplies (Platinum Level)
- 2U Rackmount Server, up to 140W TDP
- Dual Socket P, 2nd Gen Intel Xeon Scalable Processors
- 4x hot-pluggable Nodes
- 8x DIMM slots, up to 2TB RAM DDR4-2933MHz ECC
- 3x 3.5 hot-swap drive bays, 2x SATA DOM
- 1x PCI-E 3.0 x16 slot
- 2x 1600W redundant power supplies (Titanium Level 96%)
special highlight
16 DIMMs, up to 4TB RAM
- 1U Rackmount Server, up to 280W TDP
- Single Socket SP3, AMD EPYC 7003 Series Processors
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 10x 2.5 Inch hot-swap SATA3/SAS3/NVMe drive bays
- 2x PCI-E 4.0 x16 slots and 2x PCI-E 4.0 x16 AIOM
- Networking provided via AIOM
- 860W redundant power supplies (Platinum Level)
- 2U Rackmount Server, up to 240W TDP
- Dual Socket SP3, AMD EPYC 7003 Series Processors
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 12x 3.5 Inch hot-swap SATA/NVMe drive bays
- 3x PCI-E 4.0 x16 slots and 3x PCI-E 4.0 x8 slots
- 3x heavy duty cooling fans, 1x Air shroud
- 920W redundant power supply (Platinum Level)
- 2U Rackmount Server, up to 140W TDP
- 2x Hot-pluggable nodes
- Dual Socket P, 2nd Gen Intel Xeon Scalable Processors
- 8x DIMM slots, up to 2TB RAM DDR4-2933MHz ECC
- 6x 3.5 Hot-swap SATA3 drive bays
- 2x PCI-E 3.0 x8 slots
- 2x 1200W redundant power supplies (Titanium Level)
special highlight
14 server blades in 3U
- 3U Rack MicroBlade Enclosure
- Up to 14 Hot-swap server blades
- Up to 2 Hot-swap 10G ethernet switches
- 1 Hot-swap management module optional
- 4x Huge cooling fans
- 2000W Redundant power supplies
- 1U Rack Server, 145W TDP
- Dual Intel Xeon Scalable CPU, 2nd Gen.
- Up to 4TB RAM, DDR4-2933MHz ECC
- Network support via SIOM
- 1000W Redundant power supplies (Titanium Level)
special highlight
Supports GRAID SupremeRAID
- 2U Rackmount Server, up to 240W cTDP
- Dual Socket SP3, AMD EPYC 7003 Series Processor
- 32x DIMM slots, up to 8TB RAM DDR4-3200MHz
- 4x NVMe + 8x SAS/SATA, hot-swap drive bays
- 4x PCI-E 4.0 x16 Expansion slots & 2x OCP
- 3x GPUs & 2x 1Gb/s LAN ports via Intel® I350-AM2
- 2x 2000W redundant power supplies (Platinum Level)
- 4U Rack SuperBlade Enclosure
- Up to 14 blade servers
- Up to 2x 10GbE switches
- 1 Management module
- 4x 2200W Power supplies (Titanium Level)
special highlight
4x M.2 per Node
- 2U Rackmount Server, up to 280W TDP/cTDP
- Single Socket SP3, AMD EPYC 7003 Series Processors
- 8x DIMM slots, up to 2TB RAM DDR4-3200MHz
- 3x 3.5 hot-swap SATA3 drive bays
- 2x PCI-E 4.0 x16 LP slots
- 4x heavy-duty 8cm PWM fans
- 2000W redundant Power Supplies with PMBus (Titanium Level)
special highlight
2 Nodes in 1U
- 1U Rackmount Server, up to 185W cTDP
- Dual Socket P+, 3rd Gen Intel Xeon Scalable Processors
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 2x hot-pluggable Nodes
- 4x 2.5 Inch SATA/SAS hot-swap drive bays
- 1x PCI-E 4.0 x16 LP Expansion-Slot
- 2x 1000W redundant power supplies (Titanium Level)
special highlight
Multi-Node GPU Server
Up to 3 PCI-E GPUs per Node
- 2U Rackmount Server, up to 270W TDP
- Single Socket P+, 3rd Gen Intel Xeon Scalable Processors
- 8x DIMM slots, up to 2TB RAM DDR4-3200MHz
- 2x 2.5 Inch hot-swap NVMe drive bays
- 3x PCI-E 4.0 x16 FHFL DW slots
- 2x M.2 slots and AST2500 BMC
- 2x 2600W redundant power supplies (Titanium Level)
special highlight
Up to 2 add-on cards in 2U
- 2U Rackmount Server, up to 280W TDP
- Single Socket SP3, AMD EPYC 7003 Series Processors
- 8x DIMM slots, up to 2TB RAM DDR4-3200MHz
- 2x 2.5 hot-swap U.2 NVMe drive bays
- 6x PCI-E 4.0 x16 slots and 1x PCI-E AIOM
- Integrated IPMI 2.0 + KVM with dedicated LAN
- 2600W redundant power supplies with PMBus
special highlight
Integrated SAS HBA
- 2U Rackmount Server, up to 205W TDP
- Dual Socket P, 2nd Gen Intel Xeon Scalable Processors
- 24x DIMM slots, up to 6TB RAM DDR4-2933MHz ECC
- 2x hot-pluggable nodes
- 6x 3.5 hot-swap drive bays
- 2x PCI-E 3.0 x8, 1x PCI-E 3.0 x16 & 1x SIOM card support
- 2x 2200W redundant power supplies (Titanium Level)
- 8U Rack SuperBlade Enclosure
- Up to 20 blade servers
- Up to 2x 10GbE switches
- 1 Management module
- 4x 2200W Power supplies (Titanium Level)
- 2U Rackmount Server, up to 165W TDP
- Dual Socket P, 2nd Gen Intel Xeon Scalable Processors
- 16x DIMM slots, up to 4TB RAM DDR4-2933MHz ECC
- 4x hot-pluggable nodes
- 3x 3.5 hot-swap SATA drive bays
- 2x PCI-E 3.0 x16 slots and 1x SIOM card support
- 2x 2200W redundant power supplies (Titanium Level)
- 6U Rackmount SuperBlade Enclosure
- Up to 14/28 Blade Servers
- Up to 2x Ethernet Switch Modules
- 1x Management module
- Up to 8x cooling fans
- Up to 8x 2200W hot-plug Power supplies (Titanium Level)
Do you need help?
Simply call us or use our inquiry form.
Structure of an HPC Cluster
- The cluster server, also called the master or frontend, manages access to the cluster and provides the programs and the home data area.
- The cluster nodes do the computations.
- A TCP network is used to exchange information in the HPC cluster.
- A high-performance network is required to enable data transmission with very low latency.
- The High-Performance Storage (parallel file system) enables simultaneous write access by all cluster nodes.
- The BMC Interface (IPMI Interface) is the access point for the administrator to manage the hardware.
All cluster nodes of an HPC cluster are equipped with the same processor type from a single manufacturer; different manufacturers and processor types are usually not combined in one HPC cluster. A mixed configuration of main memory and other resources is possible, but it must be taken into account when configuring the job control software.
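As a minimal illustration of the last point, the sketch below submits a job with an explicit memory request so that the job control software only places it on nodes with enough RAM. It assumes Slurm (one of the job control systems listed under OpenHPC below) with `sbatch` on the PATH; the script name and memory figure are hypothetical.

```python
# Minimal sketch: submit a job whose memory requirement steers it to suitable
# nodes when the cluster mixes, e.g., 64 GB and 256 GB machines.
# Assumes Slurm as the job control system and `sbatch` available on the PATH.
import subprocess

def submit(command: str, mem_gb: int, nodes: int = 1) -> str:
    """Submit `command` via sbatch, requesting `mem_gb` GB of RAM per node."""
    result = subprocess.run(
        [
            "sbatch",
            f"--nodes={nodes}",
            f"--mem={mem_gb}G",   # the scheduler places the job only on nodes with enough RAM
            "--wrap", command,    # wrap a plain shell command into a batch job
        ],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()  # e.g. "Submitted batch job 4711"

if __name__ == "__main__":
    # Hypothetical application; a 200 GB request will only fit the larger nodes.
    print(submit("./my_simulation --input data.in", mem_gb=200))
```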
When are HPC clusters used?
HPC clusters are most effective when they are used for computations that can be subdivided into separate subtasks. An HPC cluster can also process a large number of smaller tasks in parallel, and it can make a single application available to several users at the same time, saving cost and time through simultaneous work.
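As a minimal illustration of subdividing a computation into subtasks, the sketch below distributes work across all available MPI ranks. It assumes an MPI stack (see the OpenHPC scope of services below) and the mpi4py package are installed; the problem itself, a sum of squares, is only a placeholder.

```python
# Minimal sketch of splitting a computation into subtasks across cluster nodes.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # index of this process
size = comm.Get_size()   # total number of processes across all nodes

# Each rank works only on its own slice of the overall problem.
work_items = range(1_000_000)
my_items = (i for i in work_items if i % size == rank)
partial = sum(i * i for i in my_items)

# Combine the partial results on rank 0.
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum of squares = {total}")
```

Such a script would typically be launched on the cluster nodes via the job control system, for example with `srun python subtasks.py` under Slurm.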
HAPPYWARE will be happy to build an HPC cluster for you with individually configured cluster nodes, a high-speed network, and a parallel file system. For cluster management we rely on the well-known OpenHPC solution, which ensures effective and intuitive administration.
If you would like to learn more about possible application scenarios for HPC clusters, our HPC expert Jürgen Kabelitz will be happy to help you.
HPC Cluster Solutions from HAPPYWARE
Below we have compiled a number of possible HPC cluster configurations for you:
Frontend or Master Server
- 4U Cluster Server with 24 3.5'' drive bays
- SSD for the operating system
- Dual Port 10 Gb/s network adaptor
- FDR Infiniband adaptor
Cluster nodes
- 12 Cluster nodes with dual CPU and 64 GB RAM
- 12 Cluster nodes with dual CPU and 256 GB RAM
- 6 GPU computing systems, each with 4 Tesla V100 SXM2 and 512 GB RAM
- FDR Infiniband and 10 Gb/s TCP network
High-performance storage
- 1 storage system with 32 x NF1 SSD with 16 TB capacity each
- 2 storage systems with 45 hot-swap bays each
- Network connection: 10 Gb/s TCP/IP and FDR Infiniband
HPC Cluster Management - with OpenHPC and xCAT
Managing HPC clusters and their data requires powerful software. To cater for this, we offer two proven solutions: OpenHPC and xCAT.
HPC Cluster with OpenHPC
OpenHPC enables basic HPC cluster management based on Linux and OpenSource software.
Scope of services
- Optional forwarding of system logs
- Nagios and Ganglia monitoring - open-source solutions for infrastructure monitoring and scalable system monitoring of HPC clusters and grids
- ClusterShell - event-based Python library for parallel execution of commands on the cluster (see the sketch after this list)
- Genders - Static cluster configuration database
- ConMan - Serial Console Management
- NHC - Node health check
- Developer software including EasyBuild, hwloc, Spack, and Valgrind
- Compilers such as GNU Compiler, LLVM Compiler
- MPI Stacks
- Job control systems such as PBS Professional or Slurm
- Infiniband support & Omni-Path support for x86_64 architectures
- BeeGFS support for mounting BeeGFS file systems
- Lustre Client support for mounting Lustre file systems
- GEOPM Global Extensible Power Manager
- Support for Intel Parallel Studio XE software
- Support for locally installed software via the Modules system
- Support of nodes with stateful or stateless configuration
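The following is a minimal sketch of parallel command execution with the ClusterShell Python library mentioned above. It assumes ClusterShell is installed on the frontend and the nodes are reachable via SSH; the node names node[01-24] are hypothetical.

```python
# Minimal sketch: run a command on many nodes in parallel with ClusterShell
# and group identical outputs, so 24 nodes typically print only a few lines.
from ClusterShell.Task import task_self
from ClusterShell.NodeSet import NodeSet

task = task_self()
task.run("uptime", nodes="node[01-24]", timeout=10)

for output, nodelist in task.iter_buffers():
    nodes = NodeSet.fromlist(nodelist)   # fold node names, e.g. node[01-04,07]
    print("%s: %s" % (nodes, output))    # output is the aggregated command output
```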
Supported operating systems
- CentOS 7.5
- SUSE Linux Enterprise Server 12 SP3
Supported hardware architectures
- x86_64
- aarch64
HPC Cluster with xCAT
xCAT, the "Extreme Cloud Administration Toolkit", enables comprehensive HPC cluster management.
Suitable for the following applications
- Clouds
- Clusters
- High-performance clusters
- Grids
- Data centres
- Render farms
- Online Gaming Infrastructure
- Any other conceivable system configuration
Scope of services
- Detection of servers in the network
- Remote system management (see the sketch after this list)
- Provisioning of operating systems on physical or virtual servers
- Diskful (stateful) or diskless (stateless) installation
- Installation and configuration of user software
- Parallel system management
- Cloud integration
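As referenced above, the following is a minimal sketch of driving xCAT from Python by wrapping its command-line tools. It assumes an xCAT management node where the standard commands nodels, rpower, and xdsh are available; the node group name "compute" is hypothetical.

```python
# Minimal sketch: wrap common xCAT CLI commands from Python on the management node.
import subprocess

def xcat(*args: str) -> str:
    """Run an xCAT CLI command and return its standard output."""
    return subprocess.run(args, capture_output=True, text=True, check=True).stdout

print(xcat("nodels", "compute"))           # list the nodes in the "compute" group
print(xcat("rpower", "compute", "stat"))   # query power state via the nodes' BMCs
print(xcat("xdsh", "compute", "uptime"))   # run a command on all nodes in parallel
```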
Supported operating systems
- RHEL
- SLES
- Ubuntu
- Debian
- CentOS
- Fedora
- Scientific Linux
- Oracle Linux
- Windows
- ESXi
- And many more
Supported hardware architectures
- IBM Power
- IBM Power LE
- x86_64
Supported virtualisation infrastructure
- IBM PowerKVM
- IBM z/VM
- ESXi
- Xen
Performance values for potential processors
The performance values used are those published by SPEC.org. Only the SPECrate 2017 Integer and SPECrate 2017 Floating Point results are compared:
| Manufacturer | Model | Processor | Clock rate | # CPUs | # Cores | # Threads | Base Integer | Peak Integer | Base Floating Point | Peak Floating Point |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Gigabyte | R181-791 | AMD EPYC 7601 | 2.2 GHz | 2 | 64 | 128 | 281 | 309 | 265 | 275 |
| Supermicro | 6029U-TR4 | Xeon Silver 4110 | 2.1 GHz | 2 | 16 | 32 | 74.1 | 78.8 | 87.2 | 84.8 |
| Supermicro | 6029U-TR4 | Xeon Gold 5120 | 2.2 GHz | 2 | 28 | 56 | 146 | 137 | 143 | 140 |
| Supermicro | 6029U-TR4 | Xeon Gold 6140 | 2.3 GHz | 2 | 36 | 72 | 203 | 192 | 186 | 183 |
NVIDIA Tesla V100 SXM2: 7.8 TFLOPS (64-bit), 15.7 TFLOPS (32-bit), 125 TFLOPS for tensor operations.
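As a rough back-of-the-envelope illustration based on the example configuration above (6 GPU computing systems with 4 Tesla V100 SXM2 each), the sketch below aggregates the peak figures; sustained application performance will be considerably lower.

```python
# Back-of-the-envelope aggregate for the example cluster above:
# 6 GPU systems with 4 Tesla V100 SXM2 each. Peak figures only.
systems, gpus_per_system = 6, 4
fp64_tflops, fp32_tflops, tensor_tflops = 7.8, 15.7, 125.0

gpus = systems * gpus_per_system
print(f"{gpus} GPUs in total")
print(f"64-bit peak:  {gpus * fp64_tflops:8.1f} TFLOPS")
print(f"32-bit peak:  {gpus * fp32_tflops:8.1f} TFLOPS")
print(f"Tensor peak:  {gpus * tensor_tflops:8.1f} TFLOPS")
```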
HPC Clusters and more from HAPPYWARE - Your partner for powerful cluster solutions
We are your specialist for individually configured, high-performance clusters, whether GPU clusters, HPC clusters, or other setups. We would be happy to build a system that meets your company's needs at a competitive price.
For scientific and educational organisations, we offer special discounts for research and teaching. Please contact us to learn more about these offers.
If you would like to know more about the equipment of our HPC clusters or you need an individually designed cluster solution, please contact our cluster specialist Jürgen Kabelitz. He will be happy to help.