
High-Performance Networks

If a company or institution uses high-performance computers, this poses a challenge for the local networks. This is because the high-performance network is crucial for communication between high-performance computer applications and for data transport. In order for the networks to cope with the data throughput of the computers, they need to meet special requirements. HAPPYWARE can provide these networks for you.

A powerful infrastructure for computers and clusters: with our high-performance networks, the full bandwidth of your network is always available to you for all HPC work processes.

One of the important factors for communication between high-performance computers is low latency combined with fast transmission of messages ranging from 2 bytes to 1 MB. Since the programs in an HPC cluster send and receive messages across several high-performance computers simultaneously, the network architecture needs to be tailored to this workload.


Here you will find our high-performance network products:

  • 1U Rackmount Server, 64-Cores up to 225W TDP
  • Dual Socket SP3, AMD EPYC 7003 series processor
  • 32x DIMM slots, up to 8TB RAM DDR4-3200MHz
  • 10x 2.5" Gen4 U.2 hot-swap SSD bays
  • 2x PCI-E 4.0 x16 expansion slots and 2x OCP
  • 2x 1GbE LAN ports via Intel I350-AM2
  • 2x 1200W redundant power supplies (Platinum Level)
From €3,349.00 *
Gigabyte R282-G30 | Dual Intel Xeon 2U Rack Server

Special highlight: up to 3x double-slot GPU cards

  • 2U Rackmount Server, up to 270W CPU TDP
  • Dual Socket P+, Intel Xeon Scalable Processor 3rd Gen
  • 32x DIMM slots, up to 8TB RAM DDR4-3200MHz
  • 12x 3.5" hot-swap SATA/SAS/NVMe drive bays
  • 5x PCI-E Gen4 x16 Expansion slots
  • Supports up to 3x dual-slot GPU cards
  • 2x 2400W redundant power supplies (Platinum Level)
From €3,669.00 *
Gigabyte R282-Z94 | Dual AMD EPYC 2U Rack Server

Special highlight: all-NVMe server for PCI-E 4.0 NVMe

  • 2U Rackmount, 64-Cores up to 225W TDP
  • Dual Socket SP3, AMD EPYC 7003 series processors
  • 32x DIMM slots, up to 8TB RAM DDR4-3200MHz
  • 26x 2.5" hot-swap drive bays
  • 7x PCI-E 4.0 expansion slots, 2x OCP
  • Hardware-level root of trust support
  • 2x 1600W redundant power supplies (Platinum Level)
From €4,039.00 *
Supermicro SYS-210SE-31A | Single Xeon 2U IoT Server

Special highlight: all front-access design

  • 2U Rackmount Multi Node Server, up to 205W cTDP
  • Single Socket P+, 3rd Gen Intel Xeon Scalable Processors
  • 8x DIMM slots, up to 2TB RAM DDR4-3200MHz
  • 2x M.2 NVMe
  • 1x RJ45 GbE LAN port
  • 1x LP PCI-E 4.0 x16, 2x PCI-E 4.0 x16 FH/HL slots
  • 2x 2000W redundant power supplies
From €4,689.00 *
Supermicro SYS-210SE-31D | 3-Node Single Xeon 2U IoT Server

Special highlight: all front-access design

  • 2U Rackmount Server, up to 205W cTDP, up to 3 Nodes
  • Single Socket P+, 3rd Gen Intel Xeon Scalable Processors
  • 8x DIMM slots, up to 2TB RAM DDR4-3200MHz
  • 2x M.2 NVMe
  • 1x RJ45 GbE LAN port
  • 1x LP PCI-E 4.0 x16, 2x PCI-E 4.0 x16 FH/HL slots
  • 2x 2000W redundant power supplies with PMBus
From €4,879.00 *
Gigabyte R152-P30 | ARM Ampere Altra 1U Server with M128-30

Special highlight: Ampere Altra M128-30 included

  • 1U Rackmount Server, up to 128 cores
  • Single Ampere Altra Max CPU, 128 Arm v8.2+ 64-bit cores at 3.0 GHz
  • 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
  • 10x 2.5" hot-swap drive bays
  • 1x PCI-E 4.0 x16 FHHL slot, 1x OCP 2.0
  • 2x M.2 slots with PCI-E 4.0 x4 interface
  • 2x 650W redundant Power Supplies (Platinum Level)
From €5,739.00 *
  • 2U Rackmount Server, 4-way up to 250W TDP
  • Quad Socket P+, 3rd Gen Intel Xeon Scalable Processors
  • 48x DIMM slots, up to 6TB RAM DDR4-3200MHz
  • 8x PCI-E 3.0 x16 Expansion slots and 1x OCP 3.0
  • 2x 10GbE LAN ports via Intel X710-AT2
  • 2x 3200W redundant power supplies (Platinum Level)
From €5,949.00 *
Gigabyte R292-4S0 | 4-Socket Intel Xeon 2U HPC/GPU Server

Special highlight: 4-way quad Xeon 4-GPU server

  • 2U Rackmount Server, 4-way up to 165W TDP
  • Quad Socket P+, 3rd Gen Intel Xeon Scalable Processors
  • 48x DIMM slots, up to 6TB RAM DDR4-3200MHz
  • 10x 2.5" NVMe/SAS/SATA hot-swap drive bays
  • 6x PCI-E 3.0 x16 expansion slots and 1x OCP 3.0 x16 slot
  • Supports up to 4x double slot GPU cards
  • 2x 3200W redundant power supplies (Platinum Level)
From €5,949.00 *
  • 8U Rack SuperBlade Enclosure
  • Up to 20 blade servers
  • Up to 2x 10GbE switches
  • 1 Management module
  • 4x 2200W Power supplies (Titanium Level)
From €6,349.00 *
  • 8U Rack SuperBlade Enclosure
  • Up to 20 blade servers
  • Up to 2x 10GbE switches
  • 1 Management module
  • 8x 2200W Power supplies (Titanium Level)
From €7,769.00 *
Gigabyte G242-P33 | 2U Ampere Altra ARM HPC Server with M128-30

Special highlight: 1x Ampere Altra M128-30 included

  • 2U Rackmount Server, up to 128 Cores
  • Single Ampere Altra Max CPU, 128 Arm v8.2+ 64-bit cores at 3.0 GHz
  • 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
  • 4x 3.5"/2.5" hot-swap drive bays
  • 4x PCI-E 4.0 x16 slots, 3x PCI-E Gen4 LP
  • 2x ultra-fast M.2 slots
  • 2x 1600W redundant Power Supplies (Platinum Level)
From €8,124.00 *
  • 2U Rack Server, up to 205W TDP
  • Dual Socket P+, 3rd Gen Intel® Xeon® Scalable
  • 20x DIMM slots (16 DRAM + 4 PMem) up to 4TB RAM DDR4-3200MHz
  • 6x 2.5" NVMe/SATA drive bays
  • 2x PCI-E 4.0 x16 (LP) slot
  • 2x USB 3.0, 1x VGA and 1x RJ45 BMC LAN port
  • 2x 2600W redundant Power supplies (Titanium Level)
From €8,129.00 *
Supermicro SSG-640SP-E1CR90 | Dual Xeon 4U Storage Server

Special highlight: dual-node storage server with up to 90x 3.5" drive bays

  • 4U Rackmount Server, up to 205W TDP
  • Dual Socket P+, 3rd Gen Intel Xeon Scalable Processors
  • 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
  • 90x 3.5" hot-swap SATA3/SAS3 drive bays + 2x 2.5" slim bays
  • 3x PCI-E 4.0 x16 slots
  • 6x 8cm hot-swap counter-rotate redundant PWM cooling fans
  • 2x 2600W redundant Power supplies (Titanium Level)
From €10,119.00 *
Gigabyte R182-P91 | ARM Ampere Altra 1U NVMe Server with M128-30

Special highlight: 2x Ampere Altra M128-30 included

  • 1U Rackmount Server, up to 128 cores
  • Dual Ampere Altra Max CPU, 128 Arm v8.2+ 64-bit cores at 3.0 GHz
  • 32x DIMM slots, up to 4TB RAM DDR4-3200MHz
  • 12x 2.5" NVMe/SATA hot-swap drive bays
  • 2x PCI-E 4.0 x16 slots, 2x OCP 3.0 Gen4 x16
  • 1x Ultra-Fast M.2 with PCI-E 4.0 x4 interface
  • 2x 1300W redundant power supplies (Platinum Level)
From €13,430.00 *
Do you need help? Simply call us or use our inquiry form.
Your contact person: Alexander Hauschild, Sales

High-Performance Networks - Key Issues When Setting Up HPNs

Deciding which kind of high-performance network is suitable for your infrastructure can usually be easily narrowed down with the following questions:

  • How many different high-performance networks are required?
  • How many computer systems should be connected to the respective network?
  • How much data per second is transported in the respective network? (A worked sizing sketch follows this list.)
  • How many simultaneous connections are required by a system?
  • What bandwidth must the high-performance network have?
  • For which application do you need the respective network?
  • Which implementations are available for which operating systems?
  • Does the company have the knowledge to operate a high-performance network?
  • What budget is available for the project?
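
To make the sizing question concrete, the following sketch estimates the aggregate bandwidth a network must sustain. The node count, message rate, and message size are hypothetical values chosen purely for illustration.

# Rough sizing sketch: aggregate traffic from per-node message rate and size.
# All input values below are hypothetical and only illustrate the calculation.

def required_bandwidth_gbps(nodes: int, msgs_per_sec: float, msg_bytes: int) -> float:
    """Aggregate bandwidth in Gb/s that the fabric must sustain."""
    return nodes * msgs_per_sec * msg_bytes * 8 / 1e9

# Example: 64 nodes, each exchanging 500,000 messages of 4 KB per second.
demand = required_bandwidth_gbps(nodes=64, msgs_per_sec=500_000, msg_bytes=4096)
print(f"Aggregate demand: {demand:.0f} Gb/s")   # ~1049 Gb/s across the fabric

Dividing such an aggregate figure by the per-link speed of a candidate technology gives a first estimate of how many links and switch ports the network needs.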

High-performance network architectures and their performance features


When considering high-performance networking, there are two different architectures, which are explained in more detail below:

TCP/IP networks based on Ethernet

A TCP/IP-based Ethernet is a packet-oriented network. Each data packet carries up to three IP addresses: the sender's IP address, the receiver's IP address, and the IP address of the router.

  • How a high-performance network works with TCP/IP
    Every data packet sent via Ethernet carries several checksums. Since a message can be longer than the maximum size of a TCP/IP data packet, each packet also carries a sequence number that defines its position within the message. This makes it possible to request exactly the missing packet from the sender again (sketched in code after this list).

    The data packet is then handed to the transmission medium, where another packet may already be present; in that case both packets collide, are corrupted, and must be sent again. Because of the many routes through the network, it is impossible to say exactly when and where this will happen. Owing to this and to the checksum processing, the latency of such a network is usually in the millisecond range.
  • Technical details about TCP/IP networks
    As the TCP/IP protocol stack is very CPU-intensive, RDMA services have been integrated directly into the host bus adapter. These services are natively supported by host bus adapters that implement the iWARP/RDMA protocol or the RoCEv2 specification (IBTA). This results in higher performance, better bandwidth, and lower latency.

    TCP/IP-based network adapters are supported by all major operating systems. TCP/IP networks support the following bandwidths: 1GbE / 10GbE / 25GbE / 40GbE / 50GbE / 56GbE / 100GbE / 200GbE; note that not every vendor offers all of them. Virtually all applications and services that use the network today are based on TCP/IP.
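
The segmentation-and-reordering idea described in the first bullet can be sketched in a few lines of Python. This is a toy illustration of sequence-numbered framing, not a real TCP implementation (TCP does all of this in the kernel); the packet size and helper names are our own.

# Toy sketch: split a long message into numbered packets and reassemble it
# in order on the receiving side, as TCP does transparently in the kernel.
import struct

MAX_PAYLOAD = 1460  # typical TCP payload of one Ethernet frame

def segment(message: bytes):
    """Split a message into packets, each prefixed with a 4-byte sequence number."""
    return [
        struct.pack("!I", seq) + message[off:off + MAX_PAYLOAD]
        for seq, off in enumerate(range(0, len(message), MAX_PAYLOAD))
    ]

def reassemble(packets):
    """Sort packets by sequence number and rebuild the original message."""
    parsed = sorted((struct.unpack("!I", p[:4])[0], p[4:]) for p in packets)
    return b"".join(chunk for _, chunk in parsed)

message = b"x" * 10_000            # longer than one packet
packets = segment(message)
packets.reverse()                  # simulate out-of-order arrival
assert reassemble(packets) == message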

High-Performance Network with Infiniband Architecture

Infiniband was developed with the aim of providing a hardware interface that allows serial data transport with low latency.

  • How a High-Performance Network works with Infiniband
    In contrast to TCP/IP, which processes the protocol stack on the CPU, Infiniband offloads it to the network hardware. This means that several connections to different systems can be established simultaneously via the high-performance network. Low latency is achieved by the fact that the network cards, referred to as host bus adapters, can address the memory of the receiving system directly. This technology is called RDMA Write or RDMA Read. Building on it, applications can exchange messages using Message Passing Interface (MPI) software; a minimal MPI example follows this list.
  • Technical details about networks with Infiniband
    The current peak performance of this technology is 200 Gb/s at a latency of 600 nanoseconds. This allows up to 200 million messages per second. The HBA provides up to 16 million I/O channels, and the MTU (Maximum Transmission Unit) ranges from 256 bytes to 4 KB. In storage environments, Infiniband is used to implement NVMe over Fabrics (NVMe-oF).
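
As mentioned above, MPI is the usual way applications drive such a fabric. A minimal point-to-point exchange using the mpi4py bindings might look like the sketch below; it assumes an MPI implementation and mpi4py are installed, and over Infiniband the MPI library carries these transfers via RDMA under the hood.

# Minimal MPI point-to-point exchange (run with: mpirun -np 2 python demo.py).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send(b"x" * 1024, dest=1, tag=42)    # send a 1 KB message to rank 1
elif rank == 1:
    data = comm.recv(source=0, tag=42)
    print(f"rank 1 received {len(data)} bytes")

As a plausibility check on the figures above: 200 Gb/s corresponds to 25 GB/s, and 25 GB/s divided by 200 million messages per second leaves 125 bytes per message, so the quoted message rate refers to small messages.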

Current High-Performance Network Options on the Market

One of the best-known suppliers of network hardware is Mellanox. They supply extremely powerful network components to meet the highest demands. Their host bus adapters support speeds from 10 Gb/s (QDR) up to 200 Gb/s (HDR) with a latency of 6 nanoseconds, meaning that several thousand nodes can be realized in a network. For the software, an optimized version of the Open Fabrics Alliance stack is used.

Mellanox also offers its own drivers, protocol software, and tools for the following operating systems:

  • RedHat Version 6.x / Version 7.x / CentOS 6.x / CentOS 7.x
  • SUSE SLES 11 SP3 / SP4, SLES 12 SP1 - SP3, SLES 15
  • Oracle Linux OL 6.x and 7.x
  • Ubuntu 16.x / 17.10 / 18.04
  • Fedora 20 / 27
  • Debian 8.x / 9.x
  • EulerOS 2.0 SP3
  • WindRiver 6.0
  • XenServer 6.5
  • Microsoft Windows Server 2016 / Windows Server 2016 version 1803 / Windows Server 2012 R2 / Windows Server 2012 / Windows 10 Client 1803 / Windows 8.1
  • VMware ESX/ESXI 5.x
  • KVM / XEN

The supported CPU architectures range from x86_64 and PowerPC to ARM-based CPUs.

 

Omni-Path Fabric by Intel: High-Performance Networks at the Cutting Edge of Technology

In very large installations with tens of thousands of nodes or more, even Infiniband reaches its limits: processor performance and memory bandwidth scale faster than the I/O. This is where Intel comes in with its Omni-Path fabric, developed from QLogic's Infiniband implementation and Cray's interconnect expertise.

Intel's design integrates the Omni-Path controller chipset into the processor. This saves a PCI slot, which in turn means several watts less power consumption.

Additionally, the quality of the transmission can be further improved by various optimizations of the protocol, such as:

  • Packet Integrity Protection
  • Dynamic Lane Scaling
  • Traffic Flow Optimization

The software Intel provides for its Omni-Path Fabric is available for RedHat, SUSE Enterprise Server, CentOS, and Scientific Linux. Intel's software is open source and based on the Open Fabrics Alliance stack.

Equipping High-Performance Networks Properly: Switches and Cables

When designing an Infiniband or Omni-Path network, you have to make sure that each port of a system can directly reach any port of another system. A suitable switch topology prevents multiple messages from occupying a single line within the high-performance network, which could briefly block some connections.

For this purpose, Infiniband and Omni-Path fabrics are specified with a non-blocking factor. The store-and-forward principle used in TCP/IP-based networks would cause a noticeable drop in performance with Infiniband and Omni-Path, which is why their switches forward packets cut-through instead.
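
As a small illustration of the non-blocking factor, the sketch below computes the oversubscription ratio of a switch from its port counts; a ratio of 1:1 means fully non-blocking. The port counts and speeds are hypothetical.

# Blocking factor of a switch: node-facing capacity versus uplink capacity.
# 1.0 means fully non-blocking; 2.0 means 2:1 oversubscribed.

def blocking_factor(down_ports: int, down_gbps: float, up_ports: int, up_gbps: float) -> float:
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# Hypothetical leaf switch: 24x 100 Gb/s node ports, 12x 200 Gb/s uplinks.
ratio = blocking_factor(down_ports=24, down_gbps=100, up_ports=12, up_gbps=200)
print(f"Blocking factor: {ratio:.1f}:1")   # 1.0:1 -> non-blocking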

Since the transmission speeds in these networks are very high, cables with RJ45 connectors are not suitable. The cables used for these fast connections are based on the QSFPx form factor, where the x stands for the connector variant required for a given speed.

Due to these aspects, it is important to consider later extensions and scalability options when setting up a high-performance network.

Type              Speed     Form Factor  Lengths in Meters
Copper Ethernet   400 Gb/s  QSFP-DD      0.5; 1; 1.5; 2; 2.5; 3
Copper Ethernet   200 Gb/s  QSFP56       0.5; 1; 1.5; 2; 2.5; 3
Copper Ethernet   100 Gb/s  QSFP28       0.5; 1; 1.5; 2; 2.5; 3
Copper Ethernet   40 Gb/s   QSFP+        0.5; 1; 1.5; 2; 2.5; 3; 4; 5; 6; 7
Copper Ethernet   25 Gb/s   SFP28        0.5; 1; 1.5; 2; 2.5; 3; 4; 5
Copper Ethernet   10 Gb/s   SFP+         0.5; 1; 1.5; 2; 2.5; 3; 4; 5; 6; 7

 

Examples of copper-based Ethernet connections

Cables up to a length of 7 meters can still be made of copper; for lengths of more than 7 meters, you will have to use cables with optical fibres.

Infiniband Type   Speed     Form Factor  Lengths in Meters
HDR               200 Gb/s  QSFP56       0.5; 1; 1.5; 2; 2.5; 3
EDR               100 Gb/s  QSFP28       0.5; 1; 1.5; 2; 2.5; 3; 4; 5
FDR               56 Gb/s   QSFP+        0.5; 1; 1.5; 2; 2.5; 3; 4; 5
FDR10; QDR        40 Gb/s   QSFP+        0.5; 1; 1.5; 2; 2.5; 3; 4; 5; 6; 7

Examples of copper-based Infiniband connections
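
To tie the two tables together, here is a small helper that chooses between a copper and an optical cable from the link standard, speed, and length. The length limits are transcribed from the tables above; the function itself is only an illustration.

# Choose copper DAC vs. optical fibre from the copper length limits in the
# tables above (longest listed copper length per standard and speed, in meters).

MAX_COPPER_M = {
    ("Ethernet", 400): 3, ("Ethernet", 200): 3, ("Ethernet", 100): 3,
    ("Ethernet", 40): 7, ("Ethernet", 25): 5, ("Ethernet", 10): 7,
    ("Infiniband", 200): 3, ("Infiniband", 100): 5,
    ("Infiniband", 56): 5, ("Infiniband", 40): 7,
}

def cable_choice(standard: str, speed_gb: int, length_m: float) -> str:
    return "copper DAC" if length_m <= MAX_COPPER_M[(standard, speed_gb)] else "optical fibre"

print(cable_choice("Ethernet", 100, 2))      # copper DAC
print(cable_choice("Infiniband", 200, 10))   # optical fibre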

Implement High-Performance Network Projects with HAPPYWARE


Since 1999, HAPPYWARE has been a reliable and competent partner for implementing the most diverse IT projects. From simple barebone systems to complete server setups and virtualization services, you can count on our company's expertise.

We are happy to provide individual advice. Our team of IT specialists can analyze your company's IT requirements and offer expert assistance during the planning phase. We tailor our services exactly to your needs, ensuring that you receive tailor-made performance at the best conditions.

On request, we can also provide turnkey systems as well as any components for the high-performance network of your choice.

If you have additional questions or are interested in our services for high-performance networks, please contact our HPC expert Mr. Jürgen Kabelitz; he will be happy to answer any questions you may have.