High-Performance Networks

If a company or institution uses high-performance computers, this poses a challenge for the local networks. This is because the network is crucial for communication between high-performance computer applications and for data transport. In order for the networks to cope with the data throughput of the computers, they need to meet special requirements. HAPPYWARE can provide these networks for you.

A powerful infrastructure for computers and clusters — with our high-performance networks, the full bandwidth of your network is always available to you for all HPC work processes.

One of the important factors for communication between high-performance computers is low latency combined with the fast transmission of messages ranging from 2 bytes to 1 MB. Since the programmes in an HPC cluster send and receive messages via several high-performance computers at the same time, it becomes clear that the network architecture needs to be tailored to this problem.

Here you'll find our High-Performance Networks

SYS-1029U-E1CR25M | Supermicro Dual Intel Xeon 1U Rack Server


Up to 6TB RAM

25GbE SFP28 on Board

  • 1U Rack Server, 205W TDP
  • Dual Intel Xeon Scalable CPU, 2nd Gen.
  • Up to 6TB RAM, DDR4-2933MHz ECC
  • 10x Hot-swap 2.5" drive bays
  • 2x SFP28 25GbE ports
  • 750W Redundant power supplies (Platinum Level)
From €1,670.00 *
Details
  • 3U Rack MicroBlade Enclosure
  • Up to 14 Hot-swap server blades
  • Up to 2 Hot-swap 10G Ethernet switches
  • 1 Hot-swap management module optional
  • 4x Huge cooling fans
  • 2000W Redundant power supplies
From €1,920.00 *
Details
SYS-2029U-TN24R4T | Supermicro Dual Xeon 2U Rack Storage Server


All-NVMe Storage Server

24x NVMe U.2 in 2U

  • 2U Rack Server, 205W TDP
  • Dual Intel Xeon Scalable Processors, 2nd Gen.
  • Up to 6TB RAM, DDR4-2933MHz ECC
  • 24x Hot-swap 2.5" NVMe drive bays
  • 4x 10GbE LAN ports
  • 1600W Redundant power supplies (Titanium Level)
From €3,540.00 *
Details
AS-2123US-TN24R25M | Supermicro 2U Rack Dual AMD EPYC Server


24x NVMe U.2, Dual 25G Ethernet

  • 2U Rack Server, 225W TDP
  • Dual AMD EPYC CPU, 7002 series
  • Up to 8TB RAM, DDR4-3200MHz ECC
  • 2x SFP28 25GbE LAN ports
  • 24x Hot-swap 2.5" U.2 NVMe drive bays
  • 1600W Redundant power supplies (Titanium Level)
From €3,140.00 *
Details
  • 2U Rack Server
  • 4x nodes
  • Dual ARM ThunderX CPU
  • Up to 512GB RAM, DDR4-2133MHz ECC
  • 2x QSFP+ 40GbE LAN ports
  • 4x Hot-swap 2.5" drive bays
  • 1600W Redundant power supplies (Platinum Level)
From €9,660.00 *
Details
  • 1U Rack Server
  • ARM ThunderX CPU
  • Up to 256GB RAM, DDR4-2133MHz
  • 1x QSFP+ 40GbE port
  • 4x SFP+ 10GbE ports
  • 4x Hot-swap 3.5" drive bays
  • 400W Power supply (Gold Level)
From €1,940.00 *
Details
  • 8U Rack SuperBlade Enclosure
  • Up to 20 blade servers
  • Up to 2x 10GbE switches
  • 1 Management module
  • 4x 2200W Power supplies (Titanium Level)
From €4,510.00 *
Details
  • 8U Rack SuperBlade Enclosure
  • Up to 20 blade servers
  • Up to 2x 10GbE switches
  • 1 Management module
  • 8x 2200W Power supplies (Titanium Level)
From €5,470.00 *
Details

Do you need help?

Simply call us or use our inquiry form.

High-Performance Networks - Key Issues When Setting Up HPNs

Deciding which kind of high-performance network is suitable for your infrastructure can usually be narrowed down with the following questions:

  • How many different high-performance networks are required?
  • How many computer systems should be connected to the respective network?
  • How much data per second is transported in the respective network? (A back-of-envelope sketch follows this list.)
  • How many simultaneous connections are required by a system?
  • What bandwidth must the high-performance network have?
  • For which application do you need the respective network?
  • Which implementations are available for which operating systems?
  • Does the company have the knowledge to operate a high-performance network?
  • What size of budget is available for the project?
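
As a rough illustration of the bandwidth question above, a back-of-envelope calculation can be sketched in a few lines of Python. All input values below (node count, message rate, message size) are made-up example figures, not measurements:

    # Back-of-envelope sketch for the aggregate bandwidth of an HPC fabric.
    # All input values are assumed example figures.
    nodes = 64                      # systems attached to the network
    msgs_per_sec_per_node = 50_000  # messages each node sends per second
    avg_msg_bytes = 64 * 1024       # 64 kB average message size

    bytes_per_sec = nodes * msgs_per_sec_per_node * avg_msg_bytes
    gbits_per_sec = bytes_per_sec * 8 / 1e9

    print(f"aggregate traffic: {gbits_per_sec:.0f} Gb/s")
    print(f"per node: {gbits_per_sec / nodes:.1f} Gb/s")

With these example figures the fabric has to move roughly 1,678 Gb/s in total, about 26 Gb/s per node, so each node would already need more than a single 25GbE link.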

High-performance network architectures and their performance features


When considering High-Performance Networking, there are two different architectures that are explained in more detail below:

TCP/IP networks based on Ethernet

A TCP/IP-based Ethernet is a packet-oriented network. Each data packet carries up to three IP addresses: the sender's IP address, the receiver's IP address, and the IP address of the router.

  • How a high-performance network works with TCP/IP
    Every data packet sent via Ethernet carries several checksums. Since a message can be longer than the maximum size of a TCP/IP data packet, each packet also receives a sequence number that defines its position within the message. This makes it possible to request exactly that data packet from the sender again.

    The data packet is then transferred to the transmission medium, where another packet may already be present; in that case both packets collide and are corrupted. Due to the many possible routes through the network, it is impossible to say exactly when and where this will happen. Owing to this and to the checksum handling, the latency of such a network is usually in the millisecond range (a minimal measurement sketch follows this list).
  • Technical details about TCP/IP networks
    As the TCP/IP protocol is very CPU-intensive, RDMA services have been integrated directly into the host bus adapter. These services are natively supported by host bus adapters that use the iWARP/RDMA protocol or implement the RoCEv2 specification (IBTA). This results in higher performance, better bandwidth, and lower latency.

    TCP/IP-based network adaptors are supported by all major operating systems. TCP/IP networks support the following bandwidths: 1GbE / 10GbE / 25GbE / 40GbE / 50GbE / 56GbE / 100GbE / 200GbE, although not every vendor offers every speed. Virtually all applications and services that use the network today are based on TCP/IP.
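
The round-trip times of plain TCP/IP mentioned above are easy to observe. The following minimal ping-pong sketch uses only Python's standard socket module to measure the mean round-trip time of small messages; the loopback address and port are arbitrary assumptions, and a real network between two hosts will show noticeably higher values:

    # Minimal TCP ping-pong latency sketch (illustrative only).
    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 50007  # assumed values for a local test
    ROUNDS = 1000
    MSG = b"x" * 64                  # a small 64-byte message

    def echo_server():
        # Echo every message straight back to the client.
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                for _ in range(ROUNDS):
                    conn.sendall(conn.recv(len(MSG)))

    threading.Thread(target=echo_server, daemon=True).start()
    time.sleep(0.2)  # give the server a moment to start listening

    with socket.create_connection((HOST, PORT)) as cli:
        cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no Nagle delay
        start = time.perf_counter()
        for _ in range(ROUNDS):
            cli.sendall(MSG)
            cli.recv(len(MSG))  # 64 bytes arrive in one piece on loopback
        elapsed = time.perf_counter() - start

    print(f"mean round-trip time: {elapsed / ROUNDS * 1e6:.1f} microseconds")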

High-Performance Network with Infiniband Architecture

Infiniband was developed with the aim of providing a hardware interface that combines serial data transport with low latency.

  • How a High-Performance Network works with Infiniband
    In contrast to TCP/IP, which handles the protocol stack with the CPU, Infiniband outsources this to the network hardware. This means that several connections to different systems can be established simultaneously via the high-performance network. Low latency is achieved by the fact that the network cards, referred to as host bus adaptors, can address the memory of the receiving system directly. This technology is called RDMA Write or RDMA Read. Based on this technology, applications can exchange messages using Message Passing Interface software (MPI software), as shown in the sketch after this list.
  • Technical details about networks with Infiniband
    The current peak performance for this technology is 200Gb/s with a latency of 600 nanoseconds, which allows up to 200 million messages per second. The HBA provides up to 16 million I/O channels, and the MTU (Maximum Transmission Unit) ranges from 256 bytes to 4 kByte. In a storage environment, Infiniband is used to implement NVMe over Fabrics (NVMe-oF).
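
In practice, applications reach these RDMA capabilities through MPI rather than programming the host bus adapter directly. The sketch below uses the mpi4py bindings (an assumption; any MPI implementation works the same way) to bounce a 1 kB message between two ranks and report the mean round-trip time; run it with something like mpirun -np 2 python pingpong.py (the script name is our own):

    # Minimal MPI ping-pong sketch using mpi4py (illustrative only).
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    ROUNDS = 1000
    msg = bytearray(1024)  # 1 kB message, within the 256 byte - 4 kByte MTU range

    comm.Barrier()         # start both ranks together
    start = MPI.Wtime()
    for _ in range(ROUNDS):
        if rank == 0:
            comm.Send(msg, dest=1, tag=0)    # capitalised Send/Recv work on
            comm.Recv(msg, source=1, tag=0)  # raw buffers, avoiding pickling
        elif rank == 1:
            comm.Recv(msg, source=0, tag=0)
            comm.Send(msg, dest=0, tag=0)
    elapsed = MPI.Wtime() - start

    if rank == 0:
        print(f"mean round-trip time: {elapsed / ROUNDS * 1e6:.2f} microseconds")

On an Infiniband cluster with RDMA-capable adapters, such a ping-pong typically lands in the low microsecond range, compared to the milliseconds of plain TCP/IP.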

Current High-Performance Network Options on the Market

One of the best-known suppliers of network hardware is Mellanox. They can supply extremely powerful network components to meet the highest demands. Their host bus adaptors support speeds from 10 Gb/s up to 200 Gb/s (HDR) with a latency of 6 nanoseconds, meaning that networks of several thousand nodes can be realised. For the software, an optimised version of the OpenFabrics Alliance software is used.

Mellanox also offers its own drivers, protocol software, and tools for the following operating systems:

  • RedHat Version 6.x / Version 7.x / CentOS 6.x / CentOS 7.x
  • SUSE SLES 11 SP3 / SP4, SLES 12 SP1 - SP3, SLES 15
  • Oracle Linux OL 6.x and 7.x
  • Ubuntu 16.x / 17.10 / 18.04
  • Fedora 20 / 27
  • Debian 8.x / 9.x
  • EulerOS 2.0 SP3
  • WindRiver 6.0
  • XenServer 6.5
  • Microsoft Windows Server 2016 / Windows Server 2016 version 1803 / Windows Server 2012 R2 / Windows Server 2012 / Windows 10 Client 1803 / Windows 8.1
  • VMware ESX/ESXi 5.x
  • KVM / XEN

The supported CPU architecture ranges from x86_64 to PowerPC to ARM-based CPUs.

 

Omni-Path Fabric by Intel: High-Performance Networks at the Cutting Edge of Technology

In very large installations with ten thousand nodes or more, even Infiniband reaches its limits. The performance of the processors and the bandwidth of the memory scale faster than the I/O. This is where Intel comes in with its Omni-Path Fabric, developed from QLogic's Infiniband implementation and Cray's expertise.

Intel's design can integrate the Omni-Path controller chipset into the processor. This saves a PCIe slot, which in turn means several watts less power consumption.

Additionally, the quality of the transmission can be further improved by various protocol optimisations, such as:

  • Packet Integrity Protection
  • Dynamic Lane Scaling
  • Traffic Flow Optimization

The software Intel provides for its Omni-Path Fabric is available for RedHat, SUSE Enterprise Server, CentOS, and Scientific Linux. Intel's software is open source and based on the OpenFabrics Alliance stack.

Equipping High-Performance Networks Properly: Switches and Cables

When designing an Infiniband or Omni-Path network, you have to make sure that each port of a system can directly reach a port of any other system. A switch can prevent multiple messages from occupying a single line within the high-performance network, which could briefly block some connections.

For this purpose, Infiniband and Omni-Path switches are specified with a non-blocking factor. The store-and-forward principle used in TCP/IP-based networks would lead to a noticeable drop in performance with Infiniband and Omni-Path.
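
The non-blocking factor can be reasoned about with simple port arithmetic. The sketch below assumes a typical fat-tree leaf switch whose ports are split between downlinks to compute nodes and uplinks into the spine; all values are made-up examples, not a recommendation for a specific fabric:

    # Sketch: blocking factor of a leaf switch in a fat-tree fabric.
    downlinks = 24  # ports connected to compute nodes (assumed)
    uplinks = 12    # ports connected to spine switches (assumed)

    factor = downlinks / uplinks
    print(f"oversubscription: {downlinks}:{uplinks} = {factor:.0f}:1")
    # 18 down / 18 up would give 1:1, i.e. fully non-blocking;
    # 24 down / 12 up oversubscribes the uplinks 2:1, so some
    # connections may be blocked briefly under full load.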

Since the transmission speeds on the network are very high, RJ45 cables are not suitable here. The cables used for these fast connections are based on the QSFPx form factors, where the x denotes the connector variant that must be used depending on speed (see the tables below).

Due to these aspects, it is important to consider later extensions and scalability options when setting up a high-performance network.

Type              Speed     Form Factor   Lengths in meters
Copper Ethernet   400Gb/s   QSFP-DD       0.5; 1; 1.5; 2; 2.5; 3
Copper Ethernet   200Gb/s   QSFP56        0.5; 1; 1.5; 2; 2.5; 3
Copper Ethernet   100Gb/s   QSFP28        0.5; 1; 1.5; 2; 2.5; 3
Copper Ethernet   40Gb/s    QSFP+         0.5; 1; 1.5; 2; 2.5; 3; 4; 5; 6; 7
Copper Ethernet   25Gb/s    SFP28         0.5; 1; 1.5; 2; 2.5; 3; 4; 5
Copper Ethernet   10Gb/s    SFP+          0.5; 1; 1.5; 2; 2.5; 3; 4; 5; 6; 7

 

Examples of copper-based Ethernet connections

Cables up to a length of 7 meters can still be made of copper; for lengths of more than 7 meters, you will have to use cables with optical fibres.

Infiniband   Speed     Form Factor   Lengths in meters
HDR          200Gb/s   QSFP56        0.5; 1; 1.5; 2; 2.5; 3
EDR          100Gb/s   QSFP28        0.5; 1; 1.5; 2; 2.5; 3; 4; 5
FDR          56Gb/s    QSFP+         0.5; 1; 1.5; 2; 2.5; 3; 4; 5
FDR10; QDR   40Gb/s    QSFP+         0.5; 1; 1.5; 2; 2.5; 3; 4; 5; 6; 7

Examples of copper-based Infiniband connections
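
Encoded as data, the two tables above can drive a simple planning check. The following sketch (the function name and structure are our own; the length limits come straight from the tables and the 7-metre copper rule) returns the cable medium for a planned link:

    # Sketch: choose copper or optical fibre from the tables above.
    # Keys are (fabric, speed in Gb/s); values are maximum copper lengths in metres.
    MAX_COPPER_M = {
        ("ethernet", 400): 3.0,    # QSFP-DD
        ("ethernet", 200): 3.0,    # QSFP56
        ("ethernet", 100): 3.0,    # QSFP28
        ("ethernet", 40): 7.0,     # QSFP+
        ("ethernet", 25): 5.0,     # SFP28
        ("ethernet", 10): 7.0,     # SFP+
        ("infiniband", 200): 3.0,  # HDR, QSFP56
        ("infiniband", 100): 5.0,  # EDR, QSFP28
        ("infiniband", 56): 5.0,   # FDR, QSFP+
        ("infiniband", 40): 7.0,   # FDR10/QDR, QSFP+
    }

    def cable_medium(fabric: str, speed_gbps: int, length_m: float) -> str:
        """Return 'copper' or 'optical fibre' for a planned link."""
        limit = MAX_COPPER_M.get((fabric, speed_gbps))
        if limit is None:
            raise ValueError("link type not covered by the tables above")
        return "copper" if length_m <= limit else "optical fibre"

    print(cable_medium("infiniband", 200, 2.0))  # -> copper
    print(cable_medium("ethernet", 100, 10.0))   # -> optical fibre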

Implement High-Performance Network Projects with HAPPYWARE


Since 1999, HAPPYWARE has been a reliable and competent partner for implementing the most diverse IT projects. From simple barebone systems to complete server setups and virtualisation services, you can count on the expertise of our company.

We are happy to provide individual advice. Our team of IT specialists can analyse your company's IT requirements and offer expert assistance during the planning phase. We tailor our services exactly to your needs, ensuring that you receive tailor-made performance on the best terms.

On request we can also provide turnkey systems as well as any components for the high-performance network of your choice.

If you have additional questions or are interested in our services for high-performance networks, please contact our HPC expert Jürgen on +49 (0)4181 235770; he will be happy to answer any questions you may have.
