special highlight
1x Ampere Altra M128-30 included
- 2U Rackmount Server, up to 128 Cores
- Single Ampere Altra Max CPU, 128 Armv8.2+ 64-bit cores @ 3.0 GHz
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 4x 2.5 Inch NVMe hot-swap drive bays
- 5x PCI-E 4.0 x16 slots, 1x PCI-E Gen4 AIOM
- 1x Ultra-Fast M.2
- 2x 1600W redundant Power supplies (Titanium Level)
special highlight
2x Ampere Altra M128-30 included
- 2U Rackmount Server, up to 128 Cores
- Dual Ampere Altra Max CPU, 128 Armv8.2+ 64-bit cores @ 3.0 GHz
- 32x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 28x 2.5 Inch hot-swap drive bays
- 4x PCI-E 4.0 x16 slots, 2x OCP 3.0 Gen4 x16
- 1x Ultra-Fast M.2 slot
- 2x 1600W Power Supplies (Platinum Level)
- 2U Rackmount Server, up to 225W TDP
- Dual Socket SP3, AMD EPYC 7003 CPU Series
- 32x DIMM slots, up to 8TB RAM DDR4-3200MHz
- 12x 3.5 and 2x 2.5 hot-swap SAS/SATA drive bays
- 8x PCI-E Gen4 x16/x8 and 2x OCP Mezzanine slots
- 2x 1Gb/s LAN ports via Intel® I350-AM2
- 2x 1200W redundant power supplies (Platinum Level)
- 2U Rackmount Server, 350W cTDP
- Single Socket E, 5th/4th Gen Intel Xeon Scalable Processors
- 16x DIMM slots, up to 4TB RAM DDR5-4800MHz
- 12x 3.5 hot-swap drive bays, 2x M.2 slots
- Networking via AIOM & AOC
- 6x PCI-E Gen5 Expansion slots, 1x AIOM OCP 3.0
- 2x 1200W redundant Power Supplies 80+ (Titanium Level)
special highlight
32 DIMMs, up to 8TB
- 2U Rackmount Server, up to 270W cTDP
- 24x 2.5 SATA/SAS hot-swap drive bays, 2x 2.5 rear
- 8x PCI-E Gen4 x16 Expansion slots and 2x OCP
- 2x 1GbE LAN ports via Intel I350-AM2
- 2x 1600W redundant power supplies (Platinum Level)
special highlight
1U 10-Bay Gen4 NVMe
- 1U Rackmount Server, up to 270W cTDP
- Dual Socket P+, 3rd Gen Intel Xeon Scalable Processors
- 32x DIMM slots, up to 8TB RAM DDR4-3200MHz
- 10x 2.5 hot-swap SATA/SAS/NVMe drive bays
- 2x PCI-E 4.0 x16 Expansion slots and 2x OCP Mezzanine slots
- 2x 1Gb/s LAN ports
- 2x 1300W Redundant Power Supplies (Platinum Level)
- 1U Rackmount Server, up to 270W TDP
- Dual Socket P+, 3rd Gen Intel Xeon Scalable Processors
- 32x DIMM slots, up to 8TB RAM DDR4-3200MHz
- 10x 2.5 SATA/SAS/NVMe hot-swap drive bays
- 2x 1GbE LAN ports via Intel I350-AM2
- 2x PCI-E Gen4 x16 Expansion slots and 2x OCP
- 2x 1300W redundant power supplies (Platinum Level)
special highlight
40x 2.5" SATA/SAS hot-swap SSD Slots in 2U
- 2U Rackmount Server, up to 240W cTDP
- Single Socket SP3, AMD EPYC 7003 Series Processors (Milan)
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 40x 2.5 SATA/SAS hot-swap SSD bays, 8x 2.5 Internal
- 2x PCI-E Gen4 x16 expansion slots and 2x OCP
- 2x 1GbE LAN ports via Intel® I350-AM2
- 2x 1200W redundant power supplies (Platinum Level)
- 2U Rackmount Server, up to 240W cTDP
- Single Socket SP3, AMD EPYC 7003 Series Processor
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 14x hot-swap drive bays (12x 3.5 front & 2x 2.5 rear)
- 2x 1GbE LAN ports
- 9x PCI-E expansion slots & 2x OCP Mezzanine
- 2x 1200W redundant Power supplies (Platinum Level)
special highlight
All-NVMe Storage Server
- 2U Rackmount Server, up to 205W TDP
- Dual Socket P, 2nd Gen Intel Xeon Scalable Processors
- 24x DIMM slots, up to 6TB RAM DDR4-2933MHz ECC
- 24x 2.5 Inch hot-swap NVMe drive bays
- 2x PCI-E 3.0 x16 and 1x PCI-E 3.0 x8 slots
- 4x 10GBase-T LAN ports with Intel X550
- 2x 1600W redundant power supplies (Titanium Level)
special highlight
Tiered Storage
- 4U Rackmount Server, up to 280W cTDP
- Single Socket SP3, AMD EPYC 7003 Series Processor
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 50x 3.5/2.5 Inch hot-swap drive bays
- 2x 1GbE LAN ports
- 6x PCI-E Expansion slots and 1x OCP 2.0
- 2x 1600W redundant power supplies (Platinum Level)
- 2U Rackmount Server, up to 225W TDP
- Single Socket SP3, AMD EPYC 7003 Series Processor
- 8x DIMM slots, up to 2TB RAM DDR4-3200MHz
- 12x 3.5 SATA/SAS and 2x 2.5 SATA/SAS drive bays
- 5x PCI-E Gen3 slots and 1x M.2
- 2x 1Gb/s LAN ports via Intel® I210-AT
- 2x 800W redundant Power Supplies (Platinum Level)
special highlight
16 DIMMs, up to 4TB RAM
- 1U Rackmount Server, up to 280W TDP
- Single Socket SP3, AMD EPYC 7003 Series Processors
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 10x 2.5 Inch hot-swap SATA3/SAS3/NVMe drive bays
- 2x PCI-E 4.0 x16 slots and 2x PCI-E 4.0 x16 AIOM
- Networking provided via AIOM
- 860W redundant power supplies (Platinum Level)
special highlight
Up to 4x double slots GPU cards
- High performance tower workstation / E-Business-Server
- Single Socket, AMD EPYC 7003 Series Processor
- 8x DIMM Slots, up to 2TB RAM DDR4-3200MHz
- 2x 1GbE LAN Ports via Intel I210-AT
- 4x PCI-E x16 Gen3 and 1x PCI-E x8 Gen3
- 4x 3.5/2.5 Inch SATA hot-swap drive bays
- 2x 1600W redundant power supplies (Platinum Level)
- 2U Rackmount Server, up to 280W TDP
- Single Socket SP3, AMD EPYC 7003 Series Processors
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 12x 3.5 hot-swap drive bays
- 4x PCI-E 4.0 x16 slots & 2x PCI-E 4.0 x16 AIOM
- Tool-less Drive Trays and Tool-less Brackets
- 920W redundant power supplies (Platinum Level)
- 1U Rack Server, supports 205W TDP
- Dual Socket P, 2nd Gen Intel Xeon Scalable processors
- 24x DIMM slots, up to 6TB RAM DDR4-2933MHz
- 10x 2.5 Inch hot-swap drive bays
- 2x PCI-E 3.0 x16, 1x PCI-E 3.0 x8 & 1x PCI-E 3.0 x16 slots
- 2x 25GbE SFP28 Ethernet ports
- 2x 750W redundant power supplies (Platinum Level)
special highlight
Supports GRAID SupremeRAID SR-1000 NVMe
- 1U Rackmount Server, up to 240W cTDP
- Single Socket SP3, AMD EPYC 7003 Series Processor
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 10x 2.5 hot-swap drive bays
- 2x 1GbE LAN ports
- 3x PCI-E 4.0 slots and 2x OCP (1x Gen3 & 1x Gen2)
- 2x 800W redundant power supplies 80 PLUS (Platinum Level)
- 1U Rackmount Server, 10nm technology, up to 270W TDP
- Dual Socket P+, 3rd Gen Intel Xeon Scalable Processors
- 32x DIMM slots, up to 8TB RAM DDR4-3200MHz
- 4x 3.5 or 2.5 SATA/SAS hot-swap bays
- 2x PCI-E Gen4 x16 Expansion slots and 2x OCP
- 2x 1GbE LAN ports via Intel I350-AM2
- 2x 1300W redundant power supplies (Platinum Level)
- 2U Rackmount Server, up to 350W CPU TDP
- Single Socket E, 5th/4th Gen Intel Xeon Scalable Processors
- 8x DIMM slots, up to 2TB RAM DDR5-5600MHz
- 2x 2.5 Inch hot-swap drive bays
- 4x PCI-E Gen5 Expansion slots
- 2x 10GbE RJ45 LAN ports
- 2x 600W redundant Power Supplies, 90%+ typical efficiency
special highlight
Supports GRAID SupremeRAID NVMe/NVMe-oF RAID Card
- 1U Rackmount Server, up to 270W TDP
- Dual Socket P+, 3rd Gen Intel Xeon Scalable Processors
- 32x DIMM slots, up to 8TB RAM DDR4-3200MHz
- 10x 2.5 Inch hot-swap drive bays
- 2x PCI-E Gen4 x16 Expansion slots and 2x OCP
- Supports Dual ROM technology with Intel SATA RAID 0, 1, 10, 5
- 2x 1300W redundant power supplies (Platinum Level)
special highlight
Supports GRAID SupremeRAID NVMe/NVMe-oF RAID Card
- 1U Rackmount Server, up to 270W TDP
- Dual Socket P+, 3rd Gen Intel Xeon Scalable Processors
- 32x DIMM slots, up to 8TB RAM DDR4-3200MHz
- 4x 3.5 SATA/SAS hot-swap and 4x 2.5 drive bays
- Supports Intel SATA RAID and VROC 0, 1, 10, 5
- 2x PCI-E Gen4 x16 Expansion slots and 2x OCP
- 2x 1300W redundant power supplies (Platinum Level)
special highlight
14 server blades in 3U
- 3U Rack MicroBlade Enclosure
- Up to 14 Hot-swap server blades
- Up to 2 Hot-swap 10G ethernet switches
- 1 Hot-swap management module optional
- 4x Huge cooling fans
- 2000W Redundant power supplies
- 2U Rackmount Server, up to 240W cTDP
- Single Socket SP3, AMD EPYC 7003 Series Processor
- 16x DIMM slots, up to 4TB RAM DDR4-3200MHz
- 14x hot-swap drive bays (12x 3.5 and 2x 2.5)
- 2x 1GbE LAN ports
- 8x PCI-E expansion slots & 2x OCP Mezzanine
- 2x 1600W redundant power supplies 80 PLUS (Platinum Level)
- 2U Rackmount Server, up to 270W cTDP
- Dual Socket P+, 3rd Gen Intel Xeon Scalable Processors
- 32x DIMM slots, up to 8TB RAM DDR4-3200MHz
- 12x 3.5 SATA/SAS/NVMe hot-swap drive bays
- 8x PCI-E 4.0 x16 Expansion slots and 2x OCP
- 2x 1GbE LAN ports via Intel I350-AM2
- 2x 1600W redundant power supplies (Platinum Level)
Do you need help?
Simply call us or use our inquiry form.
High-Performance Networks - Key Issues When Setting Up HPNs
Deciding which kind of high-performance network is suitable for your infrastructure can usually be narrowed down easily with the following questions:
- How many different high-performance networks are required?
- How many computer systems should be connected to the respective network?
- How much data per second is transported in the respective network?
- How many simultaneous connections are required by a system?
- What bandwidth must the high-performance network have?
- For which application do you need the respective network?
- Which implementations are available for which operating systems?
- Does the company have the knowledge to operate a high-performance network?
- What budget is available for the project?
High-performance network architectures and their performance features
When considering High-Performance Networking, there are two different architectures that are explained in more detail below:
TCP/IP networks based on Ethernet
A TCP/IP-based Ethernet is a packet-oriented network. Each data packet carries up to three IP addresses: the sender's IP address, the receiver's IP address, and the IP address of the router.
- How a high-performance network works with TCP/IP
Every data packet sent via Ethernet carries several checksums. Since a message can be longer than the maximum size of a TCP/IP data packet, each packet also carries a sequence number that defines its position within the message. This makes it possible to request exactly that data packet from the sender again and to reassemble the message in the correct order, as the sketch below illustrates.
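To make the mechanism concrete, here is a minimal sketch in Python of segmentation with sequence numbers and checksums. The segment size, the CRC32 checksum, and the function names are illustrative choices, not the actual TCP mechanisms (real TCP uses byte-oriented sequence numbers and a 16-bit checksum):

```python
import zlib

MAX_SEGMENT = 1400  # assumed per-packet payload size in bytes (illustrative)

def segment(message: bytes):
    """Split a message into (sequence number, checksum, payload) tuples."""
    chunks = [message[i:i + MAX_SEGMENT] for i in range(0, len(message), MAX_SEGMENT)]
    return [(seq, zlib.crc32(chunk), chunk) for seq, chunk in enumerate(chunks)]

def reassemble(segments):
    """Verify each checksum, then restore the original order via sequence numbers."""
    ordered = sorted(segments, key=lambda s: s[0])
    for seq, checksum, chunk in ordered:
        if zlib.crc32(chunk) != checksum:
            raise ValueError(f"segment {seq} corrupted - request it again from the sender")
    return b"".join(chunk for _, _, chunk in ordered)

message = bytes(range(256)) * 20
packets = segment(message)
packets.reverse()  # packets may arrive in any order ...
assert reassemble(packets) == message  # ... and are still reassembled correctly
```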
The data packet is then handed over to the transmission medium. Another data packet may already be present there, in which case both packets collide and are corrupted. Because of the many possible routes through the network, it is impossible to say exactly when and where this will happen. Owing to these collisions and the checksum processing, the latency of the network is usually in the millisecond range.
- Technical details about TCP/IP networks
As the TCP/IP protocol is very CPU-intensive, RDMA services have been integrated directly into the host bus adapter. These services are natively supported by host bus adapters that use the iWARP/RDMA protocol or implement the RoCEv2 specification (IBTA). This results in higher performance, better bandwidth, and lower latency.
TCP/IP-based network adapters are supported by all major operating systems. TCP/IP networks support the following bandwidths: 1GbE / 10GbE / 25GbE / 40GbE / 50GbE / 56GbE / 100GbE / 200GbE; note that not every provider offers all of them. Virtually all applications and services that use the network today are based on TCP/IP.
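The latencies discussed above are easy to observe yourself. The following sketch times small TCP round trips against a trivial local echo server; the host, port, message size, and iteration count are assumed test values, and on a real network the result depends on the adapters and switches involved:

```python
import socket
import threading
import time

HOST, PORT, SIZE = "127.0.0.1", 50007, 64  # assumed test values

def echo_server():
    # Accept one connection and echo everything back until the peer closes.
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(SIZE):
                conn.sendall(data)

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

with socket.create_connection((HOST, PORT)) as sock:
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # send immediately
    payload = b"x" * SIZE
    rounds = 1000
    start = time.perf_counter()
    for _ in range(rounds):
        sock.sendall(payload)
        sock.recv(SIZE)  # for 64 bytes on loopback a single recv suffices
    rtt = (time.perf_counter() - start) / rounds
    print(f"average round-trip time: {rtt * 1e6:.1f} microseconds")
```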
High-Performance Network with Infiniband Architecture
Infiniband was developed with the aim of providing a hardware interface that allows serial data transport with low latency.
- How a High-Performance Network works with Infiniband
In contrast to TCP/IP, which handles the protocol stack on the CPU, Infiniband offloads it to the network hardware. This means that several connections to different systems can be established simultaneously via the high-performance network. Low latency is achieved by the fact that the network cards, referred to as host bus adapters, can address the memory of the receiving system directly. This technology is called RDMA Write or RDMA Read. Based on it, applications can exchange messages using Message Passing Interface (MPI) software; a minimal example is sketched after the next paragraph.
- Technical details about networks with Infiniband
The current peak performance of this technology is 200Gb/s with a latency of 600 nanoseconds. This allows up to 200 million messages per second (at 200Gb/s, that corresponds to roughly 125 bytes of payload per message). The HBA provides up to 16 million I/O channels, and the MTU (Maximum Transmission Unit) ranges from 256 bytes to 4 kByte. In a storage environment, Infiniband is used to implement NVMe over Fabrics (NVMe-oF).
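As an illustration of the MPI-based message exchange mentioned above, the following sketch implements the classic ping-pong pattern with the mpi4py package (an assumed dependency, not something prescribed here). Run on an Infiniband cluster with an RDMA-capable MPI implementation, the measured one-way latency approaches the figures quoted above:

```python
from mpi4py import MPI  # assumed dependency; run with: mpirun -np 2 python pingpong.py

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = bytearray(128)   # small message, so the test is latency-bound
rounds = 1000

comm.Barrier()         # start both ranks at the same time
t0 = MPI.Wtime()
for _ in range(rounds):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    else:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
elapsed = MPI.Wtime() - t0

if rank == 0:
    # half the average round-trip time approximates the one-way latency
    print(f"one-way latency: {elapsed / rounds / 2 * 1e6:.2f} microseconds")
```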
Current High-Performance Network Options on the Market
One of the best-known suppliers of network hardware is Mellanox. They supply extremely powerful network components that meet the highest demands. Their host bus adapters support speeds from 10Gb/s up to 200Gb/s (HDR) with latencies around 600 nanoseconds, meaning that networks with several thousand nodes can be realized. On the software side, an optimized version of the OpenFabrics Alliance stack is used.
Mellanox also offers its own drivers, protocol software, and tools for the following operating systems:
- RedHat Version 6.x / Version 7.x / CentOS 6.x / CentOS 7.x
- SUSE SLES 11 SP3 / SP4, SLES 12 SP1 - SP3, SLES 15
- Oracle Linux OL 6.x and 7.x
- Ubuntu 16.x / 17.10 / 18.04
- Fedora 20 / 27
- Debian 8.x / 9.x
- EulerOS 2.0 SP3
- WindRiver 6.0
- XenServer 6.5
- Microsoft Windows Server 2016 / Windows Server 2016 version 1803 / Windows Server 2012 R2 / Windows Server 2012 / Windows 10 Client 1803 / Windows 8.1
- VMware ESX/ESXi 5.x
- KVM / XEN
The supported CPU architecture ranges from x86_64 to PowerPC to ARM-based CPUs.
Omni-Path Fabric by Intel: High-Performance Networks at the Cutting Edge of Technology
In very large installations with tens of thousands of nodes or more, even Infiniband reaches its limits. The performance of the processors and the bandwidth of the memory scale faster than the I/O. This is where Intel comes in with its Omni-Path Fabric, developed from QLogic's Infiniband implementation and Cray's expertise.
Intel's design can integrate the Omni-Path Controller chipset into the processor. This saves one PCI slot, which in turn means several watts less power consumption.
Additionally, the quality of the transmission can be further improved by various optimizations of the protocol, such as:
- Packet Integrity Protection
- Dynamic Lane Scaling
- Traffic Flow Optimization
The software Intel provides for its Omni-Path Fabric is available for RedHat, SUSE Enterprise Server, CentOS, and Scientific Linux. Intel's software is open source and based on the OpenFabrics Alliance stack.
Equipping High-Performance Networks Properly: Switches and Cables
When designing an Infiniband or Omni-Path network, you have to make sure that each port of one system can directly reach any port of another system. A switch can prevent multiple messages from occupying a single line within the high-performance network, which could briefly block some connections.
For this purpose, Infiniband and Omni-Path specify the non-blocking factor: a fabric is fully non-blocking when every switch has as much uplink as downlink capacity, so no combination of connections has to share a line (see the sketch below). The store-and-forward principle used with TCP/IP-based networks leads to a noticeable drop in performance with Infiniband and Omni-Path.
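As a rough illustration of the non-blocking factor, the following sketch computes the oversubscription ratio from the port split of a leaf switch. The port counts are hypothetical examples, not values for a particular product, and all ports are assumed to run at the same speed:

```python
def blocking_factor(downlink_ports: int, uplink_ports: int) -> float:
    """Ratio of host-facing to fabric-facing capacity; 1.0 means non-blocking."""
    return downlink_ports / uplink_ports

# A 36-port leaf switch split 18/18 is fully non-blocking:
print(blocking_factor(18, 18))  # 1.0
# Using 24 ports for hosts and 12 for uplinks oversubscribes the fabric 2:1:
print(blocking_factor(24, 12))  # 2.0
```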
Since the transmission speeds on the network are very high, RJ45 cables are not suitable in this case. The cables used for these fast connections are based on the QSFPx form factor. The x indicates the different connectors that must be used depending on speed.
Due to these aspects, it is important to consider later extensions and scalability options when setting up a high-performance network.
Type | Speed | Form Factor | Length in Meters |
---|---|---|---|
Copper Ethernet | 400Gb/s | QSFP-DD | 0.5; 1; 1.5; 2; 2.5; 3 |
Copper Ethernet | 200Gb/s | QSFP56 | 0.5; 1; 1.5; 2; 2.5; 3 |
Copper Ethernet | 100Gb/s | QSFP28 | 0.5; 1; 1.5; 2; 2.5; 3 |
Copper Ethernet | 40Gb/s | QSFP+ | 0.5; 1; 1.5; 2; 2.5; 3; 4; 5; 6; 7 |
Copper Ethernet | 25Gb/s | SFP28 | 0.5; 1; 1.5; 2; 2.5; 3; 4; 5 |
Copper Ethernet | 10Gb/s | SFP+ | 0.5; 1; 1.5; 2; 2.5; 3; 4; 5; 6; 7 |
Examples of copper-based Ethernet connections
Cables up to a length of 7 meters can still be made of copper; for lengths of more than 7 meters, you will have to use fibre-optic cables.
Infiniband | Speed | Form Factor | Length in Meters |
---|---|---|---|
HDR | 200Gb/s | QSFP56 | 0.5; 1; 1.5; 2; 2.5; 3 |
EDR | 100Gb/s | QSFP28 | 0.5; 1; 1.5; 2; 2.5; 3; 4; 5 |
FDR | 56Gb/s | QSFP+ | 0.5; 1; 1.5; 2; 2.5; 3; 4; 5 |
FDR10; QDR | 40Gb/s | QSFP+ | 0.5; 1; 1.5; 2; 2.5; 3; 4; 5; 6; 7 |
Examples of copper-based Infiniband connections
Implement High-Performance Network Projects with HAPPYWARE
Since 1999, HAPPYWARE has been a reliable and competent partner for implementing the most diverse IT projects. From simple barebone systems to complete server setups and virtualization services, you can count on the expertise of our company.
We are happy to provide individual advice. Our team of IT specialists can analyze your company's IT requirements and offer expert assistance during the planning phase. We tailor our services exactly to your needs, ensuring that you receive made-to-measure performance on the best terms.
On request, we can also provide turnkey systems as well as any components for the high-performance network of your choice.
If you have additional questions or are interested in our services for high-performance networks, please contact our HPC expert, Mr. Jürgen Kabelitz; he will be happy to answer any questions you may have.