Updated on 2025-04-30 GMT+08:00

Ultra-high Performance Computing ECSs

Overview

Ultra-high performance computing ECSs are designed to meet high-end computational needs, such as industrial simulation, molecular modeling, and computational fluid dynamics. In addition to substantial CPU power, these ECSs offer low-latency RDMA networking over EDR InfiniBand NICs to support memory-intensive computational requirements.

Hyper-threading is disabled for this type of ECS by default, so each vCPU corresponds to one physical CPU core.
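
Because each vCPU maps to a physical core, a quick check inside the guest can confirm that simultaneous multithreading is off. A minimal sketch in Python, assuming a Linux guest with lscpu available:

    import subprocess

    # Read the "Thread(s) per core" value reported by lscpu on a Linux guest.
    out = subprocess.run(["lscpu"], capture_output=True, text=True, check=True).stdout
    threads_per_core = next(
        int(line.split(":")[1]) for line in out.splitlines()
        if line.startswith("Thread(s) per core")
    )

    # With hyper-threading disabled, each core exposes exactly one thread (vCPU).
    assert threads_per_core == 1, "expected one thread per core"
    print("Threads per core:", threads_per_core)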

Available series and their features are listed in the following table.

Available now: H2

Table 1 Ultra-high performance computing ECS features

Series: H2

Compute:
  • vCPU to memory ratio: 1:8 or 1:16
  • Number of vCPUs: 16
  • Intel® Xeon® Processor E5 v4 family
  • Basic/Turbo frequency: 3.2 GHz/3.6 GHz
  • Large memory capacity and more processor cores than other types of ECSs

Disk Type:
  • High I/O
  • General Purpose SSD
  • Ultra-high I/O
  • Extreme SSD

Network:
  • Ultra-high PPS throughput
  • 100 Gbit/s EDR InfiniBand network (InfiniBand NIC bandwidth: 100 Gbit/s)
  • Maximum PPS: 900,000
  • Maximum intranet bandwidth: 13 Gbit/s
  • An ECS with higher specifications has better network performance.
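
The compute figures above pin down the memory sizes shown later in Table 2: 16 vCPUs at a 1:8 ratio give 128 GiB, and at 1:16 give 256 GiB. A trivial Python check of that arithmetic:

    # Memory per flavor = number of vCPUs x GiB of memory per vCPU.
    vcpus = 16
    for gib_per_vcpu in (8, 16):
        print(f"1:{gib_per_vcpu} ratio -> {vcpus * gib_per_vcpu} GiB")  # 128, then 256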

Ultra-High Performance Computing H2 ECSs

Overview

H2 ECSs are designed to meet high-end computational needs, such as industrial simulation, molecular modeling, and computational fluid dynamics. They combine substantial CPU power with low-latency RDMA networking over 100 Gbit/s EDR InfiniBand NICs to support memory-intensive computational workloads.

Scenarios

High-end computing, such as industrial simulation, molecular modeling, and computational fluid dynamics

Specifications

Table 2 H2 ECS specifications

Flavor        | vCPUs | Memory (GiB) | Max./Assured Bandwidth (Gbit/s) | Max. PPS (10,000) | Max. NIC Queues | Local Disks (GiB) | Virtualization
h2.3xlarge.10 | 16    | 128          | 13/8                            | 90                | 8               | 1 × 3,200         | KVM
h2.3xlarge.20 | 16    | 256          | 13/8                            | 90                | 8               | 1 × 3,200         | KVM
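
For illustration only, the sketch below requests one of these flavors through openstacksdk, assuming the region exposes an OpenStack-compatible compute endpoint; the cloud profile, image name, and network ID are placeholders rather than values from this document:

    import openstack

    # Credentials come from clouds.yaml; "mycloud" is a placeholder profile name.
    conn = openstack.connect(cloud="mycloud")

    # Look up the H2 flavor from Table 2 and a supported guest image by name.
    flavor = conn.compute.find_flavor("h2.3xlarge.10")
    image = conn.compute.find_image("CentOS 7.2 64bit")

    # The network UUID is hypothetical; substitute a real VPC subnet ID.
    server = conn.compute.create_server(
        name="h2-demo",
        flavor_id=flavor.id,
        image_id=image.id,
        networks=[{"uuid": "REPLACE-WITH-NETWORK-ID"}],
    )
    conn.compute.wait_for_server(server)
    print("Server status:", server.status)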

Notes on Using H2 ECSs

  • H2 ECSs do not support OS reinstallation or change.
  • H2 ECSs do not support specifications modification.
  • H2 ECSs do not support cold migration, live migration, or high availability (HA).
    • If the host is faulty or subhealthy, you need to stop the ECS for hardware repair.
    • In case of system maintenance or hardware faults, the ECS will be redeployed to another host. The local disk data of the ECS will not be retained.
  • H2 ECSs support the following OSs:
    • For public images:
      • CentOS 7.2 64bit
      • CentOS 6.5 64bit
    • For private images:
      • CentOS 6.5 64bit
      • CentOS 7.2 64bit
      • CentOS 7.3 64bit
      • SUSE Linux Enterprise Server 11 SP4 64bit
      • SUSE Linux Enterprise Server 12 SP2 64bit
      • Red Hat Enterprise Linux 7.2 64bit
      • Red Hat Enterprise Linux 7.3 64bit
  • H2 ECSs use InfiniBand NICs that provide a bandwidth of 100 Gbit/s.
  • Each H2 ECS uses one 3.2 TB PCIe SSD card for temporary local storage.
  • If an H2 ECS is created from a private image, download the required version (4.2-1.0.0.0) of the InfiniBand NIC driver from the official Mellanox website and install it on the ECS after creation, following the instructions provided by Mellanox.
  • For SUSE H2 ECSs, if IP over InfiniBand (IPoIB) is required, manually configure an IP address for the InfiniBand NIC after installing the InfiniBand driver (see the sketch after this list).
  • After you delete an H2 ECS, the data stored on its local SSDs is automatically cleared. Do not store persistent data on the local SSDs while the ECS is running.
  • An H2 ECS is billed even when it is stopped. To stop the billing, delete the ECS and its associated resources.
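
As noted above for SUSE IPoIB, the InfiniBand interface needs an IP address once the driver is installed. A minimal runtime sketch, assuming the driver exposes the interface as ib0 and using a placeholder address on the IB subnet:

    import subprocess

    # Placeholder values: the interface name and address depend on your environment.
    IB_INTERFACE = "ib0"
    IB_ADDRESS = "192.168.0.2/24"

    # Assign the address and bring the interface up with iproute2.
    subprocess.run(["ip", "addr", "add", IB_ADDRESS, "dev", IB_INTERFACE], check=True)
    subprocess.run(["ip", "link", "set", IB_INTERFACE, "up"], check=True)

    # Print the resulting configuration for verification.
    subprocess.run(["ip", "addr", "show", IB_INTERFACE], check=True)

To persist the address across reboots, put the equivalent settings into the distribution's network configuration (for example, /etc/sysconfig/network/ifcfg-ib0 on SUSE).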