
G1 vs G2 Nodes

Nam Pho

Director for Research Computing

Hello Hyak community, you may have noticed we've been relatively quiet on the infrastructure side over the past few months. Part of this has been due to data center constraints that limited our ability to grow the cluster; those constraints were addressed (for now) in April 2024. The good news is that we expect to begin ramping up deliveries of previously purchased slices over the coming weeks and months and, as a result, expanding the overall size of the cluster and checkpoint partitions.

G1 vs G2

Large clusters are preferably homogeneous, since interchangeable hardware simplifies resource scheduling. However, that is typical of systems built out fully at launch, not of gradually expanded, condo-style systems such as Hyak. The primary driver behind the change from g1 (generation 1) to g2 (generation 2) is the rapid advance in processor technology: the resulting performance gains make it untenable to keep purchasing an older processor for the sake of homogeneity.

Intel vs AMD and Nodes vs Slices

From its initial launch until June 11, 2024, when the first g2 nodes were introduced, klone was a fully Intel-based cluster. Specifically, all g1 nodes are based on Intel Xeon Gold 6230 ("Cascade Lake") CPUs, while the g2 nodes are based on AMD EPYC 9000 series ("Genoa") CPUs.
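If you want to confirm which generation a given job landed on, the CPU model string is enough to tell them apart. Here is a minimal check you can run inside an interactive session or a job script, using standard Linux tooling; the model strings in the comments are what these CPU families typically report:

```bash
# g1 nodes report an "Intel(R) Xeon(R) Gold 6230" model name,
# while g2 nodes report an "AMD EPYC 9xxx"-series model name.
lscpu | grep 'Model name'
```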

Nodes refer to the physical units we procure and install, i.e., servers; you don't need to worry about these. Slices are what researchers actually buy and contribute to the cluster as their "condo" (see the Hyak documentation to learn more about the condo model). A slice is a resource limit for your Hyak group backed by a commensurate contribution to the cluster. Sometimes (as in the g1 era) 1 node was 1 slice. In the g2 era, however, 1 node can consist of up to 6 slices, which maintains a consistent price point for performance across generations and gives the Hyak team flexibility to select more cost-effective configurations.

CPU and Architecture Optimizations

On a core-for-core basis, a g2 CPU core is faster than a g1 CPU core. If you are using an existing module that was compiled on a g1 (Intel) slice, there is a strong chance it will continue to work on a g2 (AMD) slice: Intel and AMD CPUs are both x86-based and share an overlapping instruction set. Unless you compile code that is highly optimized for one vendor, your code should run on both. In our spot check of several current modules (many contributed by our users), existing Intel-compiled code ran on the newer g2 slices without issue. However, if you recompile your code specifically for a g2 slice (or, retroactively, for a g1 slice), you may see a performance improvement on that architecture at the expense of generalizability, which limits the pool of resources your jobs can use.
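As a rough sketch of that trade-off, here is how the same source might be built once for a portable x86-64 baseline that runs on both generations and once tuned to the node it is compiled on. The GCC flags are one common convention, and myapp.c is a hypothetical source file:

```bash
# Portable build: a conservative baseline that runs on both g1 (Intel) and g2 (AMD) slices.
gcc -O2 -march=x86-64-v2 -o myapp_portable myapp.c

# Tuned build: -march=native optimizes for whichever node you compile on,
# so a binary built on a g2 node may not run, or may run poorly, on g1 nodes.
gcc -O3 -march=native -o myapp_tuned myapp.c
```

If you do tune a build for one generation, submitting to a partition restricted to that generation (see Resource Requests below) avoids landing on incompatible nodes.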

Storage and Memory

There are no special considerations for taking advantage of the local node SSDs on either node type. They are accessible as /src across both g1 and g2 nodes, and you can copy your data to and from there during a job. Note that /src is wiped on job exit and is not persistent beyond the life of your job.
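Here is a minimal sbatch sketch of that staging pattern; the partition, the /gscratch group path, and the analysis command are placeholders to adapt to your own setup:

```bash
#!/bin/bash
#SBATCH --job-name=local-ssd-staging
#SBATCH --partition=ckpt-all      # placeholder; use whichever partition fits your group
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=04:00:00

# Stage input data onto the node-local SSD for faster I/O during the job.
cp -r /gscratch/mygroup/dataset /src/

# Run your analysis against the local copy (hypothetical command).
# ./analyze /src/dataset --out /src/results

# Copy results back before the job ends; /src is wiped on job exit.
cp -r /src/results /gscratch/mygroup/
```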

There are no special considerations for taking advantage of the faster memory on the g2 nodes.

Ada and Hopper GPUs

There are no special considerations for running GPU code written before Ada (i.e., the L40(S)) or Hopper (i.e., the H100) on these newer GPUs; NVIDIA-based GPU codes are fully compatible across generations. Because these are a newer GPU generation, there are performance improvements from the GPUs alone. They are also attached to the newer g2 architecture, so they benefit from the improved CPU and memory performance of the surrounding system. If your supporting (i.e., non-GPU) code relies on any architecture optimizations, see the caveats above.
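Requesting a GPU does not change with the new generation; the only difference is where you send the job. The sketch below is illustrative: the partition, GPU count, and resource sizes are placeholders, and if you need a specific GPU model you can usually request a type with --gpus=TYPE:COUNT (check sinfo or the Hyak docs for the exact type names in use):

```bash
#!/bin/bash
#SBATCH --job-name=gpu-test
#SBATCH --partition=ckpt-g2       # g2 checkpoint pool, where the newer GPUs live
#SBATCH --gpus=1                  # one GPU of any available type
#SBATCH --mem=64G
#SBATCH --time=02:00:00

# Report which GPU the scheduler assigned to this job.
nvidia-smi --query-gpu=name --format=csv,noheader
```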

Resource Requests

If you purchased any of these slices and contributed them to the cluster, you will have received the specific partition names to use once they are deployed.

If you are interested in using these new resources when they are idle via the checkpoint partitions, there are some new considerations; see the Hyak documentation on checkpoint jobs for details. The new checkpoint partitions are listed below, with an example job script after the list:

  • ckpt-all if you want access to the entire cluster across both g1 and g2 resources (and every possible GPU). One possible concern is that a multi-node job spanning g1 and g2 nodes will probably take a performance hit: multi-node jobs often rely on gather operations and run only as fast as the slowest worker, so your g2 nodes will sit waiting for computation on the slower g1 nodes to complete.
  • ckpt if you want to target the Intel processors specifically (or only want to use the older GPUs). This is the status quo, and you shouldn't need to change your job scripts if you don't want to use the newer resources.
  • ckpt-g2 if you want to target the AMD processors specifically (or use only the newer GPUs).
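For example, switching between these pools is a one-line change in a job script. The sketch below targets ckpt-g2 and keeps a multi-node job within a single hardware generation; the account name, resource sizes, and application are placeholders:

```bash
#!/bin/bash
#SBATCH --account=mygroup         # placeholder; use your group's account
#SBATCH --partition=ckpt-g2       # or ckpt (g1 only), or ckpt-all (both generations)
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --time=12:00:00

# Multi-node jobs submitted to ckpt-all can span g1 and g2 nodes and will run
# at the pace of the slowest (g1) node; ckpt or ckpt-g2 keeps the job on one generation.
srun ./my_mpi_app
```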

As always, if you have any questions please reach us at help@uw.edu.