2 posts tagged with "ada"

G1 vs G2 Nodes

Nam Pho

Director for Research Computing

Hello Hyak community! You may have noticed we've been relatively quiet on the infrastructure front over the past few months. Part of this was due to data center constraints that limited our ability to grow the cluster; those constraints were addressed (for now) in April 2024. The good news is that we expect to begin ramping up deliveries of previously purchased slices over the coming weeks and months and, as a result, to expand the overall size of the cluster and the checkpoint partitions.

G1 vs G2

Large clusters are preferably homogeneous, since it helps with resource scheduling when any part of the cluster is interchangeable with any other. However, that is typical only of systems built out fully at launch, not of gradually expanded, condo-style systems such as Hyak. The primary driver behind the change from g1 (generation 1) to g2 (generation 2) is the rapid advance of processor technology: the performance gains make it untenable to keep deploying an older processor for the sake of homogeneity.

Intel vs AMD and Nodes vs Slices

From its initial launch until June 11, 2024, when the first g2 nodes were introduced, klone was a fully Intel-based cluster. Specifically, all g1 nodes are based on Intel Xeon Gold 6230 ("Cascade Lake") CPUs, while the g2 nodes are based on AMD EPYC 9000-series ("Genoa") CPUs.
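
If you are curious which generation of node your job landed on, checking the CPU model string works on either architecture. This is a minimal sketch; the partition and time values are placeholders to adapt to your group's allocation.

```bash
# Start a short interactive job (adjust partition/account to your group's values).
srun --partition=ckpt --time=00:10:00 --pty /bin/bash

# Inspect the CPU model: g1 reports Intel Xeon Gold 6230 ("Cascade Lake"),
# g2 reports an AMD EPYC 9000-series ("Genoa") part.
lscpu | grep "Model name"
```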

Nodes refer to the physical units we procure and install, such as a server; you don't need to worry about these directly. Slices are what researchers actually buy and contribute to the cluster as their "condo" (see the Hyak documentation to learn more about the condo model). A slice is a resource limit for your Hyak group, backed by a commensurate hardware contribution to the cluster. Sometimes (as in the g1 era) 1 node was 1 slice. In the g2 era, however, 1 node can consist of up to 6 slices, both to maintain a consistent price point for performance across generations and to give the Hyak team flexibility to select more cost-effective configurations.

CPU and Architecture Optimizations

On a core-for-core basis, a g2 CPU core is faster than a g1 CPU core. If you are using an existing module that was compiled on a g1 (Intel) slice, there is a strong chance it will continue to work on a g2 (AMD) slice: Intel and AMD both make x86-64 CPUs with an overlapping instruction set, so unless your code is compiled with heavy vendor-specific optimizations it should run on both. In our spot check of a few current modules (many contributed by our users), existing Intel-compiled code ran without issues on the newer g2 slices. However, if you choose to recompile your code on a g2 slice (or, retroactively, on a g1 slice), you may see a performance improvement on that specific architecture at the expense of generalizability, since the resulting binary may only run on that subset of nodes, limiting your resource availability.
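
To illustrate the trade-off, here is a hedged sketch using GCC; the target names are standard GCC -march values (a reasonably recent GCC is assumed), and app.c is a placeholder for your own source.

```bash
# Portable build: runs on both g1 (Intel) and g2 (AMD) nodes.
gcc -O2 -march=x86-64-v2 -o app app.c

# Architecture-specific builds: faster on the matching node type, but may
# fail with "illegal instruction" on the other.
gcc -O2 -march=cascadelake -o app_g1 app.c   # g1: Intel Cascade Lake
gcc -O2 -march=znver4      -o app_g2 app.c   # g2: AMD Genoa (Zen 4)

# Or compile on the node type you plan to run on and let GCC detect it.
gcc -O2 -march=native -o app app.c
```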

Storage and Memory

There are no special considerations for taking advantage of the local node SSDs on either node type. They are accessible as /src across both g1 and g2 nodes, and you can copy your data to and from there during a job. Note that /src is wiped on job exit and is not persistent beyond the life of your job.
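
A common staging pattern is sketched below; the /gscratch paths and the per-user subdirectory under /src are placeholders, so adapt them to your own data layout.

```bash
#!/bin/bash
#SBATCH --job-name=local-ssd-staging
#SBATCH --partition=ckpt
#SBATCH --time=04:00:00

# Stage input data onto the node-local SSD for faster IO during the job.
mkdir -p "/src/$USER"
cp -r /gscratch/mylab/input "/src/$USER/"

# ... run your workload against the local copy here ...

# Copy results back before the job ends; /src is wiped on job exit.
cp -r "/src/$USER/results" /gscratch/mylab/
```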

There are no special considerations for taking advantage of the faster memory on the g2 nodes.

Ada and Hopper GPUs

No special considerations are needed to run existing pre-Ada or pre-Hopper GPU code on the new Ada (i.e., L40 and L40S) and Hopper (i.e., H100) GPUs; NVIDIA-based GPU code is compatible across generations. Because these are a newer GPU generation, you get performance improvements just by using them. They are also attached to the newer g2 architecture, so they additionally benefit from the improved CPU and memory performance of the surrounding system. If your supporting (i.e., non-GPU) code relies on any architecture-specific optimizations, see the caveats above.
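
If you compile CUDA code yourself and want to target the new GPUs explicitly, the sketch below builds a fat binary; compute capabilities 8.9 (Ada) and 9.0 (Hopper) are NVIDIA's published values, and kernel.cu is a placeholder file name.

```bash
# Build for an older generation (sm_80), Ada (sm_89), and Hopper (sm_90),
# plus PTX for forward compatibility with future GPUs.
nvcc -O2 \
  -gencode arch=compute_80,code=sm_80 \
  -gencode arch=compute_89,code=sm_89 \
  -gencode arch=compute_90,code=sm_90 \
  -gencode arch=compute_90,code=compute_90 \
  -o kernel kernel.cu
```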

Resource Requests

If you purchased any of these slices and contributed them to the cluster, you will have received the specific partition names to use once they are deployed.

If you are interested in using these new resources when they are idle via the checkpoint partition, there are new considerations; you can read more in the Hyak documentation. The new checkpoint partitions are listed below, with example job directives after the list:

  • ckpt-all if you want access to the entire cluster across g1 and g2 resources (and every possible GPU). One possible concern: if you run a multi-node job that spans g1 and g2 nodes, you will probably see a performance hit. Multi-node jobs often rely on gather operations and run only as fast as the slowest worker, so your g2 nodes will be held back waiting for computation on the slower g1 nodes to complete.
  • ckpt if you want to stay on the Intel processors (or only want to use the older GPUs). This is the status quo, and you shouldn't need to change your job scripts if you don't want to use the newer resources.
  • ckpt-g2 if you want to target the AMD processors (or use only the newer GPUs).
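
As a sketch, selecting one of these partitions is a one-line change in a batch script; the account and resource values below are placeholders.

```bash
#SBATCH --account=mylab          # placeholder: your group's account
#SBATCH --partition=ckpt-all     # or: ckpt (g1/Intel), ckpt-g2 (g2/AMD)
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4
```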

As always, if you have any questions, please reach out to us at help@uw.edu.

February 2024 Maintenance Details

Nam Pho

Director for Research Computing

Hello Hyak community! We have a few notable announcements regarding this month’s maintenance. If the hyak-users mailing list e-mail didn’t fully satisfy your curiosity, hopefully this expanded version will answer any lingering questions.

GPUs

  • Software: The GPU driver was upgraded to the latest stable version (545.29.06). The latest CUDA 12.3.2 is also now provided as a module. You are also encouraged to explore container-based (i.e., Apptainer) workflows, which bundle various versions of CUDA with your software of interest (e.g., PyTorch); prebuilt images are available from NGC, and a short example follows this list. NOTE: Be sure to pass the --nv flag to Apptainer when working with GPUs.

  • Hardware: The Hyak team has also begun the early deployments of our first Genoa-Ada GPU nodes. These pair cutting-edge NVIDIA L40 GPUs (code named "Ada") with the latest AMD processors (code named "Genoa"); 64 GPUs were released to their purchasing groups two weeks ago, and an additional 16 GPUs will be released later this week. These new resources are not currently part of the checkpoint partition, but over the coming weeks we will publish guidance on making use of idle resources directly in the Hyak user documentation as we receive feedback from these initial researchers.
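
As a hedged sketch of the container workflow mentioned above (the NGC image tag is only an example; choose one that matches the framework versions you need):

```bash
# Pull a CUDA-bundled PyTorch image from NGC (tag is an example).
apptainer pull pytorch.sif docker://nvcr.io/nvidia/pytorch:24.01-py3

# Run with --nv so the host GPU driver and devices are mapped into the container.
apptainer exec --nv pytorch.sif python3 -c "import torch; print(torch.cuda.is_available())"
```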

Storage

  • Performance Upgrade: In recent weeks, AI/ML workloads have been increasingly stressing the primary storage on klone (i.e., "gscratch"). Part of this was attributable to the run-up to the International Conference on Machine Learning (ICML) 2024 full paper deadline on Friday, February 2, but it also reflects a broader trend of increasing demands from data-intensive research. The IO profile was at times so heavy that our systems automation throttled checkpoint capacity to near 0 in order to keep storage performance up and prioritize general cluster navigation and contributed resources. We have an internal tool called iopsaver that automatically reduces IOPS by intelligently requeuing the checkpoint jobs generating the highest IOPS, while concurrently limiting the total number of active checkpoint jobs until overall storage is back within its operating capacity. At times over the past few weeks you may have noticed that iopsaver had reduced checkpoint job capacity to near 0 to maintain overall storage usability.

    During today's maintenance, we upgraded the memory on the existing storage servers so that we can enable Local Read-Only Cache (LROC), although we don't anticipate it will be live until tomorrow. Once enabled, LROC allows the storage cluster to use previously idle SSD capacity to cache frequently accessed files on a more performant storage tier. We expect LROC to make a big difference, since the majority of the recent IO bottlenecking was attributed to a high volume of read operations. As always, we will continue to monitor developments and adjust our policies and solutions accordingly to benefit the most researchers and users of Hyak.

  • Scrubbed Policy: In the recent past this space has filled up. As a reminder, scrubbed is a free-for-all, communal space for data you only need temporarily, when you need to burst past the usual allocations from your other group affiliations. To ensure greater equity in its use, we have instituted a limit of 10 TB and 10 million files per user in scrubbed. This impacts <1% of users, as only a handful of users were using more than 10 TB of scrubbed quota; a quick way to check your own usage is sketched below.
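
This is a minimal sketch using standard tools; the path assumes your scrubbed data lives under /gscratch/scrubbed/$USER, so adjust it if your directory differs.

```bash
# Total size of your scrubbed data (10 TB per-user limit).
du -sh /gscratch/scrubbed/$USER

# Number of files you own there (10 million per-user limit).
find /gscratch/scrubbed/$USER -type f | wc -l
```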

Questions?

Hopefully you found these extra details informative. If you have any questions for us, please reach out to the team by emailing help@uw.edu with Hyak somewhere in the subject or body. Thanks!