klone Users Storage Optimizations

Nam Pho

Director for Research Computing
note

There are steps you, as a researcher using klone, can take to limit the impact of whatever else is happening on the cluster on your individual workflows.

While some of what precipitated this conversation is the current state of the storage (i.e., mmfs1 or gscratch), there are several things you can do as a researcher to both reduce the load on gscratch and help insulate your jobs from cluster-wide storage slowdowns.

1. Use local node SSDs.#

Each node on the cluster has a local SSD with 350+ GB of space available for use by user jobs. This space is available only to jobs running on that node, and all contents are purged when the user's last job running on the node completes. It is mounted as /scr and /tmp (both paths go to the same place) on all compute nodes.

If the input data, Apptainer (Singularity) images, or other files used by your job will fit, copy them to the SSD (via cp, rsync, etc.) once at the beginning of your job and read them from there for the remainder of the run. This puts less load on the central storage, helps insulate your job from any instances of central storage slowness, and can often result in better overall job performance.

Slurm has a command called sbcast [www] that is useful for efficiently copying files to all nodes used in a multi-node job as part of an sbatch script.
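
If it helps, here is a minimal sketch of an sbatch script that stages its inputs this way; the account name, paths, and file names below are hypothetical placeholders, not prescriptions.

#!/bin/bash
#SBATCH --account=mylab        # hypothetical account name
#SBATCH --partition=ckpt
#SBATCH --nodes=1
#SBATCH --time=4:00:00

# Stage inputs and the container image on the node-local SSD once.
cp /gscratch/mylab/containers/myenv.sif /tmp/
cp /gscratch/mylab/inputs/data.tar.gz /tmp/
tar -xzf /tmp/data.tar.gz -C /tmp/

# For a multi-node job, sbcast copies a file to every allocated node:
# sbcast /gscratch/mylab/containers/myenv.sif /tmp/myenv.sif

# Read from /tmp for the rest of the run.
apptainer exec /tmp/myenv.sif python3 analyze.py /tmp/data

# Copy only the results you need to keep back to central storage.
cp -r /tmp/results /gscratch/mylab/results/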

For files that need to be kept after the job completes, it is generally best to write them directly to the central storage. Because new files are written directly to the very fast NVMe layer, such writes are less likely to impact overall storage performance. That said, it is still beneficial to write intermediate job files to the local SSD whenever possible.

2. Code for efficient file IO.#

While this can be a very complicated topic, a great deal of overall job performance can be gained by thoughtful and judicious use of file input-output (IO). Some general tips:

  â€ą Keep in mind that file access is orders of magnitude slower than memory access, and processes often have to completely "stop and wait" for disk IO operations to complete. Minimizing file IO operations, especially inside the "inner loops" of programs, can greatly speed up job completion and helps reduce load on the cluster's central storage.
  â€ą Fewer, larger file IO operations are generally more efficient than multiple smaller file operations accessing the same data (see the sketch after this list).
  â€ą When possible, store data in an efficient format such as HDF5 instead of many small files.
  â€ą "Open/read once, access many times" if job memory permits.

3. Containerize your environment.#

As mentioned above, minimizing the number of files you need to access can help reduce the number of input / output operations per second (IOPS) happening on the cluster. For example, a Python miniconda environment can create hundreds or even thousands of small files when you install different library dependencies. While Python is a common compute environment, this can be generalized to most other programs you may need. When you containerize your environment, this gets reduced to a single file. A brief introduction to Singularity (now called Apptainer) can be found here. As a side benefit, containerizing your environment–making it a single file–makes it much easier to move it around (see #1 above).
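
As a rough, hypothetical illustration of what this looks like with Apptainer (the package list and file names are illustrative, not a recipe for your particular environment), a definition file bakes an entire conda environment into one image:

# environment.def
Bootstrap: docker
From: continuumio/miniconda3

%post
    conda install -y numpy scipy pandas
    conda clean -a -y

%runscript
    exec python3 "$@"

Building it (on a system where you have build privileges, or with --fakeroot where supported) produces a single .sif file that you can copy to the local SSD (see #1 above) and run from there:

apptainer build myenv.sif environment.def
apptainer exec myenv.sif python3 myscript.py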

4. Stay under quota.#

Constantly hitting your inode (i.e., file count) or block (i.e., GB or TB) quotas can cause extra storage slowness. If you need a bump on either, please reach out to discuss your options. As a reminder, you can use the hyakstorage command on klone to display current quota usage for all of your filesets as well as your home directory. Please note that this output is updated once an hour, so it will take time to reflect any overages.
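
If you are bumping into the inode limit and aren't sure where the files are, one quick sketch (with a hypothetical lab path; note that du itself generates metadata load, so run it sparingly and from an interactive session) is:

# Current block and inode usage for your filesets (updated hourly):
hyakstorage

# Count files (inodes) under each top-level directory of your lab share:
du --inodes -d 1 /gscratch/mylab | sort -n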

5. Report issues.#

While the Hyak team has an extensive monitoring and alerting framework in place to help us proactively determine when things may be going wrong, not all causes of a slow user experience are currently captured by our metrics. Furthermore, our team generally interfaces with the cluster in different ways than our users, so we may not be exposed to the same pain points until they are reported to us. If you've run into a performance issue, please submit a ticket by emailing help@uw.edu. Please provide any symptoms you are observing, along with the date, timeframe, job IDs (if applicable), the commands you are running with their full output, etc. Even if you don't need or want a reply from us, it is still helpful to hear from you; feel free to say "no response needed" or something along those lines so we know how to respond.

See also:

Hyak Team Storage Optimizations

Nam Pho

Director for Research Computing
note

The Hyak team has taken six concrete steps to stabilize and optimize storage on klone over the past few weeks.

While the storage on klone (i.e., mmfs1 or gscratch) may appear to be a monolithic device, it is an extremely complex cluster in its own right. This storage cluster is mounted on every klone node: so despite appearing as "on the node", gscratch physically resides on specialized storage hardware separated from the compute resources of klone. The storage is accessed across a high-speed, ultra low-latency HDR Infiniband network, and is designed to be scalable independent of KLONE’s compute resources.

As mentioned in an earlier blog post today, our incoming hardware expansion will drastically increase the amount of demand the storage cluster can handle. In the meantime, the Hyak team has taken measures to help maintain a usable level of storage performance for users and jobs:

1. Improved internal storage metrics gathering and visibility.#

klone slurm metrics

The Hyak team improved storage-cluster metric gathering and visibility, allowing us to correlate those metrics to reports of poor user experience, and to make data-driven tuning and storage policy decisions.

In the figure above we have visibility into whether an abnormally high number of jobs have errors that might suggest underlying storage or other user experience issues.

2. Created custom filesystem migration policies to optimize the use of the NVMe layer.#

The bulk of the storage capacity on klone is stored on rotary hard disk drives totalling approximately 1.7 Petabytes (PB) of raw storage. In addition to the hard disk storage, there is a much smaller, extremely fast–and expensive–pool of NVMe "flash" storage that functions both as a write buffer for new files written to the filesystem, and also as a read-cache-like layer where files can be read without causing load on the rotary disks.

The Hyak team has also optimized the file placement policy: files most likely to generate heavy load reside in the limited space of the NVMe layer, ensuring that no storage load is generated on the hard disk layer when those files are repeatedly accessed.

klone storage policy

In the figure above you can see that the flash tier (green line) is allowed to fill to 80% capacity from job writes, at which point the migration policy kicks in and drains it until the flash tier is down to 65% full. For the majority of the past several weeks, things worked as expected. However, there were a few recent events where jobs were producing data faster than the storage system could move it off the flash tier, and the flash tier reached 100% full. Giving the migration process too high a priority results in "slowness" in the user experience, so we have since been tuning the aggressiveness of this migration process to reduce the likelihood of it occurring again.

3. Added QoS policies to improve worst-case filesystem responsiveness.#

The klone filesystem has a coarse Quality-of-Service (QoS) tuning facility that allows the filesystem to cap the rate of storage operations for various types of storage input-output (IO). The Hyak team has used this facility in two different ways:

  1. First, to limit the storage load impact when the NVMe layer, described above, needs to free up space by moving files to the hard drive layer.

  2. Secondly, to moderate the amount of storage load that can be generated by any single compute node in the cluster. This way, outlier jobs in terms of storage load generation are less likely to have an outsized performance impact on the storage.

4. Manually identifying jobs causing a disproportionate impact on storage performance.#

klone storage metrics

Utilizing metrics and old-fashioned sleuthing, we have been manually tracking down individual jobs that appear to be having a disproportionate and/or unnecessary impact on storage performance, and working with users to address the storage performance impact of these jobs.

In the figure above we can see that job IO follows a power-law dynamic: a small handful of jobs is often responsible for the majority of the load. In this case a single job on a single node is responsible. When users report storage "slowness" this discrepancy can be even more pronounced, but we are able to quickly narrow down which specific nodes are responsible and address these corner cases.

5. Dynamically reducing the number of running checkpoint partition jobs.#

As of April 19th, 2022, we have implemented data-driven automation to moderate storage load by dynamically managing the number of running checkpoint (ckpt) partition jobs. When the number of running ckpt jobs is being limited, pending jobs will show AssocGrpJobsLimit as the REASON for not starting.
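
If you want to check whether your own pending ckpt jobs are being held by this limit, one way to see the scheduler's reason (the output format string below is just an example) is:

# Show your pending jobs and the reason they have not started:
squeue -u $USER -t PENDING -o "%.18i %.9P %.8j %.8T %.20r"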

Please note that non-ckpt jobs (i.e., jobs submitted to nodes your lab contributed to the cluster) are not limited in any way. The social contract when joining the Hyak community is that you get access to the nodes your lab contributes on-demand, and–if and when they are idle–access to other labs’ resources on the cluster. However, access to other labs’ resources isn’t and hasn’t ever been guaranteed: it’s just that there’s often a steady state idle capacity for users to "burst" into by submitting ckpt jobs.

In aggregate, 'Storage Load' is a consumable resource just like CPU cores or memory, albeit one that impacts the whole cluster when it is over-consumed. The Slurm cluster scheduler cannot directly consider storage load availability when evaluating resources for starting ckpt jobs, hence our need to automate. Our new tooling limits the storage performance impact from ckpt jobs in order to improve storage stability for everyone.

klone storage load

The red and blue lines represent the two storage servers we have most closely tied to the user experience; 50% load is the threshold we aim to remain at or under, dynamically reducing the number of running ckpt jobs whenever it is exceeded.

So far, this appears to be very effective at moderating the overall storage load, preventing the storage cluster from becoming unusably slow and avoiding other storage-performance issues. We will continue to tune it in search of the best balance between idle resource utilization via ckpt and storage performance.

6. Expanding the team.#

The storage sub-system is a complicated machine in its own right that needs much more care and attention, and the current Hyak team is stretched incredibly thin as is. We have started the process of hiring a dedicated research data storage systems engineer to focus on optimizing storage going forward.

See also:

An update on klone storage

Nam Pho

Director for Research Computing
note

klone has experienced exponential growth over the first year since its launch, necessitating long-planned storage upgrades. The current estimate for deployment of this hardware is between June and July 2022.

The 3rd generation Hyak cluster, klone, launched in spring 2021 with 144 HPC nodes and 192 GPUs. In just a single year, we’ve grown to over 384 HPC nodes (a 166% increase) and 448 GPUs (a 133% increase). klone has more than doubled in size, and while some of this growth comes from long-standing Hyak members migrating to the new cluster, much of our increased capacity comes from hundreds of new researchers joining the Hyak community. We’ve seen existing sponsors such as the College of Engineering increase their already substantial footprints by 60%, we’ve welcomed new sponsors such as UW Bothell, UW Tacoma, and the Puget Sound Institute, and seen over 1000% growth–seriously–in our new self-sponsored tier for investigators and faculty without an existing Hyak sponsor affiliation. As with any large project, during KLONE’s initial planning stages we made assumptions about our growth rate & the types of research we would be supporting: assumptions that have been shattered by our growth over the past year. It was never a question of if we would need to upgrade our support infrastructure–like storage–but when, and our rapid growth significantly accelerated our upgrade timeline.

Monitoring – and developing more monitoring for – the Hyak clusters is a central responsibility of our team. The status quo at the beginning of 2022 was to track down errant jobs or workflows when storage issues came up. In almost every instance, we were able to pinpoint the problematic job and work with the researcher to shape their code into a normal IO profile. Pausing jobs and providing best practices was sufficient to keep the storage performance solid for everyone. However, starting around the last week of March 2022, we started having trouble finding an obvious job, or even a set of jobs, impacting storage performance.

The truth is that our baseline load had shifted. Due to our tremendous growth, things researchers had previously been doing without issue were now causing problems. We also noticed an evolution of the types of research happening on klone. The Hyak community diversified from traditional HPC workflows (e.g., simulations) into more data-intensive areas like data science (e.g., R jobs), deep learning, and artificial intelligence research. We accelerated our discussions with storage vendors: in a few short months, an expansion went from an eventuality to an immediate and pressing need. Still, we tried several last-minute optimizations to see if we could prevent spending all that money. We are serious about our fiduciary duty, as stewards of this research platform, to provide the most value for the Hyak community with the dollars we are entrusted with. We knew a storage upgrade for klone would cost hundreds of thousands of dollars and we needed absolute certainty that we couldn’t engineer a way around that expense.

klone storage policy

The storage on klone (i.e., mmfs1 or gscratch) might pretend to be a mere folder or directory, but in truth it's an abstraction of a highly complex system. To provide cost-effective, high-performance storage, a small high-speed NVMe "flash" layer acts both as a write buffer for the slower spinning disks–which make up the vast majority of the cluster's capacity–and as a high-speed "cache" for recently & frequently accessed small files. While presented as a single folder to the researcher, behind the scenes the storage cluster moves data between these tiers to balance performance. As seen in the figure above, when the flash layer reaches 80% capacity, a process begins to drain it by moving less frequently used files to the spinning-disk layer until the flash layer reaches 65% capacity. You might also notice that despite our precautions and monitoring, as of April 9, 2022, we were no longer able to migrate data from flash to spinning disks faster than our users were writing. This was the final deciding factor for us, and we initiated our long-standing plan to upgrade the storage for klone.

This necessary investment to upgrade storage will double both the maximum input-output operations-per-second (IOPS) and throughput (storage bandwidth), providing much needed overhead for current workflows as well as accommodating future growth. We are excited for this upgrade – and are doing everything we can to expedite its deployment – but due to the sheer amount of hardware we’re purchasing, we’ve been swept up in the pandemic-induced global supply chain crunch. Our vendors have predicted that the end of July is the worst-case scenario, but that a June delivery is also possible. We will update the Hyak community as we know more. As always, we welcome any questions: if you want to speak with us about something, send an email to the Hyak team via help@uw.edu and we’ll follow up with you.

See also:

OS upgrade for klone

Nam Pho

Director for Research Computing
note

klone has a new OS: we upgraded from CentOS 8 to Rocky Linux.

Background#

In late 2020, while building the current-generation cluster, klone, our previous-generation cluster, MOX, was running CentOS 7 – which was nearing end-of-life support. We used the transition to klone as an opportunity to deploy CentOS 8, the world’s most popular OS in academic research computing environments. Unfortunately, around the time we were wrapping up KLONE’s software stack, the CentOS project announced [1, 2] a transition of their own: Red Hat unilaterally terminated the development of CentOS as an open-source version of Red Hat Enterprise Linux (RHEL). CentOS would become an upstream version of RHEL – in other words, more experimental and ultimately less stable.

Rocky Linux

As the dust from this announcement settled, a consensus emerged: Rocky Linux, led by the initial founder of the CentOS project, Greg Kurtzer, would become the CentOS successor.

The Transition#

Fast-forward to late 2021: after our summer '21 launch of klone, and our fall '21 cluster capacity expansion, we were finally able to turn our attention to the CentOS to Rocky migration. And just in time, too, because CentOS 8–the operating system we deployed just months earlier–would be officially unsupported after December 31, 2021.

Drake on CentOS and Rocky

Our goal was to make this OS transition as smooth and unnoticeable to our users as possible. After all, this is our mission: we take care of the tech so that you can take care of the science. Rocky, like CentOS, is intended to be a bug-for-bug, open-source version of RHEL, and with its talented, globe-spanning team of developers, we were confident that the impact of this transition would be minimal.

We began the transition with our backend during the December ‘21 maintenance: the klone head node, our Slurm scheduler, was successfully migrated to Rocky 8. So far so good! During our next maintenance, January ‘22, we migrated all the compute node images to Rocky. A handful of users reported code-compiling issues, which we were able to resolve, but otherwise it was uneventful. We took extra care on the final piece of the Rocky migration–the login nodes–due to their accessibility from the wider internet. And, as of today’s maintenance, we are excited and relieved to report that klone is now a 100% Rocky cluster! đŸ„ł

Summary#

The Hyak team was forced to revisit a major OS migration, mere months after the initial launch of klone. This is highly unusual–and no small feat–but we have prevailed. We deployed a widely-supported, open-source OS with enterprise-level stability, while remaining cost-effective to the research community at the University of Washington. With this work behind us, we’ve arrived at a sustainable platform for the life of the klone cluster. We’re excited for the future of klone, and excited to redirect our time back to feature development.

We want to give a huge thank-you to our users for their patience during this migration period. Spoiler alert: Rocky won and it’s a good thing!

Rocky Wins!

Fairshare improvements on klone

Nam Pho

Director for Research Computing
note

We have adjusted legacy fairshare-related settings to account for GPU and large-memory node contributions and usage, in order to help allocate checkpoint resources more fairly.

History#

In fall 2019 (almost two years ago to the day) the Hyak team received our first Turing generation GPU node. Hyak has had a modest GPU footprint in the past as far back as a decade ago with the first generation cluster (called "IKT") and its pre-Pascal generation cards. In 2015 we acquired a smaller test bed of Pascal generation GPUs for the second generation cluster (called "MOX"). There were never more than a dozen GPUs in either the IKT or MOX clusters, but the introduction of Turing GPUs marked a resurgence of interest in these accelerators among the UW research community. In the last two years, we've substantially expanded our capabilities to over 300 GPUs.

Background#

Hyak clusters work on a "condo" model: labs are able to utilize their contributed hardware on-demand as well as take advantage of idle capacity from other groups' hardware via the checkpoint (ckpt) partition. Your checkpoint priority — or "fairshare" in Slurm scheduler parlance — is weighted such that your fairshare is directly proportional to your lab’s contribution to the cluster. In the MOX days, GPU users tended to stay within their contributed hardware partitions and rarely made use of checkpoint. We attributed this to a mental shift: students were used to using a single resource, like a desktop computer, rather than a shared cluster of computing resources. However, with the migration to the third generation Hyak cluster (called "klone") and its new QoS scheduling system and the increasing comfort of students using a shared platform, GPU utilization in the checkpoint partition has increased as well. This is a good thing: we want groups to benefit from their Hyak membership in the cluster and take advantage of idle cluster resources beyond their initial hardware contributions. This is a primary tenet of our social contract with the Hyak community: as a node contributor to the cluster, you have access to idle resources of the whole cluster.

Problem#

Fairshare was simpler to calculate in the pre-GPU days because our infrastructure was homogeneous: one node contributed to the cluster equaled one fairshare unit. During the last two years of exponential GPU adoption on Hyak, the fairshare calculation did not evolve: 1 HPC node was still treated the same as 1 GPU node at 1 fairshare unit. This didn't hold because a GPU node can cost 4 to 8 times (or more) as much as a traditional HPC node. The result was that labs with GPU or other specialty (e.g., high-memory) nodes tended to have smaller fairshares than groups with the same dollar investment in traditional CPU nodes only. In practice, this meant GPU users often competed directly for resources with non-GPU jobs in the checkpoint partition on an uneven playing field.

Solution#

Taking into consideration all of this information, as well as the fact that you can request as little as 1 GPU or 1 CPU from the scheduler, we have adjusted the fairshare calculations as follows:

  â€ą Financially: 1 GPU card is roughly equivalent to 40 CPU cores on a dollar basis, so the cost normalization is 40:1 in favor of GPUs.
  â€ą Scarcity: 1 server typically holds 8 GPU cards or 40 CPU cores, so the scarcity normalization is 5:1 in favor of GPUs.
  â€ą Combining the financial and scarcity considerations above, the final weighting is 200:1 in favor of GPUs. In other words, 1 GPU card is worth 200 times as much as a single CPU core in the eyes of the scheduler when factored into your checkpoint fairshare. Please note that this weighting applies to the higher-memory GPU cards (i.e., gpu-rtx6k); less expensive GPUs have commensurately less weight.

Summary#

With the October monthly maintenance today we have introduced a new fairshare weighting system on the klone cluster's checkpoint (ckpt) partition that commensurately acknowledges GPU labs for their contributions to the Hyak community. This has no impact on jobs submitted to non-ckpt partitions.

Migrating from mox to klone

Nam Pho

Director for Research Computing

If you were previously a proficient mox user and now find yourself on klone, what's new or different? This is a high-level summary; please consult the documentation [link] for more details.

note

Updated August 10, 2021 to include additional information specific for GPU users.

Login#

  â€ą Logging in was previously to mox.hyak.uw.edu; now it's klone.hyak.uw.edu.
  â€ą As a reminder, login nodes are only for connecting to the cluster, navigating the cluster file system, and submitting jobs. This applies to both klone and mox. Do not compile code on the login nodes or run any programs that require significant compute (get a session with Slurm instead).

Data Transfer#

  â€ą Only use the login node to transfer data on klone. On mox you'd have used a build node (or the login node if the transfer wasn't very computationally heavy). See the example below.
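
For example (replace UWNetID and the paths with your own), a transfer from your workstation targets the klone login nodes directly:

# Push data from your local machine to your lab's gscratch directory:
rsync -avP ./mydata/ UWNetID@klone.hyak.uw.edu:/gscratch/mylab/mydata/

# Pull results back down:
scp UWNetID@klone.hyak.uw.edu:/gscratch/mylab/results/output.tar.gz .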

Storage#

  â€ą The path to lab storage is still /gscratch/mylab on both klone and mox. You'll need to copy over any data you want to continue using from mox to klone.
  • Home directories are still 10GB per user, same on both clusters.
  â€ą Scrubbed exists on klone just as it did on mox at /gscratch/scrubbed; this is a free-for-all space on both clusters where files are automatically deleted after 21 days.
  • Some new benefits of the klone storage compared to mox:
    â€ą There are snapshots for gscratch! Look inside the /gscratch/mylab/.snapshots folder for a copy of your lab folder taken once an hour, every hour, for 24 hours. This is not a backup nor a replacement for version control (e.g., git), but it is useful for retrieving recent versions of files or something accidentally deleted. This is currently disabled.
    • More storage! Previously you received 500GB or 0.5TB of gscratch quota per node (or pair of GPUs) contributed to mox. Now on klone we've doubled your associated storage quota! For example, 2 nodes on mox would mean 1TB of gscratch but 2 nodes on klone now means 2TB of gscratch. If you had an 8 x GPU node on mox you would have received 2TB of gscratch but an 8 x GPU node on klone now means 4TB of gscratch.
    â€ą It's faster! We've had reports of performance averaging a 30% speedup, all else being equal; there's nothing you need to do aside from using klone instead of mox.
    â€ą It's faster than fast! While klone storage is faster than mox storage overall, gscratch on klone is further turbocharged with an NVMe flash tier. NVMe flash is among the fastest storage media you can get, and it is a further differentiating benefit of using gscratch vs. scrubbed on klone.

Compute#

  1. When submitting a Slurm job, whether interactive (i.e., salloc) or batch (i.e., sbatch), you'll want to first decide which account to use. This is the group you're part of. You can run the command groups to see your affiliated accounts and run hyakalloc to see the resources (e.g., compute cores, memory, GPUs) used and available for each affiliated account.
  2. Then decide whether you want this job to count against your own resource allocation by submitting to the compute partition (i.e., -p compute), or whether you want it to use idle resources from other groups across the cluster via the checkpoint partition (i.e., -p ckpt). See the sketch after this list.
  â€ą Non-standard partitions. Run sinfo to see the list of all possible partitions. This only matters if your group contributed non-standard nodes (e.g., high memory, GPUs) and you need to identify the appropriate partition names for immediate use; otherwise, you'd only be able to use them in a checkpoint capacity. For GPU users this is currently either the gpu-2080ti or the gpu-rtx6k partition, for 11GB and 24GB GPU memory cards, respectively.
  â€ą There is no build node on klone. Get an interactive session (e.g., salloc) under an existing account and partition combination you have access to.
  â€ą All nodes on klone have internet access now. Do all data transfers to and from klone on the klone login nodes; the login nodes have dual 40 Gbps uplinks to the internet. While the compute nodes on klone have internet routing, they are bottlenecked at 1 Gbps, so they are not suitable for big data transfers.
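
Putting that together, here is a minimal sketch (with a hypothetical account name) of an interactive request against your own allocation versus checkpoint:

# See your affiliated accounts and their available resources:
groups
hyakalloc

# Interactive session on your lab's own allocation:
salloc -A mylab -p compute -N 1 -c 4 --mem=20G --time=2:00:00

# The same request against idle, preemptable resources cluster-wide:
salloc -A mylab -p ckpt -N 1 -c 4 --mem=20G --time=2:00:00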

Software#

  â€ą Singularity containers work the same on both clusters; we encourage this approach when possible. Refer to our container documentation [link].
  â€ą Modules have been updated to the latest versions of the core software that the Hyak team maintains (e.g., gcc, Intel, Matlab). Refresh yourself on modules [link].
  â€ą If neither Singularity nor the existing modules work for you, you may have to re-compile your codes on klone. "contrib" modules work differently on klone vs. mox; please check out the details [link].

klone Soft Launch

Nam Pho

Director for Research Computing

February 25, 2021#

The UW research computing team celebrates the soft launch of project klone, the 3rd generation Hyak supercomputer. Welcome to those researchers invited to participate in the early access program đŸ„ł 🎉

caution

There will be weekly maintenance days on Tuesday during the soft launch period after which we will move back to our regular cadence of monthly maintenance windows.

The user documentation [link] has been updated to reflect the changes and new features of klone but this will be an ongoing process.

Compute#

  • Soft launch with 1,920 compute cores over 48 nodes:
    • 28 x mem1 nodes (192GB of memory each) in the compute partition,
    • 4 x mem2 nodes (384GB of memory each) in the compute-bigmem partition,
    • 16 x mem3 nodes (768GB of memory each) in the compute-hugemem partition.
  • build nodes no longer exist on klone as they did on mox. All instances have the potential to be interactive and all have internet routing by default (even non-interactive jobs).

Storage#

  â€ą gscratch on klone is 1.4PB total capacity with a new 500TB NVMe flash tier. Data tiering happens automagically: if you use a file frequently, it will be moved to the faster storage.
  • Storage quota is still charged back at the same rate ($10 / TB / month). Researchers receive 1TB per node purchased and contributed to klone.

Data#

  â€ą gscratch is not backed up; that is the responsibility of the researcher (e.g., via LOLO, the cloud, or an external hard drive). Feel free to email us if you have any questions.
  • While all nodes have internet access now, transfer data using the login nodes. Login nodes have full 2 x 40 Gbps bandwidth. If you transfer using a compute node interactive session you are limited to 1 x 1 Gbps connection.

Software#

  â€ą modules work the same as they did on mox, though klone uses an improved implementation called LMOD compared to environment modules on mox.
  • We provide the basic compilers (e.g., GNU, Intel) as modules.
  • The Hyak team is encouraging a container first world (i.e., use Singularity).

March 3, 2021#

The updated total is 3,840 cores and 96 nodes on klone.

Compute#

  • Compute has doubled by adding another rack to klone, an additional 1,920 compute cores over 48 nodes:
    • 44 x mem1 nodes (192GB of memory each) in the compute partition,
    • 2 x mem2 nodes (384GB of memory each) in the compute-bigmem partition,
    • 2 x mem3 nodes (768GB of memory each) in the compute-hugemem partition.

Software#

  • We created a module for cmake.

March 5, 2021#

Storage#

  â€ą Implemented usage_report.txt files in the base folder of /gscratch/yourlab/ that are updated once an hour to reflect both your block (capacity) and inode (file count) quota usage. This is similar to the gscratch experience on the MOX cluster.


March 9, 2021#

Storage#

  â€ą Snapshots are here! We are piloting snapshots taken once an hour and kept for 24 hours for every lab storage folder under /gscratch/. Check out the updated documentation here on how to access past snapshots, or see the example below.
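
For example (hypothetical lab and file names; the snapshot directory names will vary), you can browse the snapshots directly and copy back an older version of a file:

# List the available hourly snapshots for your lab folder:
ls /gscratch/mylab/.snapshots/

# Restore a file from one of them:
cp /gscratch/mylab/.snapshots/<snapshot-name>/important_file.txt ./important_file.txt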

Software#

  • We created more LMOD software modules:
    • Matlab R2020b [docs]
    • OpenMPI-4.1.0

March 12, 2021#

  • LMOD software modules:
    • Intel has bundled their software suite (e.g., compiler, MPI) as oneCLI and we created this module (i.e., module load intel/oneCLI).
    • There is now a "contrib" framework for groups to store their shared codes separately from their /gscratch/labname/ data. You can get 100GB of storage to compile codes at /sw/contrib/labname-src/ and then put your LMOD module file in /sw/contrib/modulefiles/labname/. Your module would appear when anyone runs module avail. This is created upon request so if you'd like to opt-in your group please let us know.

April 13, 2021#

Things have been going steadily the past week and changes are coming less frequently. We are now increasing the time between maintenance periods on klone from weekly on Tuesdays to monthly, aligning it with the mox maintenance window on the 2nd Tuesday of every month.

That wraps up our klone soft launch blog updates; other updates will appear on our Hyak users mailing list. Don't forget to subscribe; instructions are at the bottom of this page.

Pytorch and CUDA11

Nam Pho

Director for Research Computing
info

During the January 12, 2021 mox maintenance period, long-overdue package updates will be applied. The most user-impactful upgrade is the GPU driver, from 418.40.04 to 460.27.04, which will allow for CUDA11 support (up from CUDA10).

The single biggest research use for GPUs on Hyak is machine learning and artificial intelligence, and the community has been clamoring for CUDA11 support for some time. Unfortunately, it's not easy to separate the GPU driver from the node images, so the upgrade had to wait for the next maintenance window and for some testing of non-ML GPU workflows on Hyak, like our gromacs users in the molecular dynamics community.

tl;dr: your existing Pytorch codes should still work, and if you want to use new Pytorch features that require CUDA11, you can upgrade Pytorch and it will work too.

Installing Pytorch with CUDA11#

Since this is now the latest and greatest on Hyak, I've taken the opportunity to update the Python documentation on how to install Pytorch with CUDA11 support within a miniconda3 environment; check out the step-by-step here.
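
For reference, a hedged sketch of that install (the environment path is a placeholder, and the wheel versions were current as of this writing; adjust as needed):

conda create -p /gscratch/mylab/envs/pytorch-cuda11 python=3.8 -y
conda activate /gscratch/mylab/envs/pytorch-cuda11
pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html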

Reverse compatibility with CUDA10#

Before the January 12, 2021 cluster maintenance, every GPU on Hyak had a driver with CUDA10 and all of your codes were previously compiled against it. To test that the GPU driver update to CUDA11 wouldn't impact the most popular machine learning libraries, we install a Pytorch build compiled against our pre-maintenance CUDA10 and test it on a GPU running the newer CUDA11 driver.

conda create -p /gscratch/scrubbed/npho/pytorch-cuda10 python=3.8 -y

Activate your new pytorch-cuda10 environment:

conda activate pytorch-cuda10

The Pytorch website [www] has a nice getting started matrix that generates the requisite install commands against CUDA10.

pytorch-cuda10

The command shown above, ready to copy and paste:

pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html

Now we can load the Python interpreter and confirm Pytorch is installed and the CUDA10 compiled library recognizes this GPU with CUDA11 [www].

(pytorch-cuda10) $ python3
Python 3.8.5 (default, Sep 4 2020, 07:30:14)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'1.7.1+cu101'
>>> torch.cuda.is_available()
True
>>>

Success!

Libraries compiled against CUDA10 before the January 12, 2021 maintenance should still work on the GPUs now running CUDA11. However, if you want to use the full features of libraries that take advantage of newer capabilities in CUDA11, then you should definitely upgrade your libraries.

gromacs on GPUs

Nam Pho

Director for Research Computing
info

During the January 12, 2021 mox maintenance period, long-overdue package updates will be applied. The most user-impactful upgrade is the GPU driver, from 418.40.04 to 460.27.04, which will allow for CUDA 11 support (up from CUDA 10).

The second most widely used GPU-enabled workflow on HYAK (after machine learning) is molecular dynamics (MD), so we wanted to test one of the most popular MD codes, gromacs [source], and ensure this driver upgrade wouldn't negatively impact our researchers. I couldn't find gromacs compiled with GPU support in our module collection, so I used this as an opportunity to create one for you all; read on!

warning

This is an exercise to demonstrate support for molecular dynamics on GPUs as a proof-of-concept. Scientific verification of the software compile options (e.g., single precision) and of its results is the responsibility of the researcher.

Using gromacs#

I'll start with the end result for those of you who just want to use it but following that I'll dive into the nuts and bolts of how we created the module so you can perform additional optimizations.

This is a GPU-enabled version of gromacs so we need a GPU first (can verify with nvidia-smi).

salloc -A uwit -p ckpt --time=4:00:00 -n 4 --mem=20G --gpus=1

gromacs-2020.4 module#

Once we have a GPU we use modules to load gromacs-2020.4 and all its required dependencies (e.g., CUDA11).

module load gromacs/2020.4-cuda11.1

All packages are sub-commands of the gmx binary so you can verify the module.

$ gmx -version
:-) GROMACS - gmx, 2020.4 (-:
GROMACS version: 2020.4
Verified release checksum is 79c2857291b034542c26e90512b92fd4b184a1c9d6fa59c55f2e24ccf14e7281
Precision: single
Memory model: 64 bit
MPI library: thread_mpi
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support: CUDA
SIMD instructions: AVX_512
FFT library: fftw-3.3.3-sse2
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: hwloc-1.11.8
Tracing support: disabled
C compiler: /sw/gcc/10.1.0/bin/gcc GNU 10.1.0
C compiler flags: -mavx512f -mfma -fexcess-precision=fast -funroll-all-loops -O3 -DNDEBUG
C++ compiler: /sw/gcc/10.1.0/bin/g++ GNU 10.1.0
C++ compiler flags: -mavx512f -mfma -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA compiler: /sw/cuda/11.1.1-1/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2020 NVIDIA Corporation;Built on Mon_Oct_12_20:09:46_PDT_2020;Cuda compilation tools, release 11.1, V11.1.105;Build cuda_11.1.TC455_06.29190527_0
CUDA compiler flags:-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-Wno-deprecated-gpu-targets;-gencode;arch=compute_35,code=compute_35;-gencode;arch=compute_50,code=compute_50;-gencode;arch=compute_52,code=compute_52;-gencode;arch=compute_60,code=compute_60;-gencode;arch=compute_61,code=compute_61;-gencode;arch=compute_70,code=compute_70;-gencode;arch=compute_75,code=compute_75;-gencode;arch=compute_80,code=compute_80;-use_fast_math;;-mavx512f -mfma -fexcess-precision=fast -funroll-all-loops -fopenmp -O3 -DNDEBUG
CUDA driver: 11.20
CUDA runtime: 11.10

Test simulation of Lysozyme#

I used a tutorial from the gromacs website here to show it runs processes on GPU(s). The tutorial runs an MD simulation on a lysozyme but that's the extent of my study there. The commands below are a summary of the tutorial with a note that the genbox subcommand is now replaced by solvate.

gmx pdb2gmx -f 1LYD.pdb -water tip3p
gmx editconf -f conf.gro -bt dodecahedron -d 0.5 -o box.gro
gmx solvate -cp box.gro -cs spc216.gro -p topol.top -o solvated.gro
gmx trjconv -s solvated.gro -f solvated.gro -o solvated.pdb
gmx grompp -f em.mdp -p topol.top -c solvated.gro -o em.tpr -maxwarn 3

The final gromacs command below starts the fun; the documentation suggests it will automatically identify the available GPUs and send work to them. However, there are more explicit GPU arguments we encourage you to explore.

gmx mdrun -v -deffnm em
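
For example (a sketch, not a tuned configuration), you can offload the non-bonded work explicitly and set the rank and thread counts yourself to match your allocation:

# 1 thread-MPI rank with 4 OpenMP threads, non-bonded work on the GPU:
gmx mdrun -v -deffnm em -nb gpu -ntmpi 1 -ntomp 4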

You can ssh into the node you're using in a separate window and run nvidia-smi in parallel to monitor the load on the GPU(s).

+-------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|===================================================================|
| 0 N/A N/A 143353 C gmx 165MiB |
| 1 N/A N/A 143353 C gmx 165MiB |
| 2 N/A N/A 143353 C gmx 167MiB |
| 3 N/A N/A 143353 C gmx 167MiB |
| 4 N/A N/A 143353 C gmx 167MiB |
| 5 N/A N/A 143353 C gmx 167MiB |
| 6 N/A N/A 143353 C gmx 167MiB |
| 7 N/A N/A 143353 C gmx 165MiB |
+-------------------------------------------------------------------+

We can see a process occupying each GPU, so it works! At least, gromacs uses the GPUs; the GPUs themselves weren't stressed heavily, and stressing them requires increasing the number of rank processes and matching them to the available GPUs. You can do this by adding arguments to the gmx mdrun command, but by default it ran 2 ranks per detected GPU, which is not a lot.

(Optional) Compile Notes#

You need CUDA11, the GNU compiler, and the OpenBLAS library for the version I put together, but I was focused on a proof-of-concept rather than squeezing out every last drop of performance. There's a lot of further optimization to be done, and that's left as an exercise for the reader:

  1. Try the Intel compiler and see if it provides further optimization for non-GPU parts of the workflow.
  2. Try other math libraries (e.g., MKL) and see if it speeds things up.
  3. Add in MPI support if you want to use multiple GPUs across multiple nodes.
  4. Add in modules (e.g., PLUMED).
  5. Other compile-flag tweaks I haven't thought of [here].

Download Source#

From the login node I staged a folder in the modules directory.

cd /sw/gromacs/2020.4-cuda11.1

Grab regression tests.

wget http://gerrit.gromacs.org/download/regressiontests-2020.4.tar.gz

Download gromacs-2020.4 [source].

wget ftp://ftp.gromacs.org/pub/gromacs/gromacs-2020.4.tar.gz

Get a GPU and Code#

I used the shared build-gpu node for an interactive session, but if you are affiliated with a group that has its own GPU node you can use that instead.

salloc -A uwit -p ckpt --time=4:00:00 -n 4 --mem=20G --gpus=1

Once you get a session with a GPU (you can run nvidia-smi to confirm you see one), extract the regression tests.

tar xvzf regressiontests-2020.4.tar.gz

Do the same for the gromacs code and enter the directory.

tar xzvf gromacs-2020.4.tar.gz
cd gromacs-2020.4

Pre-requisite Modules#

Modules are loaded individually here for readability, but you could load them all in one command. Get a refresher on modules here.

module load cmake/3.11.2
module load gcc/10.1.0
module load cuda/11.1.1-1
module load contrib/openblas/0.2.20

Compile#

I created a subdirectory within the source to compile.

mkdir cuda11
cd cuda11

Use cmake to create the Makefile. Note: if you copy-and-paste the cmake command below you will have to modify the paths referenced for your environment.

cmake .. -DGMX_BUILD_OWN_FFTW=OFF -DREGRESSIONTEST_DOWNLOAD=OFF -DGMX_GPU=ON -DGMX_MPI=OFF -DCMAKE_INSTALL_PREFIX=/sw/gromacs/2020.4-cuda11.1 -DREGRESSIONTEST_PATH=/sw/gromacs/2020.4-cuda11.1/regressiontests-2020.4 -DCUDA_TOOLKIT_ROOT_DIR=/sw/cuda/11.1.1-1

With the Makefile ready, you can run make -j 4 (replacing 4 with however many cores you have in your session) and then make install. I created the module file separately so you can load it with module load gromacs/2020.4-cuda11.1 and run the single gmx binary.
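
In other words, the remaining steps from inside the cuda11 directory are roughly:

make -j 4        # replace 4 with however many cores your session has
make check       # optional: runs the regression tests staged earlier
make install     # installs into the -DCMAKE_INSTALL_PREFIX path above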

Hello world!

Nam Pho

Director for Research Computing

tl;dr (1) decommissioned a cluster, (2) got a bunch of GPUs for machine learning, (3) launched a cluster, and (4) new and improved documentation.

2020 has definitely been an eventful year, but here on Team Hyak we've been trying to make the best of a bad situation (lemonade out of lemons and such). This year saw the decommissioning of the 1st generation Hyak cluster, ikt, and the soft launch of our 3rd generation Hyak cluster, klone. Our partnership with the Allen School and other departments across campus has enabled an explosion in on-campus GPU capacity for the current 2nd generation Hyak cluster, mox. This is all very exciting; machine learning is only going to get bigger. We realize that whether you do your research on your laptop, on Hyak, or in the cloud, at the end of the day it's all just a computer, and what matters is what you can actually do with it. Therefore, we are placing more emphasis on new and improved documentation (this website) and will be doing more regular research tutorials on Hyak throughout the coming year.

We hope you have weathered the adversity 2020 brought upon everyone. It has been a tough year for sure, but may your 2021 be brighter and have improvements in store. The Hyak Team has lots of efforts in the works to benefit supporting your research and they will hit full stride in the coming year. This is one improvement we can all look forward to in 2021.