This lesson is in the early stages of development (Alpha version)

Introduction to High-Performance Computing

Why use a Cluster?

Overview

Teaching: 15 min
Exercises: 5 min
Questions
  • Why would I be interested in High Performance Computing (HPC)?

  • What can I expect to learn from this course?

Objectives
  • Describe what an HPC system is

  • Identify how an HPC system could benefit you.

Frequently, research problems that use computing can outgrow the capabilities of the desktop or laptop computer where they started: an analysis that runs comfortably on a small test dataset may take days, exhaust the available memory, or need to be repeated hundreds of times when applied to the full problem.

In all these cases, access to more (and larger) computers is needed. Those computers should be usable at the same time, solving many researchers’ problems in parallel.

Jargon Busting Presentation

Open the HPC Jargon Buster in a new tab. To present the content, press C to open a clone in a separate window, then press P to toggle presentation mode.

I’ve Never Used a Server, Have I?

Take a minute and think about which of your daily interactions with a computer may require a remote server or even cluster to provide you with results.

Some Ideas

  • Checking email: your computer (possibly in your pocket) contacts a remote machine, authenticates, and downloads a list of new messages; it also uploads changes to message status, such as whether you read, marked as junk, or deleted the message. Since yours is not the only account, the mail server is probably one of many in a data center.
  • Searching for a phrase online involves comparing your search term against a massive database of all known sites, looking for matches. This “query” operation can be straightforward, but building that database is a monumental task! Servers are involved at every step.
  • Searching for directions on a mapping website involves connecting your (A) starting and (B) end points by traversing a graph in search of the “shortest” path by distance, time, expense, or another metric. Converting a map into the right form is relatively simple, but calculating all the possible routes between A and B is expensive.

Checking email could be serial: your machine connects to one server and exchanges data. Searching by querying the database for your search term (or endpoints) could also be serial, in that one machine receives your query and returns the result. However, assembling and storing the full database is far beyond the capability of any one machine. Therefore, these functions are served in parallel by a large, “hyperscale” collection of servers working together.

Key Points

  • High Performance Computing (HPC) typically involves connecting to very large computing systems elsewhere in the world.

  • These other systems can be used to do work that would either be impossible or much slower on smaller systems.

  • HPC resources are shared by multiple users.

  • The standard method of interacting with such systems is via a command line interface.


Connecting to a remote HPC system

Overview

Teaching: 5 min
Exercises: 0 min
Questions
  • How do I log in to a remote HPC system?

Objectives
  • Access Open OnDemand.

  • Connect to a remote HPC system.

Secure Connections

The first step in using a cluster is to establish a connection from our laptop to the cluster. When we are sitting at a computer (or standing, or holding it in our hands or on our wrists), we have come to expect a visual display with icons, widgets, and perhaps some windows or applications: a graphical user interface, or GUI. Since computer clusters are remote resources that we connect to over slow or intermittent interfaces (WiFi and VPNs especially), it is more practical to use a command-line interface, or CLI, to send commands as plain-text. If a command returns output, it is printed as plain text as well. The commands we run today will not open a window to show graphical results.

If you have ever opened the Windows Command Prompt or macOS Terminal, you have seen a CLI. If you have already taken The Carpentries’ courses on the UNIX Shell or Version Control, you have used the CLI on your local machine extensively. The only leap to be made here is to open a CLI on a remote machine, while taking some precautions so that other folks on the network can’t see (or change) the commands you’re running or the results the remote machine sends back. We will use the Secure SHell protocol (or SSH) to open an encrypted network connection between two machines, allowing you to send & receive text and data without having to worry about prying eyes.

[Figure: Connect to cluster]

SSH clients are usually command-line tools, where you provide the remote machine address as the only required argument. If your username on the remote system differs from what you use locally, you must provide that as well. If your SSH client has a graphical front-end, such as PuTTY or MobaXterm, you will set these arguments before clicking “connect.” From the terminal, you’ll write something like ssh userName@hostname, where the argument is just like an email address: the “@” symbol is used to separate the personal ID from the address of the remote machine.
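For example, to open a terminal connection to the cluster used in this lesson, you would run something like the following from your local machine, replacing netid with your own username (hpcc.msu.edu is the cluster address used later in this lesson):

[user@laptop ~]$ ssh netid@hpcc.msu.edu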

Connecting with OnDemand

Instead of using a shell application installed on your computer, you can shortcut much of this process using a shell in your browser that directly connects to the cluster.

To do so, log in to the cluster’s Open OnDemand interface at this link: https://ondemand.hpcc.msu.edu. If you are prompted, choose Michigan State University as the Identity Provider and log in using your site credentials.

Here, you will find many different interactive applications and ways to access the cluster. It is possible that these may be the only ways you work with the cluster in the future! But for now, we will make sure that you can get started using the command line, since it is traditionally the most common and flexible access method.

To open a command line connected to the cluster from OnDemand, choose any entry from the Development Nodes tab. For the remainder of this lesson, we will use dev-amd20.

Looking Around Your Remote Home

Very often, many users are tempted to think of a high-performance computing installation as one giant, magical machine. Sometimes, people will assume that the computer they’ve logged onto is the entire computing cluster. So what’s really happening? What computer have we logged on to? The name of the current computer we are logged onto can be checked with the hostname command. (You may also notice that the current hostname is also part of our prompt!)

[netid@dev-amd20 ~]$ hostname
dev-amd20

So, we’re definitely on the remote machine. Next, let’s find out where we are by running pwd to print the working directory.

[netid@dev-amd20 ~]$ pwd
/mnt/home/netid

Great, we know where we are! Let’s see what’s in our current directory:

[netid@dev-amd20 ~]$ ls
id_ed25519.pub

The system administrators may have configured your home directory with some helpful files, folders, and links (shortcuts) to space reserved for you on other filesystems. If they did not, your home directory may appear empty. To double-check, include hidden files in your directory listing:

[netid@dev-amd20 ~]$ ls -a
  .            .bashrc           id_ed25519.pub
  ..           .ssh

In the first column, . is a reference to the current directory and .. a reference to its parent (/mnt/home). You may or may not see the other files, or files like them: .bashrc is a shell configuration file, which you can edit with your preferences; and .ssh is a directory storing SSH keys and a record of authorized connections.

Key Points

  • An HPC system is a set of networked machines.

  • HPC systems typically provide login nodes and a set of worker nodes.

  • The resources found on independent (worker) nodes can vary in volume and type (amount of RAM, processor architecture, availability of network mounted filesystems, etc.).

  • Files saved on one node are available on all nodes.


Introduction to the Command Line

Overview

Teaching: 95 min
Exercises: 0 min
Questions
  • What is a command shell and why would I use one?

  • How can I move around on my computer?

  • How can I see what files and directories I have?

  • How can I specify the location of a file or directory on my computer?

  • How can I create, copy, and delete files and directories?

  • How can I edit files?

Objectives
  • Explain how the shell relates to the keyboard, the screen, the operating system, and users’ programs.

  • Explain when and why command-line interfaces should be used instead of graphical interfaces.

  • Explain the similarities and differences between a file and a directory.

  • Translate an absolute path into a relative path and vice versa.

  • Construct absolute and relative paths that identify specific files and directories.

  • Use options and arguments to change the behaviour of a shell command.

  • Demonstrate the use of tab completion and explain its advantages.

  • Create a directory hierarchy that matches a given diagram.

  • Create files in that hierarchy using an editor or by copying and renaming existing files.

  • Delete, copy and move specified files and/or directories.

Software Carpentry lessons for shell basics

The “command line” or “shell” is an important tool for HPC work. To learn the basics, we will go through these episodes from the Software Carpentry Shell Novice lesson.

Key Points

  • A shell is a program whose primary purpose is to read commands and run other programs.

  • This lesson uses Bash, the default shell in many implementations of Unix.

  • Programs can be run in Bash by entering commands at the command-line prompt.

  • The shell’s main advantages are its high action-to-keystroke ratio, its support for automating repetitive tasks, and its capacity to access networked machines.

  • A significant challenge when using the shell can be knowing what commands need to be run and how to run them.

  • The file system is responsible for managing information on the disk.

  • Information is stored in files, which are stored in directories (folders).

  • Directories can also store other directories, which then form a directory tree.

  • pwd prints the user’s current working directory.

  • ls [path] prints a listing of a specific file or directory; ls on its own lists the current working directory.

  • cd [path] changes the current working directory.

  • Most commands take options that begin with a single -.

  • Directory names in a path are separated with / on Unix, but \ on Windows.

  • / on its own is the root directory of the whole file system.

  • An absolute path specifies a location from the root of the file system.

  • A relative path specifies a location starting from the current location.

  • . on its own means ‘the current directory’; .. means ‘the directory above the current one’.

  • cp [old] [new] copies a file.

  • mkdir [path] creates a new directory.

  • mv [old] [new] moves (renames) a file or directory.

  • rm [path] removes (deletes) a file.

  • * matches zero or more characters in a filename, so *.txt matches all files ending in .txt.

  • ? matches any single character in a filename, so ?.txt matches a.txt but not any.txt.

  • Use of the Control key may be described in many ways, including Ctrl-X, Control-X, and ^X.

  • The shell does not have a trash bin: once something is deleted, it’s really gone.

  • Most files’ names are something.extension. The extension isn’t required, and doesn’t guarantee anything, but is normally used to indicate the type of data in the file.

  • Depending on the type of work you do, you may need a more powerful text editor than Nano.


Exploring Remote Resources

Overview

Teaching: 25 min
Exercises: 10 min
Questions
  • How does my local computer compare to the remote systems?

  • How does the login node compare to the compute nodes?

  • Are all compute nodes alike?

Objectives
  • Survey system resources using nproc, free, and the queuing system

  • Compare & contrast resources on the local machine, login node, and worker nodes

  • Learn about the various filesystems on the cluster using df

  • Find out who else is logged in

  • Assess the number of idle and occupied nodes

Look Around the Remote System

If you have not already connected to ICER HPCC, please do so now. Take a look at your home directory on the remote system:

[netid@dev-amd20 ~]$ ls

What’s different between your machine and the remote?

Open a second terminal window on your local computer and run the ls command (without logging in to ICER HPCC). What differences do you see?

Solution

You would likely see something more like this:

[user@laptop ~]$ ls
Applications Documents    Library      Music        Public
Desktop      Downloads    Movies       Pictures

The remote computer’s home directory shares almost nothing in common with the local computer: they are completely separate systems!

Most high-performance computing systems run the Linux operating system, which is built around the UNIX Filesystem Hierarchy Standard. Instead of having a separate root for each hard drive or storage medium, all files and devices are anchored to the “root” directory, which is /:

[netid@dev-amd20 ~]$ ls /
bin   etc    mnt   root  scratch  tmp  working
boot  lib    opt   run   srv      usr
dev   lib64  proc  sbin  sys      var

The “/mnt/home” subdirectory is the one where we generally want to keep all of our files. Other folders on a UNIX OS contain system files and change as you install new software or upgrade your OS.

Using HPC filesystems

On HPC systems, you have a number of places where you can store your files. These differ in both the amount of space allocated and whether or not they are backed up.

  • Home – often a network filesystem; data stored here is available throughout the HPC system and is often backed up periodically. Files stored here are typically slower to access because the data is actually stored on another computer and is transmitted to you over the network!
    • Access using /mnt/home/netid
  • Scratch – typically faster than the networked Home directory, but not usually backed up, and should not be used for long term storage.
    • Access using /mnt/gs21/scratch/netid
  • Research – similar to Home, but useful for collaboration. Multiple users of a single group have access, and the space is managed by a PI.
    • Access using /mnt/research/group
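As a quick sketch of moving between these spaces (substitute your own netid; the prompt shows the basename of the current directory):

[netid@dev-amd20 ~]$ cd /mnt/gs21/scratch/netid
[netid@dev-amd20 netid]$ pwd
/mnt/gs21/scratch/netid
[netid@dev-amd20 netid]$ cd ~    # return to your home directory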

Quotas

Each file space is subject to limitations to ensure that all resources are shared appropriately:

  • Home: 50GB and 1 million files.
  • Research: 50GB and 1 million files.
  • Scratch: 50TB and 1 million files. FILES UNMODIFIED AFTER 45 DAYS WILL BE DELETED.

To check your usage, use the quota command:

[netid@dev-amd20 ~]$ quota
home directory: Space    Space   Space     Space     Files     Files     Files     Files        
                 Quota    Used    Available % Used    Quota     Used      Available % Used
-----------------------------------------------------------------------------------------------
/mnt/home/netid  50G      32G     18G       64%       1048576   432525    616051    59%

Nodes

Individual computers that compose a cluster are typically called nodes (although you will also hear people call them servers, computers and machines). On a cluster, there are different types of nodes for different types of tasks. The node where you are right now is called the login node, head node, landing pad, or submit node. A login node serves as an access point to the cluster.

As a gateway, the login node should not be used for time-consuming or resource-intensive tasks. You should be alert to this, and check with your site’s operators or documentation for details of what is and isn’t allowed. It is well suited for uploading and downloading files, setting up software, and running tests. Generally speaking, in these lessons, we will avoid running jobs on the login node.

Who else is logged in to the login node?

[netid@dev-amd20 ~]$ who

This may show only your user ID, but there are likely several other people (including fellow learners) connected right now.

Dedicated Transfer Nodes

If you want to transfer larger amounts of data to or from the cluster, some systems offer dedicated nodes for data transfers only. The motivation for this lies in the fact that larger data transfers should not obstruct operation of the login node for anybody else. Check with your cluster’s documentation or its support team if such a transfer node is available. As a rule of thumb, consider all transfers of a volume larger than 500 MB to 1 GB as large, though these numbers depend on, for example, the network connection of your machine and of your cluster, among other factors.

The real work on a cluster gets done by the compute (or worker) nodes. Compute nodes come in many shapes and sizes, but generally they are dedicated to long or hard tasks that require a lot of computational resources.

All interaction with the compute nodes is handled by a specialized piece of software called a scheduler (the scheduler used in this lesson is called Slurm). We’ll learn more about how to use the scheduler to submit jobs next, but for now, it can also tell us more information about the compute nodes.

For example, we can view all of the compute nodes by running the command sinfo.

[netid@dev-amd20 ~]$ sinfo
PARTITION           AVAIL  TIMELIMIT  NODES  STATE NODELIST
scavenger              up 7-00:00:00      3  inval css-122,lac-115,skl-138
scavenger              up 7-00:00:00     11 drain* amr-066,lac-[157,164,239,312,314,341,353],nch-000,skl-[154,160]
scavenger              up 7-00:00:00     13  down* csm-022,csn-[001,025],csp-[025-026],css-[033,074,097,102,117],lac-[133,148,390]
scavenger              up 7-00:00:00     27   comp acm-[037,039,041-042,061,064-065],amr-215,lac-[054,056,060-061,063,066,088-089,092,102,124-127,129,217,250,334-335]
scavenger              up 7-00:00:00      2   drng lac-[044,355]
scavenger              up 7-00:00:00      4  drain lac-[169,199,212],skl-028
scavenger              up 7-00:00:00      1   resv nch-001
scavenger              up 7-00:00:00    689    mix acm-[000-008,015-036,038,040,043-053,056-060,062-063,066-071],amr-[000-003,005,008,010-044,047-050,052-055,057-061,070,074,078,080-081,087,090-098,100-107,109-115,117-120,125-127,130,134-148,150-156,158-169,171-173,175,177-179,181-182,184-185,187-193,196-214,216-237,240-244,246-252],csn-[017,032],css-[049,055,059,063,066,089,106,118,123],lac-[000-021,023-025,028-031,035-037,043,046,048,050-052,055,057-059,062,064-065,067-078,082-085,087,090-091,093-101,103-111,114,116,118-122,130-132,134-147,149-156,158-163,165,167-168,170,173,180-189,192-194,200-209,213-216,218-225,229-231,233-238,240-248,251-252,254-261,282-283,285,288-305,307-311,313,315,317-333,342,344,347,350-352,361-362,365-369,388,392,400,408-415,419,421-425,427-428,442-443],nal-[000-007,009-010],nif-[000-004],nvf-[006-007,009,011,013-015,017,020],nvl-[001,003,007],qml-002,skl-[005,010-012,014-016,021,023,026-027,029-030,034-052,054-071,073-076,078-079,082-083,085-095,097-114,116-137,139,142,148-153,156-159,161-163,165-166],vim-[000-002]
scavenger              up 7-00:00:00    327  alloc acm-[009-014,054-055],amr-[004,006-007,009,045-046,051,056,062-065,067-069,071-073,075-077,079,082-086,088-089,099,108,116,121-124,128-129,131-133,149,157,170,174,176,180,183,186,194-195,253],csm-[001-005,017-018,020-021],csn-[002-011,013-016,018-024,026-031,033-037,039],csp-[006,016-020],css-[002-003,007-010,016-020,023,032,034-036,038-039,042-044,047,050,052,056-057,060-062,064,072,075,083,088,091,093-095,099,101,103,111-114,116,119,121,124,126-127],lac-[026-027,033,038,040-042,053,079-081,086,112-113,117,123,171-172,174-179,190-191,195-198,210-211,228,232,253,277-281,284,286-287,306,316,336-340,343,345-346,348-349,354,356-360,363-364,372,374-376,378-387,391,393-399,401-407,416-417,420,426,429-441,444-445],nal-008,nvf-[000-005,008,010,012,016,018-019],nvl-[000,002,004-006],qml-[000,003],skl-[000-003,006-009,013,017-020,022,024-025,031-033,053,072,077,080-081,084,096,115,140-141,143-147,155,164,167]
scavenger              up 7-00:00:00      2   idle nif-005,skl-004
ondemand               up 7-00:00:00      4  down* csn-[001,025],csp-025,css-097
ondemand               up 7-00:00:00      9    mix csn-[017,032],css-[049,055,059,063,066,089,118]
ondemand               up 7-00:00:00     74  alloc csm-001,csn-[002-011,013-016,018-024,026-031,033-036],csp-[006,016-018,020],css-[008-010,016-019,023,032,034-036,038-039,042-044,047,050,052,056-057,060-062,064,075,083,088,093-095,099,121,124,126],qml-000
general-short          up    4:00:00      2  inval lac-115,skl-138
general-short          up    4:00:00     11 drain* amr-066,lac-[157,164,239,312,314,341,353],nch-000,skl-[154,160]
general-short          up    4:00:00      3  down* lac-[133,148,390]
general-short          up    4:00:00     27   comp acm-[037,039,041-042,061,064-065],amr-215,lac-[054,056,060-061,063,066,088-089,092,102,124-127,129,217,250,334-335]
general-short          up    4:00:00      2   drng lac-[044,355]
general-short          up    4:00:00      4  drain lac-[169,199,212],skl-028
general-short          up    4:00:00      1   resv nch-001
general-short          up    4:00:00    671    mix acm-[000-008,015-036,038,040,043-053,056-060,062-063,066-069,071],amr-[000-003,005,008,010-044,047-050,052-055,057-061,070,074,078,080-081,087,090-098,100-107,109-115,117-120,125-127,130,134-148,150-156,158-169,171-173,175,177-179,181-182,184-185,187-193,196-214,216-237,240-244,246-252],lac-[000-021,023-025,028-031,035-037,043,046,048,050-052,055,057-059,062,064-065,067-078,082-085,087,090-091,093-101,103-111,114,116,118-122,130-132,134-147,149-156,158-163,165,167-168,170,173,180-189,192-194,200-209,213-216,218-225,229-231,233-238,240-248,251-252,254-261,282-283,285,288-305,307-311,313,315,317-333,342,344,347,350-352,361-362,365-369,388,392,400,408-415,419,421-425,427-428,442-443],nal-[000-003,009-010],nif-[001-004],nvf-[006-007,009,011,013-015,017,020],nvl-[001,003,007],skl-[005,010-012,014-016,021,023,026-027,029-030,034-052,054-071,073-076,078-079,082-083,085-095,097-114,116-137,139,142,148-153,156-159,161-163,165-166],vim-[000-002]
general-short          up    4:00:00    226  alloc acm-[009-014,054-055],amr-[004,006-007,009,045-046,051,056,062-065,067-069,071-073,075-077,079,082-086,088-089,099,108,116,121-124,128-129,131-133,149,157,170,174,176,180,183,186,194-195,253],lac-[026-027,033,038,040-042,053,079-081,086,112-113,117,123,171-172,174-179,190-191,195-198,210-211,228,232,253,277-281,284,286-287,306,316,336-340,343,345-346,348-349,354,356-360,363-364,372,374-376,378-387,391,393-399,401-407,416-417,420,426,429-441,444-445],nal-008,nvf-[000-005,008,010,012,016,018-019],nvl-[000,002,004-006],skl-[000-003,006-009,013,017-020,022,024-025,031-033,053,072,077,080-081,084,096,115,140-141,143-147,155,164,167]
general-short          up    4:00:00      2   idle nif-005,skl-004
general-long           up 7-00:00:00      1 drain* lac-353
general-long           up 7-00:00:00      1  down* lac-390
general-long           up 7-00:00:00      6   comp acm-[037,039,041-042],amr-215,lac-217
general-long           up 7-00:00:00      2   drng lac-[044,355]
general-long           up 7-00:00:00      1  drain skl-028
general-long           up 7-00:00:00    199    mix acm-[017-036,038,040,043-047],amr-[184-185,187-193,196-214,216-237,246-252],lac-[043,078,209,225,230-231,233-235,246-248,252,282-283,300-301,388,392,408-415,419,422-425,427-428,442-443],skl-[026-027,029-030,034-052,054-071,073-076,078-079,082-083,085-095,097-100,102-112,162-163,165-166]
general-long           up 7-00:00:00     90  alloc amr-[186,194-195,253],lac-[038,040-042,123,228,232,253,277-281,284,306,336-339,354,356-360,363-364,372,374-376,378-387,391,393-399,401-407,416-417,420,426,429-441,444-445],skl-[031-033,072,077,080-081,084,096,164,167]
general-long-bigmem    up 7-00:00:00      3   comp acm-[061,064-065]
general-long-bigmem    up 7-00:00:00      8    mix acm-[058,060,062-063,066-067],amr-103,vim-001
general-long-bigmem    up 7-00:00:00      5  alloc skl-[143-147]
general-long-gpu       up 7-00:00:00      1  drain lac-199
general-long-gpu       up 7-00:00:00     16    mix lac-[030,087,137,143,288-293,344],nal-[000-001,010],nvf-020,nvl-007
general-long-gpu       up 7-00:00:00      9  alloc lac-[195-198,348],nvf-[018-019],nvl-[005-006]

A lot of the nodes are busy running work for other users: we are not alone here!

There are also specialized machines used for managing disk storage, user authentication, and other infrastructure-related tasks. Although we do not typically log on to or interact with these machines directly, they enable a number of key features, like ensuring our user account and files are available throughout the HPC system.

What’s in a Node?

All of the nodes in an HPC system have the same components as your own laptop or desktop: CPUs (sometimes also called processors or cores), memory (or RAM), and disk space. CPUs are a computer’s tool for actually running programs and calculations. Information about a current task is stored in the computer’s memory. Disk refers to all storage that can be accessed like a file system. This is generally storage that can hold data permanently, i.e. data is still there even if the computer has been restarted. While this storage can be local (a hard drive installed inside of it), it is more common for nodes to connect to a shared, remote fileserver or cluster of servers.

[Figure: Node anatomy]

Explore Your Computer

Try to find out the number of CPUs and amount of memory available on your personal computer.

Note that, if you’re logged in to the remote computer cluster, you need to log out first. To do so, type Ctrl+d or exit:

[netid@dev-amd20 ~]$ exit
[user@laptop ~]$

Solution

There are several ways to do this. Most operating systems have a graphical system monitor, like the Windows Task Manager. More detailed information can be found on the command line:

  • Run system utilities
    [user@laptop ~]$ nproc --all
    [user@laptop ~]$ free -m
    
  • Read from /proc
    [user@laptop ~]$ cat /proc/cpuinfo
    [user@laptop ~]$ cat /proc/meminfo
    
  • Run system monitor
    [user@laptop ~]$ htop
    

Explore the Login Node

Now compare the resources of your computer with those of the login node.

Solution

[user@laptop ~]$ ssh netid@hpcc.msu.edu
[netid@dev-amd20 ~]$ nproc --all
[netid@dev-amd20 ~]$ free -m

You can get more information about the processors using lscpu, and a lot of detail about the memory by reading the file /proc/meminfo:

[netid@dev-amd20 ~]$ less /proc/meminfo

You can also explore the available filesystems using df to show disk free space. The -h flag renders the sizes in a human-friendly format, i.e., GB instead of B. The type flag -T shows what kind of filesystem each resource is.

[netid@dev-amd20 ~]$ df -Th

Different results from df

  • The local filesystems (ext, tmp, xfs, zfs) will depend on whether you’re on the same login node (or compute node, later on).
  • Networked filesystems (beegfs, cifs, gpfs, nfs, pvfs) will be similar – but may include netid, depending on how it is mounted.

Shared Filesystems

This is an important point to remember: files saved on one node (computer) are often available everywhere on the cluster!

Explore a Worker Node

Finally, let’s look at the resources available on the worker nodes where your jobs will actually run. Try running this command to see the name, CPUs and memory available on the worker nodes:

[netid@dev-amd20 ~]$ sinfo -n amr-252 -o "%n %c %m" | column -t

Compare Your Computer, the Login Node and the Compute Node

Compare your laptop’s number of processors and memory with the numbers you see on the cluster login node and compute node. What implications do you think the differences might have on running your research work on the different systems and nodes?

Solution

Compute nodes are usually built with processors that have higher core counts than the login node or personal computers in order to support highly parallel tasks. Compute nodes usually also have substantially more memory (RAM) installed than a personal computer. More cores tend to help jobs whose work can easily be performed in parallel, and more (and faster) memory is key for large or complex numerical tasks.

Differences Between Nodes

Many HPC clusters have a variety of nodes optimized for particular workloads. Some nodes may have larger amounts of memory, or specialized resources such as Graphics Processing Units (GPUs or “video cards”).
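As a sketch of how you might inspect one of these specialized partitions yourself (the partition name comes from the sinfo output above; the %G field lists generic resources such as GPUs):

[netid@dev-amd20 ~]$ sinfo -p general-long-gpu -o "%n %c %m %G" | column -t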

With all of this in mind, we will now cover how to talk to the cluster’s scheduler, and use it to start running our scripts and programs!

Key Points

  • An HPC system is a set of networked machines.

  • HPC systems typically provide login nodes and a set of compute nodes.

  • The resources found on independent (worker) nodes can vary in volume and type (amount of RAM, processor architecture, availability of network mounted filesystems, etc.).

  • Files saved on shared storage are available on all nodes.

  • The login node is a shared machine: be considerate of other users.


Scheduler Fundamentals

Overview

Teaching: 45 min
Exercises: 30 min
Questions
  • What is a scheduler and why does a cluster need one?

  • How do I launch a program to run on a compute node in the cluster?

  • How do I capture the output of a program that is run on a node in the cluster?

Objectives
  • Submit a simple script to the cluster.

  • Monitor the execution of jobs using command line tools.

  • Inspect the output and error files of your jobs.

  • Find the right place to put large datasets on the cluster.

Job Scheduler

An HPC system might have thousands of nodes and thousands of users. How do we decide who gets what and when? How do we ensure that a task is run with the resources it needs? This job is handled by a special piece of software called the scheduler. On an HPC system, the scheduler manages which jobs run where and when.

The following illustration compares the tasks of a job scheduler to a waiter in a restaurant. If you have ever had to wait in a queue to get into a popular restaurant, then you may understand why your jobs sometimes do not start instantly, as they would on your laptop.

[Figure: Compare a job scheduler to a waiter in a restaurant]

The scheduler used in this lesson is Slurm. Although Slurm is not used everywhere, running jobs is quite similar regardless of what software is being used. The exact syntax might change, but the concepts remain the same.

Running a Batch Job

The most basic use of the scheduler is to run a command non-interactively. Any command (or series of commands) that you want to run on the cluster is called a job, and the process of using a scheduler to run the job is called batch job submission.

In this case, the job we want to run is a shell script – essentially a text file containing a list of UNIX commands to be executed in a sequential manner. Our shell script will have three parts: a first line indicating which program runs the script (the “shebang” line), an echo command that prints a friendly message, and the hostname command that reports which machine the script ran on.

[netid@dev-amd20 ~]$ nano example-job.sh
#!/usr/bin/env bash

echo -n "This script is running on "
hostname

Creating Our Test Job

Run the script. Does it execute on the cluster or just our login node?

Solution

[netid@dev-amd20 ~]$ bash example-job.sh
This script is running on dev-amd20

This script ran on the login node, but we want to take advantage of the compute nodes: we need the scheduler to queue up example-job.sh to run on a compute node.

To submit this task to the scheduler, we use the sbatch command. This creates a job which will run the script when dispatched to a compute node which the queuing system has identified as being available to perform the work.

[netid@dev-amd20 ~]$ sbatch example-job.sh
Submitted batch job 28859234

And that’s all we need to do to submit a job. Our work is done – now the scheduler takes over and tries to run the job for us. While the job is waiting to run, it goes into a list of jobs called the queue. To check on our job’s status, we check the queue using the command squeue -u netid.

[netid@dev-amd20 ~]$ squeue -u netid
   JOBID PARTITION      NAME   USER ST       TIME  NODES NODELIST(REASON)
28859234 general-s  example-  netid R       0:05      1 lac-114

We can see all the details of our job, most importantly that it is in the R or RUNNING state. Sometimes our jobs might need to wait in the queue (PD, PENDING) or fail (F, FAILED).

Where’s the Output?

On the login node, this script printed output to the terminal – but now, even when squeue shows the job has finished, nothing is printed to the terminal.

Cluster job output is typically redirected to a file in the directory you launched it from. Use ls to find and cat to read the file.
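For example, using the job submitted above (your job ID and node name will differ):

[netid@dev-amd20 ~]$ ls
example-job.sh  id_ed25519.pub  slurm-28859234.out
[netid@dev-amd20 ~]$ cat slurm-28859234.out
This script is running on lac-114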

Customising a Job

The job we just ran used all of the scheduler’s default options. In a real-world scenario, that’s probably not what we want. The default options represent a reasonable minimum. Chances are, we will need more cores, more memory, more time, among other special considerations. To get access to these resources we must customize our job script.

Comments in UNIX shell scripts (denoted by #) are typically ignored, but there are exceptions. For instance, the special #! comment at the beginning of a script specifies what program should be used to run it (you’ll typically see #!/usr/bin/env bash). Schedulers like Slurm also have a special comment used to denote scheduler-specific options. Though these comments differ from scheduler to scheduler, Slurm’s special comment is #SBATCH. Anything following the #SBATCH comment is interpreted as an instruction to the scheduler.

Let’s illustrate this by example. By default, a job’s name is the name of the script, but the -J option can be used to change the name of a job. Add an option to the script:

[netid@dev-amd20 ~]$ cat example-job.sh
#!/usr/bin/env bash
#SBATCH -J hello-world

echo -n "This script is running on "
hostname

Submit the job and monitor its status:

[netid@dev-amd20 ~]$ sbatch example-job.sh
[netid@dev-amd20 ~]$ squeue -u netid
   JOBID PARTITION      NAME   USER ST       TIME  NODES NODELIST(REASON)
28859235 general-s  hello-wo  netid R       0:05      1 lac-114

Fantastic, we’ve successfully changed the name of our job!

Resource Requests

What about more important changes, such as the number of cores and memory for our jobs? One thing that is absolutely critical when working on an HPC system is specifying the resources required to run a job. This allows the scheduler to find the right time and place to schedule our job. If you do not specify requirements (such as the amount of time you need), you will likely be stuck with your site’s default resources, which is probably not what you want.

Several key resource requests include the maximum wall time (--time), the amount of memory (--mem), the number of tasks or CPU cores (--ntasks, --cpus-per-task), and the number of nodes (--nodes).

Note that just requesting these resources does not make your job run faster, nor does it necessarily mean that you will consume all of these resources. It only means that these are made available to you. Your job may end up using less memory, or less time, or fewer nodes than you have requested, and it will still run.

It’s best if your requests accurately reflect your job’s requirements. We’ll talk more about how to make sure that you’re using resources effectively in a later episode of this lesson.
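As a minimal sketch of what such a script might look like (the resource values here are illustrative, not recommendations for any particular workload):

#!/usr/bin/env bash
#SBATCH -J resource-demo        # job name
#SBATCH --time=00:10:00         # maximum wall time (HH:MM:SS)
#SBATCH --ntasks=1              # number of tasks
#SBATCH --cpus-per-task=4       # CPU cores for the task
#SBATCH --mem=2G                # memory for the whole job

echo -n "This script is running on "
hostname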

Submitting Resource Requests

Modify our hostname script so that it runs for a minute, then submit a job for it on the cluster.

Solution

[netid@dev-amd20 ~]$ cat example-job.sh
#!/usr/bin/env bash
#SBATCH -t 00:01:00 # timeout in HH:MM:SS

echo -n "This script is running on "
sleep 20 # time in seconds
hostname
[netid@dev-amd20 ~]$ sbatch example-job.sh

Why are the Slurm runtime and sleep time not identical?

Job environment variables

When Slurm runs a job, it sets a number of environment variables for the job. One of these will let us check what directory our job script was submitted from. The SLURM_SUBMIT_DIR variable is set to the directory from which our job was submitted. Using the SLURM_SUBMIT_DIR variable, modify your job so that it prints out the location from which the job was submitted.

Solution

[netid@dev-amd20 ~]$ nano example-job.sh
[netid@dev-amd20 ~]$ cat example-job.sh
#!/usr/bin/env bash
#SBATCH -t 00:00:30

echo -n "This script is running on "
hostname

echo "This job was launched in the following directory:"
echo ${SLURM_SUBMIT_DIR}

Resource requests are typically binding. If you exceed them, your job will be killed. Let’s use wall time as an example. We will request 1 minute of wall time and attempt to run a job for four minutes.

[netid@dev-amd20 ~]$ cat example-job.sh
#!/usr/bin/env bash
#SBATCH -J long_job
#SBATCH -t 00:01:00 # timeout in HH:MM:SS

echo "This script is running on ... "
sleep 240 # time in seconds
hostname

Submit the job and wait for it to finish. Once it has finished, check the log file.

[netid@dev-amd20 ~]$ sbatch example-job.sh
[netid@dev-amd20 ~]$ squeue -u netid
[netid@dev-amd20 ~]$ cat slurm-28923938.out
This script is running on ...
slurmstepd: error: *** JOB 28923938 ON amr-252 CANCELLED AT 2021-02-19T13:55:57
DUE TO TIME LIMIT ***

Our job was killed for exceeding the amount of resources it requested. Although this appears harsh, this is actually a feature. Strict adherence to resource requests allows the scheduler to find the best possible place for your jobs. Even more importantly, it ensures that another user cannot use more resources than they’ve been given. If another user messes up and accidentally attempts to use all of the cores or memory on a node, Slurm will either restrain their job to the requested resources or kill the job outright. Other jobs on the node will be unaffected. This means that one user cannot mess up the experience of others; the only jobs affected by a mistake in scheduling will be their own.

Tips for requesting resources

Users often wonder how to most efficiently request resources to spend less time queueing. Here are some tips:

  • Keep --time under four hours: This will allow your jobs to have access to the most nodes. This is because of buy-in nodes. Buy-in nodes are purchased by research groups, and when a group member submits a job, they are guaranteed access to this node within four hours (though their jobs could start sooner if there’s room on their buy-in node or somewhere else). By keeping your job to less than four hours, you are able to run on the buy-in nodes without blocking a member of that buy-in group.

  • Keep --mem less than 256GB and the total number of CPUs less than 40: This again ensures access to the widest range of nodes. Nodes that can support larger memory and numbers of CPUs are separated into a different partition that can have a larger backlog.

  • Check your efficiency after running: Use the seff <job_id> command to see how much of the requested resources were actually used. This allows you to tweak your submissions for future scripts or estimate usage in different conditions.

Cancelling a Job

Sometimes we’ll make a mistake and need to cancel a job. This can be done with the scancel command. Let’s submit a job and then cancel it using its job number (remember to change the walltime so that it runs long enough for you to cancel it before it is killed!).

[netid@dev-amd20 ~]$ sbatch example-job.sh
Submitted batch job 28925831
[netid@dev-amd20 ~]$ squeue -u netid
   JOBID PARTITION     NAME  USER ST       TIME  NODES NODELIST(REASON)
28925831 general-s long_job netid  R       0:03      1 amr-252

Now cancel the job with its job number (printed in your terminal). A clean return of your command prompt indicates that the request to cancel the job was successful.

[netid@dev-amd20 ~]$ scancel 28925831
# It might take a minute for the job to disappear from the queue...
[netid@dev-amd20 ~]$ squeue -u netid
JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)

Cancelling multiple jobs

We can also cancel all of our jobs at once using the -u option. This will delete all jobs for a specific user (in this case, yourself). Note that you can only delete your own jobs.

Try submitting multiple jobs and then cancelling them all.

Solution

First, submit a trio of jobs:

[netid@dev-amd20 ~]$ sbatch example-job.sh
[netid@dev-amd20 ~]$ sbatch example-job.sh
[netid@dev-amd20 ~]$ sbatch example-job.sh

Then, cancel them all:

[netid@dev-amd20 ~]$ scancel -u netid

Other Types of Jobs

Up to this point, we’ve focused on running jobs in batch mode. Slurm also provides the ability to start an interactive session.

There are very frequently tasks that need to be done interactively. Creating an entire job script might be overkill, but the amount of resources required is too much for a login node to handle. A good example of this might be building a genome index for alignment with a tool like HISAT2. Fortunately, we can run these types of tasks as a one-off with salloc.

salloc runs a single command on the cluster and then exits. Let’s demonstrate this by running the hostname command with salloc. (We can cancel an salloc job with Ctrl-c.)

[netid@dev-amd20 ~]$ salloc hostname
amr-252

salloc accepts all of the same options as sbatch. However, instead of specifying these in a script, these options are specified on the command-line when starting a job. To submit a job that uses 2 CPUs for instance, we could use the following command:

[netid@dev-amd20 ~]$ salloc -n 2 echo "This job will use 2 CPUs."
This job will use 2 CPUs.
This job will use 2 CPUs.

Typically, the resulting shell environment will be the same as that for sbatch.

Interactive jobs

Sometimes, you will need a lot of resources for interactive use. Perhaps it’s our first time running an analysis or we are attempting to debug something that went wrong with a previous job. Fortunately, Slurm makes it easy to start an interactive job with salloc:

[netid@dev-amd20 ~]$ salloc

You should be presented with a bash prompt. Note that the prompt will likely change to reflect your new location, in this case the compute node we are logged on to. You can also verify this with hostname.

salloc can be passed all of the same options that you use to create a job script, but will otherwise use the default options (for better or for worse). However, to make life easier, the cluster provides an alias called interact with reasonable default arguments:

[netid@dev-amd20 ~]$ interact

To see its arguments use the help flag:

[netid@dev-amd20 ~]$ interact -h

When you are done with an interactive job, type exit to quit your session.

Creating remote graphics

To see graphical output inside your jobs, we recommend that you use OnDemand’s Interactive Desktop application. Choose it from the list of Interactive Applications, fill in the same information you would use in a Slurm job script, and press the “Launch” button. A job to create this desktop will be automatically submitted for you; once it starts, you can access it by clicking its “Launch” button.

You will be taken to a more traditional graphical user interface. If you would like to run a command from a terminal, go to the Applications menu, and under System Tools, select Terminal.

Key Points

  • The scheduler handles how compute resources are shared between users.

  • A job is just a shell script.

  • Request slightly more resources than you will need.


Accessing software via Modules

Overview

Teaching: 30 min
Exercises: 15 min
Questions
  • How do we load and unload software packages?

Objectives
  • Load and use a software package.

  • Explain how the shell environment changes when the module mechanism loads or unloads packages.

On a high-performance computing system, it is seldom the case that the software we want to use is available when we log in. It is installed, but we will need to “load” it before it can run.

Before we start using individual software packages, however, we should understand the reasoning behind this approach. The three biggest factors are:

Software incompatibility is a major headache for programmers. Sometimes the presence (or absence) of a software package will break others that depend on it. Two well known examples are Python and C compiler versions. Python 3 famously provides a python command that conflicts with that provided by Python 2. Software compiled against a newer version of the C libraries and then run on a machine that has older C libraries installed will result in a nasty 'GLIBCXX_3.4.20' not found error.

Software versioning is another common issue. A team might depend on a certain package version for their research project - if the software version was to change (for instance, if a package was updated), it might affect their results. Having access to multiple software versions allows a set of researchers to prevent software versioning issues from affecting their results.

Dependencies are where a particular software package (or even a particular version) depends on having access to another software package (or even a particular version of another software package). For example, the VASP materials science software may depend on having a particular version of the FFTW (Fastest Fourier Transform in the West) software library available for it to work.

Environment Modules

Environment modules are the solution to these problems. A module is a self-contained description of a software package – it contains the settings required to run a software package and, usually, encodes required dependencies on other software packages.

There are a number of different environment module implementations commonly used on HPC systems: the two most common are TCL modules and Lmod. Both of these use similar syntax and the concepts are the same so learning to use one will allow you to use whichever is installed on the system you are using. In both implementations the module command is used to interact with environment modules. An additional subcommand is usually added to the command to specify what you want to do. For a list of subcommands you can use module -h or module help. As for all commands, you can access the full help on the man pages with man module.

On login you may start out with a default set of modules loaded or you may start out with an empty environment; this depends on the setup of the system you are using.

Listing Available Modules

To see available software modules, use module avail:

[netid@dev-amd20 ~]$ module avail
------------ /opt/modules/MPI/GCC/6.4.0-2.28/OpenMPI/2.1.2 ------------
   ABySS/2.0.2
   ABySS/2.1.1
   ABySS/2.1.5                                (D)
   ANTs/2.3.2
   ATK/2.28.1
   Armadillo/8.400.0
   BBMap/37.93
   BCFtools/1.9
   BLAST+/2.7.1                               (D)
   BUSCO/3.1.0-Python-3.6.4
   BWA/0.7.17
   BamTools/2.5.1
   BioPerl/1.7.2-Perl-5.26.1
   Boost.Python/1.66.0-Python-3.6.4

[removed most of the output here for clarity]

  Where:
   L:        Module is loaded
   Aliases:  Aliases exist: foo/1.2.3 (1.2) means that "module load foo/1.2"
             will load foo/1.2.3
   D:        Default Module

Use "module spider" to find all possible modules and extensions.
Use "module keyword key1 key2 ..." to search for all possible modules matching
any of the "keys".

Listing Currently Loaded Modules

You can use the module list command to see which modules you currently have loaded in your environment. If you have no modules loaded, you will see a message telling you so.

[netid@dev-amd20 ~]$ module list
Currently Loaded Modules:
  1) GCCcore/6.4.0    6) OpenMPI/2.1.2    11) ScaLAPACK/2.0.2-OpenBLAS-0.2.20  16) ncurses/6.0      21) libffi/3.2.1
  2) binutils/2.28    7) tbb/2018_U3      12) bzip2/1.0.6                      17) libreadline/7.0  22) Python/3.6.4
  3) GNU/6.4.0-2.28   8) imkl/2018.1.163  13) zlib/1.2.11                      18) Tcl/8.6.8        23) Java/1.8.0_152
  4) numactl/2.0.11   9) OpenBLAS/0.2.20  14) Boost/1.67.0                     19) SQLite/3.21.0    24) MATLAB/2018a
  5) hwloc/1.11.8    10) FFTW/3.3.7       15) CMake/3.11.1                     20) GMP/6.1.2        25) powertools/1.2

You should generally find and select modules based on your needs rather than relying on the defaults. We always recommend clearing out all modules before you try loading new ones to help avoid conflicts. To do this, use the module purge command:

[netid@dev-amd20 ~]$ module purge
[netid@dev-amd20 ~]$ module list
No modules loaded

Loading and Unloading Software

To load a software module, use module load. In this example we will use Python 3.

Initially, there is only one version of Python 3 that is available. We can test this by using the which command. which looks for programs the same way that Bash does, so we can use it to tell us where a particular piece of software is stored.

[netid@dev-amd20 ~]$ which python3

If python3 did not exist, we would see output like:

which: no python3 in (
/opt/software/core/lua/lua/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:
/usr/bin:/usr/local/sbin:/usr/sbin:/usr/local/hpcc/bin:/usr/lpp/mmfs/bin:
/opt/ibutils/bin:/opt/puppetlabs/bin:/opt/dell/srvadmin/bin)

However, in our case, we do have an existing python3 available, so we see:

/usr/bin/python3

We need a different Python than the system provided one though, so let us load a module to access it.

We can load the python3 command with module load:

[netid@dev-amd20 ~]$ module load GCC/10.2.0 OpenMPI/4.0.5 Python/3.8.6
[netid@dev-amd20 ~]$ which python3
/opt/software/Python/3.8.6-GCCcore-10.2.0-new/bin/python3

So, what just happened?

To understand the output, first we need to understand the nature of the $PATH environment variable. $PATH is a special environment variable that controls where a UNIX system looks for software. Specifically, $PATH is a list of directories (separated by :) that the OS searches through for a command before giving up and telling us it can’t find it. As with all environment variables, we can print it out using echo.

[netid@dev-amd20 ~]$ echo $PATH
/opt/software/Python/3.8.6-GCCcore-10.2.0-new/bin:/opt/software/SQLite/3.33.0-GCCcore-10.2.0/bin:/opt/software/Tcl/8.6.10-GCCcore-10.2.0/bin:/opt/software/ncurses/6.2-GCCcore-10.2.0/bin:/opt/software/bzip2/1.0.8-GCCcore-10.2.0/bin:/opt/software/OpenMPI/4.0.5-GCC-10.2.0/bin:/opt/software/libfabric/1.11.0-GCCcore-10.2.0/bin:/opt/software/UCX/1.9.0-GCCcore-10.2.0/bin:/opt/software/hwloc/2.2.0-GCCcore-10.2.0/sbin:/opt/software/hwloc/2.2.0-GCCcore-10.2.0/bin:/opt/software/libxml2/2.9.10-GCCcore-10.2.0/bin:/opt/software/XZ/5.2.5-GCCcore-10.2.0/bin:/opt/software/numactl/2.0.13-GCCcore-10.2.0/bin:/opt/software/binutils/2.35-GCCcore-10.2.0/bin:/opt/software/GCCcore/10.2.0/bin:/opt/software/powertools/bin:/opt/software/core/lua/lua/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/local/hpcc/bin:/usr/lpp/mmfs/bin:/opt/ibutils/bin:/opt/puppetlabs/bin:/opt/dell/srvadmin/bin
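Since this is difficult to read as one long line, you can split it into one directory per line; for example (output truncated to the first three entries):

[netid@dev-amd20 ~]$ echo $PATH | tr ':' '\n' | head -n 3
/opt/software/Python/3.8.6-GCCcore-10.2.0-new/bin
/opt/software/SQLite/3.33.0-GCCcore-10.2.0/bin
/opt/software/Tcl/8.6.10-GCCcore-10.2.0/bin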

You’ll notice a similarity to the output of the which command. In this case, there’s only one difference: the different directory at the beginning. When we ran the module load command, it added a directory to the beginning of our $PATH. Let’s examine what’s there:

[netid@dev-amd20 ~]$ ls /opt/software/Python/3.8.6-GCCcore-10.2.0-new/bin
2to3              netaddr        pytest                 rst2s5.py
2to3-3.8          nosetests      py.test                rst2xetex.py
chardetect        nosetests-3.8  python                 rst2xml.py
cygdb             pasteurize     python3                rstpep2html.py
cython            pbr            python3.8              runxlrd.py
cythonize         pip            python3.8-config       sphinx-apidoc
doesitcache       pip3           python3-config         sphinx-autogen
easy_install      pip3.8         rst2html4.py           sphinx-build
easy_install-3.8  pkginfo        rst2html5.py           sphinx-quickstart
flit              poetry         rst2html.py            tabulate
futurize          pybabel        rst2latex.py           virtualenv
idle3             __pycache__    rst2man.py             wheel
idle3.8           pydoc3         rst2odt_prepstyles.py
jsonschema        pydoc3.8       rst2odt.py
keyring           pygmentize     rst2pseudoxml.py

Taking this to its conclusion, module load adds software to your $PATH. It “loads” software. A special note on this: depending on which version of the module program is installed at your site, module load will also load required software dependencies.

To demonstrate, let’s use module list. module list shows all loaded software modules.

[netid@dev-amd20 ~]$ module list
Currently Loaded Modules:
  1) powertools/1.2   8) libxml2/2.9.10     15) ncurses/6.2
  2) GCCcore/10.2.0   9) libpciaccess/0.16  16) libreadline/8.0
  3) zlib/1.2.11     10) hwloc/2.2.0        17) Tcl/8.6.10
  4) binutils/2.35   11) UCX/1.9.0          18) SQLite/3.33.0
  5) GCC/10.2.0      12) libfabric/1.11.0   19) GMP/6.2.0
  6) numactl/2.0.13  13) OpenMPI/4.0.5      20) libffi/3.3
  7) XZ/5.2.5        14) bzip2/1.0.8        21) Python/3.8.6
[netid@dev-amd20 ~]$ module load GROMACS
[netid@dev-amd20 ~]$ module list
Currently Loaded Modules:
  1) powertools/1.2     11) UCX/1.9.0         21) Python/3.8.6
  2) GCCcore/10.2.0     12) libfabric/1.11.0  22) OpenBLAS/0.3.13
  3) zlib/1.2.11        13) OpenMPI/4.0.5     23) FFTW/3.3.8
  4) binutils/2.35      14) bzip2/1.0.8       24) ScaLAPACK/2.1.0
  5) GCC/10.2.0         15) ncurses/6.2       25) pybind11/2.6.0
  6) numactl/2.0.13     16) libreadline/8.0   26) SciPy-bundle/2020.11
  7) XZ/5.2.5           17) Tcl/8.6.10        27) networkx/2.5
  8) libxml2/2.9.10     18) SQLite/3.33.0     28) GROMACS/2021
  9) libpciaccess/0.16  19) GMP/6.2.0
 10) hwloc/2.2.0        20) libffi/3.3

So in this case, loading the GROMACS module (a molecular dynamics software package) also loaded its dependencies, such as OpenBLAS/0.3.13, FFTW/3.3.8, ScaLAPACK/2.1.0, and SciPy-bundle/2020.11. Let’s try unloading the GROMACS package.

[netid@dev-amd20 ~]$ module unload GROMACS
[netid@dev-amd20 ~]$ module list
Currently Loaded Modules:
  1) powertools/1.2   8) libxml2/2.9.10     15) ncurses/6.2
  2) GCCcore/10.2.0   9) libpciaccess/0.16  16) libreadline/8.0
  3) zlib/1.2.11     10) hwloc/2.2.0        17) Tcl/8.6.10
  4) binutils/2.35   11) UCX/1.9.0          18) SQLite/3.33.0
  5) GCC/10.2.0      12) libfabric/1.11.0   19) GMP/6.2.0
  6) numactl/2.0.13  13) OpenMPI/4.0.5      20) libffi/3.3
  7) XZ/5.2.5        14) bzip2/1.0.8        21) Python/3.8.6

So using module unload “un-loads” a module, and depending on how a site is configured it may also unload all of the dependencies (in our case it does). If we wanted to unload everything at once, we could run module purge (unloads everything).

[netid@dev-amd20 ~]$ module purge
[netid@dev-amd20 ~]$ module list
No modules loaded

Note that module purge is informative. It will also let us know if a default set of “sticky” packages cannot be unloaded (and how to actually unload these if we truly so desired).

Note that this module loading process happens principally through the manipulation of environment variables like $PATH. There is usually little or no data transfer involved.

The module loading process manipulates other special environment variables as well, including variables that influence where the system looks for software libraries, and sometimes variables which tell commercial software packages where to find license servers.

The module command also restores these shell environment variables to their previous state when a module is unloaded.
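For example, you can watch one of these other variables change yourself. $LD_LIBRARY_PATH, which controls where the system looks for shared libraries, is a common one; compare the output of the two echo commands in this small sketch (assuming the same Python module used above):

[netid@dev-amd20 ~]$ module purge
[netid@dev-amd20 ~]$ echo $LD_LIBRARY_PATH
[netid@dev-amd20 ~]$ module load GCC/10.2.0 OpenMPI/4.0.5 Python/3.8.6
[netid@dev-amd20 ~]$ echo $LD_LIBRARY_PATH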

Using Software Modules in Scripts

Create a job that is able to run python3 --version. Remember, no software is loaded by default! Running a job is just like logging on to the system (you should not assume a module loaded on the login node is loaded on a compute node).

Solution

[netid@dev-amd20 ~]$ nano python-module.sh
[netid@dev-amd20 ~]$ cat python-module.sh
#!/usr/bin/env bash
#SBATCH -t 00:00:30

module purge
module load GCC/10.2.0 OpenMPI/4.0.5 Python/3.8.6

python3 --version
[netid@dev-amd20 ~]$ sbatch python-module.sh

Software Versioning

So far, we’ve learned how to load and unload software packages. This is very useful. However, we have not yet addressed the issue of software versioning. At some point or other, you will run into issues where only one particular version of some software will be suitable. Perhaps a key bugfix only happened in a certain version, or version X broke compatibility with a file format you use. In either of these example cases, it helps to be very specific about what software is loaded.

Let’s use Python 3 as an example and try to load another version. We can see what versions are available using the module spider command:

[netid@dev-amd20 ~]$ module spider Python
----------------------------------------------------------------------------
  Python:
----------------------------------------------------------------------------
    Description:
      Python is a programming language that lets you work more quickly and
      integrate your systems more effectively.

     Versions:
        Python/2.7.9
        Python/2.7.10
        Python/2.7.11
...skipping...
        Python/3.10.8-bare
        Python/3.10.8
        Python/3.11.3
     Other possible modules matches:
        Biopython  Boost.Python  GitPython  IPython  ScientificPython  ...

----------------------------------------------------------------------------
  To find other possible module matches execute:

      $ module -r spider '.*Python.*'

----------------------------------------------------------------------------
  For detailed information about a specific "Python" package (including how to load the modules) use the module's full name.
  Note that names that have a trailing (E) are extensions provided by other modules.
  For example:

     $ module spider Python/3.11.3
----------------------------------------------------------------------------

To find out what command to use to load a specific version, we can use module spider with that version:

[netid@dev-amd20 ~]$ module spider Python/3.11.3
----------------------------------------------------------------------------
  Python: Python/3.11.3
----------------------------------------------------------------------------
    Description:
      Python is a programming language that lets you work more quickly and
      integrate your systems more effectively.


    You will need to load all module(s) on any one of the lines below before the "Python/3.11.3" module is available to load.

      GCCcore/12.3.0
 
    Help:
      Description
      ===========
      Python is a programming language that lets you work more quickly and integrate your systems
       more effectively.
      
      
      More information
      ================
       - Homepage: https://python.org/
      
      
      Included extensions
      ===================
      flit_core-3.9.0, pip-23.1.2, setuptools-67.7.2, wheel-0.40.0

In this case, we need to load the GCCcore/12.3.0 module first, since Python/3.11.3 needs it to be able to run:

[netid@dev-amd20 ~]$ module load GCCcore/12.3.0 Python/3.11.3

Dependency consistency

If you need to load multiple pieces of software at the same time, you need to make sure they use the same versions of any common dependencies. There is unfortunately not a good solution to this yet! The best way is to use module spider on different versions to try to find a match.
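A minimal sketch of that workflow, assuming you want Python together with the SciPy-bundle module seen earlier (the version numbers are illustrative; check what your site actually provides):

[netid@dev-amd20 ~]$ module spider Python/3.11.3    # note which GCCcore toolchain it requires
[netid@dev-amd20 ~]$ module spider SciPy-bundle     # look for a version built with that same GCCcore
# only load the two together once they report a matching toolchain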

Key Points

  • Load software with module load softwareName.

  • Unload software with module unload softwareName.

  • Clean out all loaded modules with module purge to avoid conflicts.

  • Multiple pieces of software need to use compatible dependencies.


Transferring files with remote computers

Overview

Teaching: 15 min
Exercises: 15 min
Questions
  • How do I transfer files to (and from) the cluster?

Objectives
  • Transfer files to and from a computing cluster.

Performing work on a remote computer is not very useful if we cannot get files to or from the cluster. There are several options for transferring data between computing resources using CLI and GUI utilities, a few of which we will cover.

Download Lesson Files From the Internet

One of the most straightforward ways to download files is to use either curl or wget. One of these is usually installed on most Linux systems, in the macOS terminal, and in Git Bash. Any file that can be downloaded in your web browser through a direct link can be downloaded using curl or wget. This is a quick way to download datasets or source code.
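The general form of the two commands is similar; the bracketed flag is optional and renames the downloaded file (we use it in the commands below):

wget [-O new_name] https://some/link/to/a/file
curl [-o new_name] https://some/link/to/a/file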

Try it out by downloading some material we’ll use later on, from a terminal on your local machine, using the URL of the current codebase:

https://github.com/hpc-carpentry/amdahl/tarball/main

Download the “Tarball”

The word “tarball” in the above URL refers to a compressed archive format commonly used on Linux, which is the operating system the majority of HPC cluster machines run. A tarball is a lot like a .zip file. The actual file extension is .tar.gz, which reflects the two-stage process used to create the file: the files or folders are merged into a single file using tar, which is then compressed using gzip, so the file extension is “tar-dot-g-z.” That’s a mouthful, so people often say “the xyz tarball” instead.

You may also see the extension .tgz, which is just an abbreviation of .tar.gz.

By default, curl and wget download files to the same name as the URL: in this case, main. Use one of the above commands to save the tarball as amdahl.tar.gz.

wget and curl Commands

[user@laptop ~]$ wget -O amdahl.tar.gz https://github.com/hpc-carpentry/amdahl/tarball/main
# or
[user@laptop ~]$ curl -o amdahl.tar.gz https://github.com/hpc-carpentry/amdahl/tarball/main

After downloading the file, use ls to see it in your working directory:

[user@laptop ~]$ ls

Archiving Files

One of the biggest challenges we often face when transferring data between remote HPC systems is large numbers of files. There is an overhead to transferring each individual file, and when we transfer many files these overheads add up and slow our transfers considerably.

The solution is to archive many files into a smaller number of larger files before we transfer the data, which improves transfer efficiency. Sometimes we combine archiving with compression to reduce the amount of data we have to transfer, and so speed up the transfer. The most common archiving command you will use on a (Linux) HPC cluster is tar.

tar can be used to combine files and folders into a single archive file and, optionally, compress the result. Let’s look at the file we downloaded from the lesson site, amdahl.tar.gz.

The .gz part stands for gzip, which is a compression library. It’s common (but not required!) for this kind of file to have a name that describes its contents: it appears somebody took files and folders relating to something called “amdahl,” wrapped them all up into a single file with tar, then compressed that archive with gzip to save space.

Let’s see if that is the case, without unpacking the file. tar prints the “table of contents” with the -t flag, for the file specified with the -f flag followed by the filename. Note that you can concatenate the two flags: writing -t -f is interchangeable with writing -tf together. However, the argument following -f must be a filename, so writing -ft will not work.

[user@laptop ~]$ tar -tf amdahl.tar.gz
hpc-carpentry-amdahl-46c9b4b/
hpc-carpentry-amdahl-46c9b4b/.github/
hpc-carpentry-amdahl-46c9b4b/.github/workflows/
hpc-carpentry-amdahl-46c9b4b/.github/workflows/python-publish.yml
hpc-carpentry-amdahl-46c9b4b/.gitignore
hpc-carpentry-amdahl-46c9b4b/LICENSE
hpc-carpentry-amdahl-46c9b4b/README.md
hpc-carpentry-amdahl-46c9b4b/amdahl/
hpc-carpentry-amdahl-46c9b4b/amdahl/__init__.py
hpc-carpentry-amdahl-46c9b4b/amdahl/__main__.py
hpc-carpentry-amdahl-46c9b4b/amdahl/amdahl.py
hpc-carpentry-amdahl-46c9b4b/requirements.txt
hpc-carpentry-amdahl-46c9b4b/setup.py

This example output shows a folder which contains a few files, where 46c9b4b is an 8-character git commit hash that will change when the source material is updated.

Now let’s unpack the archive. We’ll run tar with a few common flags:

  • -x to extract the archive,
  • -v for verbose output,
  • -z for gzip compression, and
  • -f for the file to be unpacked.

Extract the Archive

Using the flags above, unpack the source code tarball into a new directory named “amdahl” using tar.

[user@laptop ~]$ tar -xvzf amdahl.tar.gz
hpc-carpentry-amdahl-46c9b4b/
hpc-carpentry-amdahl-46c9b4b/.github/
hpc-carpentry-amdahl-46c9b4b/.github/workflows/
hpc-carpentry-amdahl-46c9b4b/.github/workflows/python-publish.yml
hpc-carpentry-amdahl-46c9b4b/.gitignore
hpc-carpentry-amdahl-46c9b4b/LICENSE
hpc-carpentry-amdahl-46c9b4b/README.md
hpc-carpentry-amdahl-46c9b4b/amdahl/
hpc-carpentry-amdahl-46c9b4b/amdahl/__init__.py
hpc-carpentry-amdahl-46c9b4b/amdahl/__main__.py
hpc-carpentry-amdahl-46c9b4b/amdahl/amdahl.py
hpc-carpentry-amdahl-46c9b4b/requirements.txt
hpc-carpentry-amdahl-46c9b4b/setup.py

Note that we did not need to type out -x -v -z -f, thanks to flag concatenation, though the command works identically either way – so long as the concatenated list ends with f, because the next string must specify the name of the file to extract.

The folder has an unfortunate name, so let’s change that to something more convenient.

[user@laptop ~]$ mv hpc-carpentry-amdahl-46c9b4b amdahl

Check the size of the extracted directory and compare to the compressed file size, using du for “disk usage”.

[user@laptop ~]$ du -sh amdahl.tar.gz
8.0K     amdahl.tar.gz
[user@laptop ~]$ du -sh amdahl
48K    amdahl

Text files (including Python source code) compress nicely: the “tarball” is one-sixth the total size of the raw data!

If you want to reverse the process – compressing raw data instead of extracting it – use the c flag instead of x, set the archive filename, then provide a directory to compress:

[user@laptop ~]$ tar -cvzf compressed_code.tar.gz amdahl
amdahl/
amdahl/.github/
amdahl/.github/workflows/
amdahl/.github/workflows/python-publish.yml
amdahl/.gitignore
amdahl/LICENSE
amdahl/README.md
amdahl/amdahl/
amdahl/amdahl/__init__.py
amdahl/amdahl/__main__.py
amdahl/amdahl/amdahl.py
amdahl/requirements.txt
amdahl/setup.py

If you give amdahl.tar.gz as the filename in the above command, tar will update the existing tarball with any changes you made to the files. That would mean adding the new amdahl folder to the existing folder (hpc-carpentry-amdahl-46c9b4b) inside the tarball, doubling the size of the archive!

Working with Windows

When you transfer text files from a Windows system to a Unix system (Mac, Linux, BSD, Solaris, etc.), this can cause problems. Windows encodes its text files slightly differently than Unix and adds an extra character to every line.

On a Unix system, every line in a file ends with a \n (newline). On Windows, every line in a file ends with a \r\n (carriage return + newline). This causes problems sometimes.

Though most modern programming languages and software handle this correctly, in some rare instances you may run into an issue. The solution is to convert a file from Windows to Unix line endings with the dos2unix command.

You can identify if a file has Windows line endings with cat -A filename. A file with Windows line endings will have ^M$ at the end of every line. A file with Unix line endings will have $ at the end of a line.

To convert the file, just run dos2unix filename. (Conversely, to convert back to Windows format, you can run unix2dos filename.)
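A minimal sketch of that check and fix, using a hypothetical file name:

[user@laptop ~]$ cat -A notes.txt      # a text file created on Windows; its lines will end in ^M$
[user@laptop ~]$ dos2unix notes.txt    # convert the line endings in place
[user@laptop ~]$ cat -A notes.txt      # lines should now end in $ only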

Transferring Single Files and Folders With scp

To copy a single file to or from the cluster, we can use scp (“secure copy”). The syntax can be a little complex for new users, but we’ll break it down. The scp command is a relative of the ssh command we used to access the system, and can use the same public-key authentication mechanism.

To upload to another computer, the template command is

[user@laptop ~]$ scp local_file netid@hpcc.msu.edu:remote_destination

in which @ and : are field separators and remote_destination is a path relative to your remote home directory, or a new filename if you wish to change it, or both a relative path and a new filename. If you don’t have a specific folder in mind you can omit the remote_destination and the file will be copied to your home directory on the remote computer (with its original name). If you include a remote_destination, note that scp interprets this the same way cp does when making local copies: if it exists and is a folder, the file is copied inside the folder; if it exists and is a file, the file is overwritten with the contents of local_file; if it does not exist, it is assumed to be a destination filename for local_file.
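For example, each of the following is valid; the remote folder and file names here are hypothetical, chosen only to illustrate the three cases:

[user@laptop ~]$ scp amdahl.tar.gz netid@hpcc.msu.edu:backups/                      # copy into an existing remote folder
[user@laptop ~]$ scp amdahl.tar.gz netid@hpcc.msu.edu:amdahl-copy.tar.gz            # copy and rename
[user@laptop ~]$ scp amdahl.tar.gz netid@hpcc.msu.edu:backups/amdahl-copy.tar.gz    # both at once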

Upload the lesson material to your remote home directory like so:

[user@laptop ~]$ scp amdahl.tar.gz netid@hpcc.msu.edu:

Why Not Download on ICER HPCC Directly?

Most computer clusters are protected from the open internet by a firewall. For enhanced security, some are configured to allow traffic inbound, but not outbound. This means that an authenticated user can send a file to a cluster machine, but a cluster machine cannot retrieve files from a user’s machine or the open Internet.

Try downloading the file directly. Note that it may well fail, and that’s OK!

Commands

[user@laptop ~]$ ssh netid@hpcc.msu.edu
[netid@dev-amd20 ~]$ wget -O amdahl.tar.gz https://github.com/hpc-carpentry/amdahl/tarball/main
# or
[netid@dev-amd20 ~]$ curl -o amdahl.tar.gz https://github.com/hpc-carpentry/amdahl/tarball/main

Did it work? If not, what does the terminal output tell you about what happened?

Transferring a Directory

To transfer an entire directory, we add the -r flag for “recursive”: copy the item specified, and every item below it, and every item below those… until it reaches the bottom of the directory tree rooted at the folder name you provided.

[user@laptop ~]$ scp -r amdahl netid@hpcc.msu.edu:

Caution

For a large directory – either in size or number of files – copying with -r can take a long time to complete.

When using scp, you may have noticed that a : always follows the remote computer name. A string after the : specifies the remote directory you wish to transfer the file or folder to, including a new name if you wish to rename the remote material. If you leave this field blank, scp defaults to your home directory and the name of the local material to be transferred.

On Linux computers, / is the separator in file or directory paths. A path starting with a / is called absolute, since there can be nothing above the root /. A path that does not start with / is called relative, since it is not anchored to the root.

If you want to upload a file to a location inside your home directory – which is often the case – then you don’t need a leading /. After the :, you can type the destination path relative to your home directory. If your home directory is the destination, you can leave the destination field blank, or type ~ – the shorthand for your home directory – for completeness.

With scp, a trailing slash on the target directory is optional, and has no effect. It is important for other commands, like rsync.

A Note on rsync

As you gain experience with transferring files, you may find the scp command limiting. The rsync utility provides advanced features for file transfer and is typically faster compared to both scp and sftp (see below). It is especially useful for transferring large and/or many files and for synchronizing folder contents between computers.

The syntax is similar to scp. To transfer to another computer with commonly used options:

[user@laptop ~]$ rsync -avP amdahl.tar.gz netid@hpcc.msu.edu:

The options are:

  • -a (archive) to preserve file timestamps, permissions, and folders, among other things; implies recursion
  • -v (verbose) to get verbose output to help monitor the transfer
  • -P (partial/progress) to preserve partially transferred files in case of an interruption, and also to display the progress of the transfer.

To recursively copy a directory, we can use the same options:

[user@laptop ~]$ rsync -avP amdahl netid@hpcc.msu.edu:~/

As written, this will place the local directory and its contents under your home directory on the remote system. If a trailing slash is added to the source (amdahl/), a new directory corresponding to the transferred directory will not be created, and the contents of the source directory will be copied directly into the destination directory.
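A quick illustration of that difference, using the directory from this lesson:

[user@laptop ~]$ rsync -avP amdahl  netid@hpcc.msu.edu:~/   # creates ~/amdahl on the remote side
[user@laptop ~]$ rsync -avP amdahl/ netid@hpcc.msu.edu:~/   # copies the *contents* of amdahl straight into your remote home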

To download a file, we simply change the source and destination:

[user@laptop ~]$ rsync -avP netid@hpcc.msu.edu:amdahl ./

File transfers using both scp and rsync use SSH to encrypt data sent through the network. So, if you can connect via SSH, you will be able to transfer files. By default, SSH uses network port 22. If a custom SSH port is in use, you will have to specify it using the appropriate flag, often -p, -P, or --port. Check --help or the man page if you’re unsure.

Change the Rsync Port

Say we have to connect rsync through port 768 instead of 22. How would we modify this command?

[user@laptop ~]$ rsync amdahl.tar.gz netid@hpcc.msu.edu:

Hint: check the man page or “help” for rsync.

Solution

[user@laptop ~]$ man rsync
[user@laptop ~]$ rsync --help | grep port
     --port=PORT             specify double-colon alternate port number
See http://rsync.samba.org/ for updates, bug reports, and answers
[user@laptop ~]$ rsync --port=768 amdahl.tar.gz netid@hpcc.msu.edu:

(Note that this command will fail, as the correct port in this case is the default: 22.)
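If you genuinely did need rsync to reach SSH on a non-default port, another common approach (not needed on this cluster) is to tell rsync which remote-shell command to use:

[user@laptop ~]$ rsync -e 'ssh -p 768' amdahl.tar.gz netid@hpcc.msu.edu: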

Transferring Files Interactively with FileZilla

FileZilla is a cross-platform client for downloading and uploading files to and from a remote computer. It is straightforward to use and works well in most situations. It uses the sftp protocol. You can read more about using the sftp protocol in the command line in the lesson discussion.

Download and install the FileZilla client from https://filezilla-project.org. After installing and opening the program, you should end up with a window with a file browser of your local system on the left hand side of the screen. When you connect to the cluster, your cluster files will appear on the right hand side.

To connect to the cluster, we’ll just need to enter our credentials at the top of the screen:

  • Host: sftp://hpcc.msu.edu (the same hostname we use with ssh, prefixed with sftp://)
  • User: your netid
  • Password: your cluster password
  • Port: (leave blank to use the default)

Hit “Quickconnect” to connect. You should see your remote files appear on the right hand side of the screen. You can drag-and-drop files between the left (local) and right (remote) sides of the screen to transfer files.

Finally, if you need to move large files (typically larger than a gigabyte) from one remote computer to another remote computer, SSH in to the computer hosting the files and use scp or rsync to transfer over to the other. This will be more efficient than using FileZilla (or related applications) that would copy from the source to your local machine, then to the destination machine.

Key Points

  • wget and curl -O download a file from the internet.

  • scp and rsync transfer files to and from your computer.

  • You can use an SFTP client like FileZilla to transfer files through a GUI.


Running a parallel job

Overview

Teaching: 30 min
Exercises: 60 min
Questions
  • How do we execute a task in parallel?

  • What benefits arise from parallel execution?

  • What are the limits of gains from execution in parallel?

Objectives
  • Install a Python package using pip

  • Prepare a job submission script for the parallel executable.

  • Launch jobs with parallel execution.

  • Record and summarize the timing and accuracy of jobs.

  • Describe the relationship between job parallelism and performance.

We now have the tools we need to run a multi-processor job. This is a very important aspect of HPC systems, as parallelism is one of the primary tools we have to improve the performance of computational tasks.

If you disconnected, log back in to the cluster.

[user@laptop ~]$ ssh netid@hpcc.msu.edu

Install the Amdahl Program

With the Amdahl source code on the cluster, we can install it, which will provide access to the amdahl executable. Move into the extracted directory, then use the Package Installer for Python, or pip, to install it in your (“user”) home directory:

[netid@dev-amd20 ~]$ cd amdahl
[netid@dev-amd20 ~]$ python3 -m pip install --user .

Amdahl is Python Code

The Amdahl program is written in Python, and installing or using it requires locating the python3 executable on the login node. If it can’t be found, try listing available modules using module avail, load the appropriate one, and try the command again.
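For example, something like the following should work; the module stack shown here matches the one used in the job scripts later in this episode:

[netid@dev-amd20 ~]$ module avail Python
[netid@dev-amd20 ~]$ module load GCC/10.2.0 OpenMPI/4.0.5 Python/3.8.6
[netid@dev-amd20 ~]$ which python3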

MPI for Python

The Amdahl code has one dependency: mpi4py. If it hasn’t already been installed on the cluster, pip will attempt to collect mpi4py from the Internet and install it for you. If this fails due to a one-way firewall, you must retrieve mpi4py on your local machine and upload it, just as we did for Amdahl.

Retrieve and Upload mpi4py

If installing Amdahl failed because mpi4py could not be installed, retrieve a source tarball for mpi4py (for example, from the project’s GitHub releases page), then copy it to the cluster, extract, and install:

[user@laptop ~]$ wget -O mpi4py.tar.gz https://github.com/mpi4py/mpi4py/releases/download/3.1.4/mpi4py-3.1.4.tar.gz
[user@laptop ~]$ scp mpi4py.tar.gz netid@hpcc.msu.edu:
# or
[user@laptop ~]$ rsync -avP mpi4py.tar.gz netid@hpcc.msu.edu:
[user@laptop ~]$ ssh netid@hpcc.msu.edu
[netid@dev-amd20 ~]$ tar -xvzf mpi4py.tar.gz  # extract the archive
[netid@dev-amd20 ~]$ mv mpi4py-3.1.4 mpi4py   # rename the extracted directory
[netid@dev-amd20 ~]$ cd mpi4py
[netid@dev-amd20 ~]$ python3 -m pip install --user .
[netid@dev-amd20 ~]$ cd ../amdahl
[netid@dev-amd20 ~]$ python3 -m pip install --user .

If pip Raises a Warning…

pip may warn that your user package binaries are not in your PATH.

WARNING: The script amdahl is installed in "${HOME}/.local/bin" which is
not on PATH. Consider adding this directory to PATH or, if you prefer to
suppress this warning, use --no-warn-script-location.

To check whether this warning is a problem, use which to search for the amdahl program:

[netid@dev-amd20 ~]$ which amdahl

If the command returns no output, displaying a new prompt, it means the file amdahl has not been found. You must update the environment variable named PATH to include the missing folder. Edit your shell configuration file as follows, then log off the cluster and back on again so it takes effect.

[netid@dev-amd20 ~]$ nano ~/.bashrc
[netid@dev-amd20 ~]$ tail ~/.bashrc
export PATH=${PATH}:${HOME}/.local/bin

After logging back in to hpcc.msu.edu, the which command should be able to find amdahl without difficulty. If you had to load a Python module, load it again.
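If you would rather not log out and back in, you can also apply the change to your current shell session; this is standard bash behaviour, not specific to this cluster:

[netid@dev-amd20 ~]$ source ~/.bashrc
[netid@dev-amd20 ~]$ which amdahl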

Help!

Many command-line programs include a “help” message. Try it with amdahl:

[netid@dev-amd20 ~]$ amdahl --help
usage: amdahl [-h] [-p [PARALLEL_PROPORTION]] [-w [WORK_SECONDS]] [-t] [-e] [-j [JITTER_PROPORTION]]

optional arguments:
  -h, --help            show this help message and exit
  -p [PARALLEL_PROPORTION], --parallel-proportion [PARALLEL_PROPORTION]
                        Parallel proportion: a float between 0 and 1
  -w [WORK_SECONDS], --work-seconds [WORK_SECONDS]
                        Total seconds of workload: an integer greater than 0
  -t, --terse           Format output as a machine-readable object for easier analysis
  -e, --exact           Exactly match requested timing by disabling random jitter
  -j [JITTER_PROPORTION], --jitter-proportion [JITTER_PROPORTION]
                        Random jitter: a float between -1 and +1

This message doesn’t tell us much about what the program does, but it does tell us the important flags we might want to use when launching it.

Running the Job on a Compute Node

Create a submission file, requesting one task on a single node, then launch it.

[netid@dev-amd20 ~]$ nano serial-job.sh
[netid@dev-amd20 ~]$ cat serial-job.sh
#!/usr/bin/env bash
#SBATCH -J solo-job
#SBATCH -N 1
#SBATCH -n 1

# Load the computing environment we need
module load GCC/10.2.0 OpenMPI/4.0.5 Python/3.8.6

# Execute the task
amdahl
[netid@dev-amd20 ~]$ sbatch serial-job.sh

As before, use the Slurm status commands to check whether your job is running and when it ends:

[netid@dev-amd20 ~]$ squeue -u netid

Use ls to locate the output file. The -t flag sorts in reverse-chronological order: newest first. What was the output?

Read the Job Output

The cluster output should be written to a file in the folder you launched the job from. For example,

[netid@dev-amd20 ~]$ ls -t
slurm-347087.out  serial-job.sh  amdahl  README.md  LICENSE.txt
[netid@dev-amd20 ~]$ cat slurm-347087.out
Doing 30.000 seconds of 'work' on 1 processor,
which should take 30.000 seconds with 0.850 parallel proportion of the workload.

  Hello, World! I am process 0 of 1 on amr-252. I will do all the serial 'work' for 4.500 seconds.
  Hello, World! I am process 0 of 1 on amr-252. I will do parallel 'work' for 25.500 seconds.

Total execution time (according to rank 0): 30.033 seconds

As we saw before, two of the amdahl program flags set the amount of work and the proportion of that work that is parallel in nature. Based on the output, we can see that the code uses a default of 30 seconds of work that is 85% parallel. The program ran for just over 30 seconds in total, and if we run the numbers, it is true that 15% of it was marked ‘serial’ and 85% was ‘parallel’.

Since we only gave the job one CPU, this job wasn’t really parallel: the same processor performed the ‘serial’ work for 4.5 seconds, then the ‘parallel’ part for 25.5 seconds, and no time was saved. The cluster can do better, if we ask.

Running the Parallel Job

The amdahl program uses the Message Passing Interface (MPI) for parallelism – this is a common tool on HPC systems.

What is MPI?

The Message Passing Interface is a set of tools which allow multiple tasks running simultaneously to communicate with each other. Typically, a single executable is run multiple times, possibly on different machines, and the MPI tools are used to inform each instance of the executable about its sibling processes, and which instance it is. MPI also provides tools to allow communication between instances to coordinate work, exchange information about elements of the task, or to transfer data. An MPI instance typically has its own copy of all the local variables.

While MPI-aware executables can generally be run as stand-alone programs, in order for them to run in parallel they must use an MPI run-time environment, which is a specific implementation of the MPI standard. To activate the MPI environment, the program should be started via a command such as mpiexec (or mpirun, or srun, etc. depending on the MPI run-time you need to use), which will ensure that the appropriate run-time support for parallelism is included.

MPI Runtime Arguments

On their own, commands such as mpiexec can take many arguments specifying how many machines will participate in the execution, and you might need these if you would like to run an MPI program on your own (for example, on your laptop). In the context of a queuing system, however, it is frequently the case that MPI run-time will obtain the necessary parameters from the queuing system, by examining the environment variables set when the job is launched.
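For example, running the program by hand outside a scheduler might look like the sketch below, assuming an MPI runtime and the amdahl package are installed wherever you run it; inside a Slurm job we let srun supply these details instead:

[user@laptop ~]$ mpiexec -n 4 amdahl   # -n sets the number of MPI processes explicitly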

Let’s modify the job script to request more cores and use the MPI run-time.

[netid@dev-amd20 ~]$ cp serial-job.sh parallel-job.sh
[netid@dev-amd20 ~]$ nano parallel-job.sh
[netid@dev-amd20 ~]$ cat parallel-job.sh
#!/usr/bin/env bash
#SBATCH -J parallel-job
#SBATCH -N 1
#SBATCH -n 4

# Load the computing environment we need
module load GCC/10.2.0 OpenMPI/4.0.5 Python/3.8.6

# Execute the task
srun amdahl

Then submit your job. Note that the submission command has not really changed from how we submitted the serial job: all the parallel settings are in the batch file rather than the command line.

[netid@dev-amd20 ~]$ sbatch parallel-job.sh

As before, use the status commands to check when your job runs.

[netid@dev-amd20 ~]$ ls -t
slurm-347178.out  parallel-job.sh  slurm-347087.out  serial-job.sh  amdahl  README.md  LICENSE.txt
[netid@dev-amd20 ~]$ cat slurm-347178.out
Doing 30.000 seconds of 'work' on 4 processors,
which should take 10.875 seconds with 0.850 parallel proportion of the workload.

  Hello, World! I am process 0 of 4 on amr-252. I will do all the serial 'work' for 4.500 seconds.
  Hello, World! I am process 2 of 4 on amr-252. I will do parallel 'work' for 6.375 seconds.
  Hello, World! I am process 1 of 4 on amr-252. I will do parallel 'work' for 6.375 seconds.
  Hello, World! I am process 3 of 4 on amr-252. I will do parallel 'work' for 6.375 seconds.
  Hello, World! I am process 0 of 4 on amr-252. I will do parallel 'work' for 6.375 seconds.

Total execution time (according to rank 0): 10.888 seconds

Is it 4× faster?

The parallel job received 4× more processors than the serial job: does that mean it finished in ¼ the time?

Solution

The parallel job did take less time: 11 seconds is better than 30! But it is only a 2.7× improvement, not 4×.

Look at the job output:

  • While “process 0” did serial work, processes 1 through 3 did their parallel work.
  • While process 0 caught up on its parallel work, the rest did nothing at all.

Process 0 always has to finish its serial task before it can start on the parallel work. This sets a lower limit on the amount of time this job will take, no matter how many cores you throw at it.

This is the basic principle behind Amdahl’s Law, which is one way of predicting improvements in execution time for a fixed workload that can be subdivided and run in parallel to some extent.
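For reference, the usual textbook statement of Amdahl’s Law (not printed by the amdahl program itself) gives the ideal speedup S for a workload with parallel fraction p run on n processors:

$$ S(n) = \frac{1}{(1 - p) + p/n} $$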

How Much Does Parallel Execution Improve Performance?

In theory, dividing up a perfectly parallel calculation among n MPI processes should produce a decrease in total run time by a factor of n. As we have just seen, real programs need some time for the MPI processes to communicate and coordinate, and some types of calculations can’t be subdivided: they only run effectively on a single CPU.

Additionally, if the MPI processes operate on different physical CPUs in the computer, or across multiple compute nodes, even more time is required for communication than it takes when all processes operate on a single CPU.

In practice, it’s common to evaluate the parallelism of an MPI program by

  • running the program across a range of CPU counts,
  • recording the execution time of each run, and
  • comparing each execution time to the time taken on a single CPU.

Since “more is better” – improvement is easier to interpret from increases in some quantity than decreases – comparisons are made using the speedup factor S, which is calculated as the single-CPU execution time divided by the multi-CPU execution time. For a perfectly parallel program, a plot of the speedup S versus the number of CPUs n would give a straight line, S = n.

Let’s run one more job, so we can see how close to a straight line our amdahl code gets.

[netid@dev-amd20 ~]$ nano parallel-job.sh
[netid@dev-amd20 ~]$ cat parallel-job.sh
#!/usr/bin/env bash
#SBATCH -J parallel-job
#SBATCH -N 1
#SBATCH -n 8

# Load the computing environment we need
module load GCC/10.2.0 OpenMPI/4.0.5 Python/3.8.6

# Execute the task
srun amdahl

Then submit your job as before.

[netid@dev-amd20 ~]$ sbatch parallel-job.sh

As before, use the status commands to check when your job runs.

[netid@dev-amd20 ~]$ ls -t
slurm-347271.out  parallel-job.sh  slurm-347178.out  slurm-347087.out  serial-job.sh  amdahl  README.md  LICENSE.txt
[netid@dev-amd20 ~]$ cat slurm-347271.out
Doing 30.000 seconds of 'work' on 8 processors,
which should take 7.688 seconds with 0.850 parallel proportion of the workload.

  Hello, World! I am process 4 of 8 on amr-252. I will do parallel 'work' for 3.188 seconds.
  Hello, World! I am process 0 of 8 on amr-252. I will do all the serial 'work' for 4.500 seconds.
  Hello, World! I am process 2 of 8 on amr-252. I will do parallel 'work' for 3.188 seconds.
  Hello, World! I am process 1 of 8 on amr-252. I will do parallel 'work' for 3.188 seconds.
  Hello, World! I am process 3 of 8 on amr-252. I will do parallel 'work' for 3.188 seconds.
  Hello, World! I am process 5 of 8 on amr-252. I will do parallel 'work' for 3.188 seconds.
  Hello, World! I am process 6 of 8 on amr-252. I will do parallel 'work' for 3.188 seconds.
  Hello, World! I am process 7 of 8 on amr-252. I will do parallel 'work' for 3.188 seconds.
  Hello, World! I am process 0 of 8 on amr-252. I will do parallel 'work' for 3.188 seconds.

Total execution time (according to rank 0): 7.697 seconds

Non-Linear Output

When we ran the job with 4 parallel workers, the serial job wrote its output first, then the parallel processes wrote their output, with process 0 coming in first and last.

With 8 workers, this is not the case: since the parallel workers take less time than the serial work, it is hard to say which process will write its output first, except that it will not be process 0!

Now, let’s summarize the amount of time it took each job to run:

  Number of CPUs   Runtime (sec)
               1          30.033
               4          10.888
               8           7.697

Then, use the first row to compute speedups S, using Python as a command-line calculator:

[netid@dev-amd20 ~]$ for n in 30.033 10.888 7.697; do python3 -c "print(30.033 / $n)"; done
  Number of CPUs   Speedup   Ideal
               1      1.0        1
               4      2.75       4
               8      3.90       8

The job output files have been telling us that this program is performing 85% of its work in parallel, leaving 15% to run in serial. This seems reasonably high, but our quick study of speedup shows that in order to get a 4× speedup, we have to use 8 or 9 processors in parallel. In real programs, the speedup factor is influenced by

  • the fraction of the work that must run in serial,
  • communication and coordination overhead between processes, and
  • contention for shared resources such as disk and network I/O.

Using Amdahl’s Law, you can prove that with this program, it is impossible to reach 8× speedup, no matter how many processors you have on hand. Details of that analysis, with results to back it up, are left for the next class in the HPC Carpentry workshop, HPC Workflows.
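A quick back-of-the-envelope check of that claim, using the formula above with p = 0.85: as n grows, the p/n term vanishes, so the speedup approaches 1/(1 − p). Using Python as a calculator again:

[netid@dev-amd20 ~]$ python3 -c "p = 0.85; print(round(1 / (1 - p), 2))"
6.67

So even with an unlimited number of processors, this program cannot exceed roughly 6.7× speedup, which is indeed below 8×.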

In an HPC environment, we try to reduce the execution time for all types of jobs, and MPI is an extremely common way to combine dozens, hundreds, or thousands of CPUs into solving a single problem. To learn more about parallelization, see the parallel novice lesson.

Key Points

  • Parallel programming allows applications to take advantage of parallel hardware.

  • The queuing system facilitates executing parallel tasks.

  • Performance improvements from parallel execution do not scale linearly.


Using resources effectively

Overview

Teaching: 10 min
Exercises: 20 min
Questions
  • How can I review past jobs?

  • How can I use this knowledge to create a more accurate submission script?

Objectives
  • Look up job statistics.

  • Make more accurate resource requests in job scripts based on data describing past performance.

We’ve touched on all the skills you need to interact with an HPC cluster: logging in over SSH, loading software modules, submitting parallel jobs, and finding the output. Let’s learn about estimating resource usage and why it might matter.

Estimating Required Resources Using the Scheduler

Although we covered requesting resources from the scheduler earlier with the π code, how do we know what type of resources the software will need in the first place, and its demand for each? In general, unless the software documentation or user testimonials provide some idea, we won’t know how much memory or compute time a program will need.

Read the Documentation

Most HPC facilities maintain documentation as a wiki, a website, or a document sent along when you register for an account. Take a look at these resources, and search for the software you plan to use: somebody might have written up guidance for getting the most out of it.

A convenient way of figuring out the resources required for a job to run successfully is to submit a test job, and then ask the scheduler about its impact using sacct -u netid. You can use this knowledge to set up the next job with a closer estimate of its load on the system. A good general rule is to ask the scheduler for 20% to 30% more time and memory than you expect the job to need. This ensures that minor fluctuations in run time or memory use will not result in your job being cancelled by the scheduler. Keep in mind that if you ask for too much, your job may not run even though enough resources are available, because the scheduler will be waiting for other people’s jobs to finish and free up the resources needed to match what you asked for.
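As a concrete (hypothetical) example of that rule of thumb: if a test run took about 10 minutes and used about 2 GB of memory, a reasonable next request would be roughly 13 minutes and 2.5 GB, using standard Slurm directives such as:

#SBATCH -t 00:13:00
#SBATCH --mem=2500M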

Stats

Since we already submitted amdahl to run on the cluster, we can query the scheduler to see how long our job took and what resources were used. We will use sacct -u netid to get statistics about parallel-job.sh.

[netid@dev-amd20 ~]$ sacct -u netid
JobID           JobName  Partition    Account  AllocCPUS      State ExitCode 
------------ ---------- ---------- ---------- ---------- ---------- -------- 
29097625       solo-job general-s+    general          1  COMPLETED      0:0 
29097625.ba+      batch               general          1  COMPLETED      0:0 
29097625.ex+     extern               general          1  COMPLETED      0:0 
29099234     parallel-+ general-s+    general          4  COMPLETED      0:0 
29099234.ba+      batch               general          4  COMPLETED      0:0 
29099234.ex+     extern               general          4  COMPLETED      0:0 
29099234.0       amdahl               general          4  COMPLETED      0:0 
29099267     parallel-+ general-s+    general          8  COMPLETED      0:0 
29099267.ba+      batch               general          8  COMPLETED      0:0 
29099267.ex+     extern               general          8  COMPLETED      0:0 
29099267.0       amdahl               general          8  COMPLETED      0:0 

This shows all the jobs we ran today (note that there are multiple entries per job). To get information about a specific job (for example, 347087), we change the command slightly.

[netid@dev-amd20 ~]$ sacct -u netid -l -j 347087

It will show a lot of information; in fact, every piece of information collected about your job by the scheduler will show up here. It may be useful to redirect this output to less to make it easier to view (use the left and right arrow keys to scroll through fields).

[netid@dev-amd20 ~]$ sacct -u netid -l -j 347087 | less -S

Discussion

This view can help compare the amount of time requested and actually used, duration of residence in the queue before launching, and memory footprint on the compute node(s).

How accurate were our estimates?
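One way to pull out just the fields relevant to that comparison is to give sacct an explicit format string; these are standard sacct field names, and you should substitute one of your own job IDs:

[netid@dev-amd20 ~]$ sacct -u netid -j 347087 --format=JobID,JobName,Elapsed,Timelimit,MaxRSS,ReqMem,State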

Improving Resource Requests

From the job history, we see that amdahl jobs finished executing in at most a few minutes, once dispatched. The time estimate we provided in the job script was far too long! This makes it harder for the queuing system to accurately estimate when resources will become free for other jobs. Practically, this means that the queuing system waits to dispatch our amdahl job until the full requested time slot opens, instead of “sneaking it in” to a much shorter window where the job could actually finish. Specifying the expected runtime in the submission script more accurately will help alleviate cluster congestion and may get your job dispatched earlier.

Narrow the Time Estimate

Edit parallel-job.sh to set a better time estimate. How close can you get?

Hint: use -t.

Solution

The following line tells Slurm that our job should finish within 2 minutes:

#SBATCH -t 00:02:00

Key Points

  • Accurate job scripts help the queuing system efficiently allocate shared resources.


Using shared resources responsibly

Overview

Teaching: 15 min
Exercises: 5 min
Questions
  • How can I be a responsible user?

  • How can I protect my data?

  • How can I best get large amounts of data off an HPC system?

Objectives
  • Describe how the actions of a single user can affect the experience of others on a shared system.

  • Discuss the behaviour of a considerate shared system citizen.

  • Explain the importance of backing up critical data.

  • Describe the challenges with transferring large amounts of data off HPC systems.

  • Convert many files to a single archive file using tar.

One of the major differences between using remote HPC resources and your own system (e.g. your laptop) is that remote resources are shared. How many users the resource is shared between at any one time varies from system to system, but it is unlikely you will ever be the only user logged into or using such a system.

The widespread usage of scheduling systems where users submit jobs on HPC resources is a natural outcome of the shared nature of these resources. There are other things you, as an upstanding member of the community, need to consider.

Be Kind to the Login Nodes

The login node is often busy managing all of the logged in users, creating and editing files and compiling software. If the machine runs out of memory or processing capacity, it will become very slow and unusable for everyone. While the machine is meant to be used, be sure to do so responsibly – in ways that will not adversely impact other users’ experience.

Login nodes are always the right place to launch jobs. Cluster policies vary, but they may also be used for proving out workflows, and in some cases, may host advanced cluster-specific debugging or development tools. The cluster may have modules that need to be loaded, possibly in a certain order, and paths or library versions that differ from your laptop, and doing an interactive test run on the head node is a quick and reliable way to discover and fix these issues.

Login Nodes Are a Shared Resource

Remember, the login node is shared with all other users and your actions could cause issues for other people. Think carefully about the potential implications of issuing commands that may use large amounts of resource.

Unsure? Ask your friendly systems administrator (“sysadmin”) if the thing you’re contemplating is suitable for the login node, or if there’s another mechanism to get it done safely.

You can always use the commands top and ps ux to list the processes that are running on the login node along with the amount of CPU and memory they are using. If this check reveals that the login node is somewhat idle, you can safely use it for your non-routine processing task. If something goes wrong – the process takes too long, or doesn’t respond – you can use the kill command along with the PID to terminate the process.
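For example (the PID shown is a placeholder; use the one reported for your own process):

[netid@dev-amd20 ~]$ ps ux            # find the PID of the misbehaving process
[netid@dev-amd20 ~]$ kill 12345       # ask it to stop
[netid@dev-amd20 ~]$ kill -9 12345    # force it to stop, if it ignores the first request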

Login Node Etiquette

Which of these commands would be a routine task to run on the login node?

  1. python physics_sim.py
  2. make
  3. create_directories.sh
  4. molecular_dynamics_2
  5. tar -xzf R-3.3.0.tar.gz

Solution

Building software, creating directories, and unpacking software are common and acceptable tasks for the login node: options #2 (make), #3 (mkdir), and #5 (tar) are probably OK. Note that script names do not always reflect their contents: before launching #3, please less create_directories.sh and make sure it’s not a Trojan horse.

Running resource-intensive applications is frowned upon. Unless you are sure it will not affect other users, do not run jobs like #1 (python) or #4 (custom MD code). If you’re unsure, ask your friendly sysadmin for advice.

If you experience performance issues with a login node you should report it to the system staff (usually via the helpdesk) for them to investigate.

Test Before Scaling

Remember that you are generally charged for usage on shared systems. A simple mistake in a job script can end up costing a large amount of resource budget. Imagine a job script with a mistake that makes it sit doing nothing for 24 hours on 1000 cores or one where you have requested 2000 cores by mistake and only use 100 of them! This problem can be compounded when people write scripts that automate job submission (for example, when running the same calculation or analysis over lots of different parameters or files). When this happens it hurts both you (as you waste lots of charged resource) and other users (who are blocked from accessing the idle compute nodes). On very busy resources you may wait many days in a queue for your job to fail within 10 seconds of starting due to a trivial typo in the job script. This is extremely frustrating!

Most systems provide dedicated resources for testing that have short wait times to help you avoid this issue.

Test Job Submission Scripts That Use Large Amounts of Resources

Before submitting a large run of jobs, submit one as a test first to make sure everything works as expected.

Before submitting a very large or very long job submit a short truncated test to ensure that the job starts as expected.

Have a Backup Plan

Although many HPC systems keep backups, these do not always cover all the file systems available and may only be for disaster recovery purposes (i.e. for restoring the whole file system if it is lost, rather than recovering an individual file or directory you have deleted by mistake). Protecting critical data from corruption or deletion is primarily your responsibility: keep your own backup copies.

Version control systems (such as Git) often have free, cloud-based offerings (e.g., GitHub and GitLab) that are generally used for storing source code. Even if you are not writing your own programs, these can be very useful for storing job scripts, analysis scripts and small input files.

If you are building software, you may have a large amount of source code that you compile to build your executable. Since this data can generally be recovered by re-downloading the code, or re-running the checkout operation from the source code repository, this data is also less critical to protect.

For larger amounts of data, especially important results from your runs, which may be irreplaceable, you should make sure you have a robust system in place for taking copies of data off the HPC system wherever possible to backed-up storage. Tools such as rsync can be very useful for this.

Your access to the shared HPC system will generally be time-limited so you should ensure you have a plan for transferring your data off the system before your access finishes. The time required to transfer large amounts of data should not be underestimated and you should ensure you have planned for this early enough (ideally, before you even start using the system for your research).

In all these cases, the helpdesk of the system you are using should be able to provide useful guidance on your options for data transfer for the volumes of data you will be using.

Your Data Is Your Responsibility

Make sure you understand what the backup policy is on the file systems on the system you are using and what implications this has for your work if you lose your data on the system. Plan your backups of critical data and how you will transfer data off the system throughout the project.

Transferring Data

As mentioned above, many users run into the challenge of transferring large amounts of data off HPC systems at some point (this is more often in transferring data off than onto systems but the advice below applies in either case). Data transfer speed may be limited by many different factors so the best data transfer mechanism to use depends on the type of data being transferred and where the data is going.

The components between your data’s source and destination have varying levels of performance, and in particular, may have different capabilities with respect to bandwidth and latency.

Bandwidth is generally the raw amount of data per unit time a device is capable of transmitting or receiving. It’s a common and generally well-understood metric.

Latency is a bit more subtle. For data transfers, it may be thought of as the amount of time it takes to get data out of storage and into a transmittable form. Latency issues are the reason it’s advisable to execute data transfers by moving a small number of large files, rather than the converse.

Some of the key components and their associated issues are:

  • the file systems at each end: raw disk bandwidth, plus the metadata overhead of handling many small files,
  • the network links between the two systems, which may pass through several intermediate devices, and
  • firewalls and gateway (or dedicated data-transfer) nodes, which can limit both what is allowed through and how fast.

As mentioned above, if you have related data that consists of a large number of small files it is strongly recommended to pack the files into a larger archive file for long term storage and transfer. A single large file makes more efficient use of the file system and is easier to move, copy and transfer because significantly fewer metadata operations are required. Archive files can be created using tools like tar and zip. We have already met tar when we talked about data transfer earlier.
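For example, to pack a hypothetical results folder full of small files into one archive before moving it off the cluster (the destination host here is a placeholder):

[netid@dev-amd20 ~]$ tar -czf results.tar.gz results/
[netid@dev-amd20 ~]$ rsync -avP results.tar.gz user@backup.example.org: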

[Figure: schematic diagram of bandwidth and latency for disk and network I/O. Each component is connected by a blue line whose width is proportional to the interface bandwidth; the small mazes at the link points illustrate the latency of the link, with more tortuous mazes indicating higher latency.]

Consider the Best Way to Transfer Data

If you are transferring large amounts of data you will need to think about what may affect your transfer performance. It is always useful to run some tests that you can use to extrapolate how long it will take to transfer your data.

Say you have a “data” folder containing 10,000 or so files, a healthy mix of small and large ASCII and binary data. Which of the following would be the best way to transfer them to ICER HPCC?

  1. [user@laptop ~]$ scp -r data netid@hpcc.msu.edu:~/
    
  2. [user@laptop ~]$ rsync -ra data netid@hpcc.msu.edu:~/
    
  3. [user@laptop ~]$ rsync -raz data netid@hpcc.msu.edu:~/
    
  4. [user@laptop ~]$ tar -cvf data.tar data
    [user@laptop ~]$ rsync -raz data.tar netid@hpcc.msu.edu:~/
    
  5. [user@laptop ~]$ tar -cvzf data.tar.gz data
    [user@laptop ~]$ rsync -ra data.tar.gz netid@hpcc.msu.edu:~/
    

Solution

  1. scp will recursively copy the directory. This works, but without compression.
  2. rsync -ra works like scp -r, but preserves file information like creation times. This is marginally better.
  3. rsync -raz adds compression, which will save some bandwidth. If you have a strong CPU at both ends of the line, and you’re on a slow network, this is a good choice.
  4. This command first uses tar to merge everything into a single file, then rsync -z to transfer it with compression. With this large number of files, metadata overhead can hamper your transfer, so this is a good idea.
  5. This command uses tar -z to compress the archive, then rsync to transfer it. This may perform similarly to #4, but in most cases (for large datasets), it’s the best combination of high throughput and low latency (making the most of your time and network connection).

Key Points

  • Be careful how you use the login node.

  • Your data on the system is your responsibility.

  • Plan and test large data transfers.

  • It is often best to convert many files to a single archive file before transferring.