Exploring Remote Resources
Overview
Teaching: 25 min
Exercises: 10 min
Questions
- How does my local computer compare to the remote systems?
- How does the login node compare to the compute nodes?
- Are all compute nodes alike?
Objectives
- Survey system resources using nproc, free, and the queuing system
- Compare & contrast resources on the local machine, login node, and worker nodes
- Learn about the various filesystems on the cluster using df
- Find out who else is logged in
- Assess the number of idle and occupied nodes
Look Around the Remote System
If you have not already connected to ICER HPCC, please do so now. Take a look at your home directory on the remote system:
[netid@dev-amd20 ~]$ ls
What’s different between your machine and the remote?
Open a second terminal window on your local computer and run the ls command (without logging in to ICER HPCC). What differences do you see?
Solution
You would likely see something more like this:
[user@laptop ~]$ ls
Applications Documents Library Music Public Desktop Downloads Movies Pictures
The remote computer’s home directory shares almost nothing in common with the local computer: they are completely separate systems!
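If you are ever unsure which machine a terminal window is connected to, the hostname command (available on most systems) prints the name of the computer you are logged in to:
[user@laptop ~]$ hostname
[netid@dev-amd20 ~]$ hostname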
Most high-performance computing systems run the Linux operating system, which
is built around the UNIX Filesystem Hierarchy Standard. Instead of
having a separate root for each hard drive or storage medium, all files and
devices are anchored to the “root” directory, which is /:
[netid@dev-amd20 ~]$ ls /
bin etc lib64 proc sbin sys var
boot mnt root scratch tmp working
dev lib opt run srv usr
The “/mnt/home” subdirectory is the one where we generally want to keep all of our files. Other folders on a UNIX OS contain system files and change as you install new software or upgrade your OS.
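You can confirm that your own home directory lives under /mnt/home by printing the HOME environment variable:
[netid@dev-amd20 ~]$ echo $HOME
/mnt/home/netid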
Using HPC filesystems
On HPC systems, you have a number of places where you can store your files. These differ in both the amount of space allocated and whether or not they are backed up.
- Home – often a network filesystem; data stored here is available throughout the HPC system and is often backed up periodically. Files stored here are typically slower to access because the data actually lives on another computer and is transmitted to you over the network!
  - Access using /mnt/home/netid
- Scratch – typically faster than the networked Home directory, but not usually backed up, and should not be used for long-term storage.
  - Access using /mnt/gs21/scratch/netid
- Research – similar to Home, but useful for collaboration. Multiple users of a single group have access, and the space is managed by a PI.
  - Access using /mnt/research/group
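As a quick check, you can list these locations directly, substituting your own netid and group; a freshly created scratch space may well be empty:
[netid@dev-amd20 ~]$ ls /mnt/home/netid
[netid@dev-amd20 ~]$ ls /mnt/gs21/scratch/netid
[netid@dev-amd20 ~]$ ls /mnt/research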
Quotas
Each of these file spaces is subject to limits to ensure that all resources are shared appropriately:
- Home: 50GB and 1 million files.
- Research: 50GB and 1 million files.
- Scratch: 50TB and 1 million files. FILES UNMODIFIED AFTER 45 DAYS WILL BE DELETED.
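Because of the scratch purge policy above, it can be useful to check which of your scratch files have not been modified in the past 45 days; one way is with find (substitute your own netid):
[netid@dev-amd20 ~]$ find /mnt/gs21/scratch/netid -type f -mtime +45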
To check your usage, use the quota command:
[netid@dev-amd20 ~]$ quota
home directory:
                    Space    Space    Space      Space     Files     Files    Files      Files
                    Quota    Used     Available  % Used    Quota     Used     Available  % Used
-----------------------------------------------------------------------------------------------
/mnt/home/netid     50G      32G      18G        64%       1048576   432525   616051     59%
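If you find yourself approaching a quota, du can show which of your directories take up the most space; for example, on most Linux systems:
[netid@dev-amd20 ~]$ du -sh ~/* | sort -h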
Nodes
Individual computers that compose a cluster are typically called nodes (although you will also hear people call them servers, computers and machines). On a cluster, there are different types of nodes for different types of tasks. The node where you are right now is called the login node, head node, landing pad, or submit node. A login node serves as an access point to the cluster.
As a gateway, the login node should not be used for time-consuming or resource-intensive tasks. You should be alert to this, and check with your site’s operators or documentation for details of what is and isn’t allowed. It is well suited for uploading and downloading files, setting up software, and running tests. Generally speaking, in these lessons, we will avoid running jobs on the login node.
Who else is logged in to the login node?
[netid@dev-amd20 ~]$ who
This may show only your user ID, but there are likely several other people (including fellow learners) connected right now.
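Since who prints one line per login session, you can count how many sessions are currently open by piping its output into wc:
[netid@dev-amd20 ~]$ who | wc -l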
Dedicated Transfer Nodes
If you want to transfer larger amounts of data to or from the cluster, some systems offer dedicated nodes for data transfers only. The motivation for this is that large data transfers should not obstruct operation of the login node for anybody else. Check with your cluster’s documentation or its support team to see whether such a transfer node is available. As a rule of thumb, consider any transfer larger than 500 MB to 1 GB as large, although these thresholds vary depending on factors such as your own network connection and that of your cluster.
The real work on a cluster gets done by the compute (or worker) nodes. Compute nodes come in many shapes and sizes, but generally are dedicated to long or hard tasks that require a lot of computational resources.
All interaction with the compute nodes is handled by a specialized piece of software called a scheduler (the scheduler used in this lesson is called Slurm). We’ll learn more about how to use the scheduler to submit jobs next, but for now, it can also tell us more information about the compute nodes.
For example, we can view all of the compute nodes by running the command sinfo.
[netid@dev-amd20 ~]$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
scavenger up 7-00:00:00 3 inval css-122,lac-115,skl-138
scavenger up 7-00:00:00 11 drain* amr-066,lac-[157,164,239,312,314,341,353],nch-000,skl-[154,160]
scavenger up 7-00:00:00 13 down* csm-022,csn-[001,025],csp-[025-026],css-[033,074,097,102,117],lac-[133,148,390]
scavenger up 7-00:00:00 27 comp acm-[037,039,041-042,061,064-065],amr-215,lac-[054,056,060-061,063,066,088-089,092,102,124-127,129,217,250,334-335]
scavenger up 7-00:00:00 2 drng lac-[044,355]
scavenger up 7-00:00:00 4 drain lac-[169,199,212],skl-028
scavenger up 7-00:00:00 1 resv nch-001
scavenger up 7-00:00:00 689 mix acm-[000-008,015-036,038,040,043-053,056-060,062-063,066-071],amr-[000-003,005,008,010-044,047-050,052-055,057-061,070,074,078,080-081,087,090-098,100-107,109-115,117-120,125-127,130,134-148,150-156,158-169,171-173,175,177-179,181-182,184-185,187-193,196-214,216-237,240-244,246-252],csn-[017,032],css-[049,055,059,063,066,089,106,118,123],lac-[000-021,023-025,028-031,035-037,043,046,048,050-052,055,057-059,062,064-065,067-078,082-085,087,090-091,093-101,103-111,114,116,118-122,130-132,134-147,149-156,158-163,165,167-168,170,173,180-189,192-194,200-209,213-216,218-225,229-231,233-238,240-248,251-252,254-261,282-283,285,288-305,307-311,313,315,317-333,342,344,347,350-352,361-362,365-369,388,392,400,408-415,419,421-425,427-428,442-443],nal-[000-007,009-010],nif-[000-004],nvf-[006-007,009,011,013-015,017,020],nvl-[001,003,007],qml-002,skl-[005,010-012,014-016,021,023,026-027,029-030,034-052,054-071,073-076,078-079,082-083,085-095,097-114,116-137,139,142,148-153,156-159,161-163,165-166],vim-[000-002]
scavenger up 7-00:00:00 327 alloc acm-[009-014,054-055],amr-[004,006-007,009,045-046,051,056,062-065,067-069,071-073,075-077,079,082-086,088-089,099,108,116,121-124,128-129,131-133,149,157,170,174,176,180,183,186,194-195,253],csm-[001-005,017-018,020-021],csn-[002-011,013-016,018-024,026-031,033-037,039],csp-[006,016-020],css-[002-003,007-010,016-020,023,032,034-036,038-039,042-044,047,050,052,056-057,060-062,064,072,075,083,088,091,093-095,099,101,103,111-114,116,119,121,124,126-127],lac-[026-027,033,038,040-042,053,079-081,086,112-113,117,123,171-172,174-179,190-191,195-198,210-211,228,232,253,277-281,284,286-287,306,316,336-340,343,345-346,348-349,354,356-360,363-364,372,374-376,378-387,391,393-399,401-407,416-417,420,426,429-441,444-445],nal-008,nvf-[000-005,008,010,012,016,018-019],nvl-[000,002,004-006],qml-[000,003],skl-[000-003,006-009,013,017-020,022,024-025,031-033,053,072,077,080-081,084,096,115,140-141,143-147,155,164,167]
scavenger up 7-00:00:00 2 idle nif-005,skl-004
ondemand up 7-00:00:00 4 down* csn-[001,025],csp-025,css-097
ondemand up 7-00:00:00 9 mix csn-[017,032],css-[049,055,059,063,066,089,118]
ondemand up 7-00:00:00 74 alloc csm-001,csn-[002-011,013-016,018-024,026-031,033-036],csp-[006,016-018,020],css-[008-010,016-019,023,032,034-036,038-039,042-044,047,050,052,056-057,060-062,064,075,083,088,093-095,099,121,124,126],qml-000
general-short up 4:00:00 2 inval lac-115,skl-138
general-short up 4:00:00 11 drain* amr-066,lac-[157,164,239,312,314,341,353],nch-000,skl-[154,160]
general-short up 4:00:00 3 down* lac-[133,148,390]
general-short up 4:00:00 27 comp acm-[037,039,041-042,061,064-065],amr-215,lac-[054,056,060-061,063,066,088-089,092,102,124-127,129,217,250,334-335]
general-short up 4:00:00 2 drng lac-[044,355]
general-short up 4:00:00 4 drain lac-[169,199,212],skl-028
general-short up 4:00:00 1 resv nch-001
general-short up 4:00:00 671 mix acm-[000-008,015-036,038,040,043-053,056-060,062-063,066-069,071],amr-[000-003,005,008,010-044,047-050,052-055,057-061,070,074,078,080-081,087,090-098,100-107,109-115,117-120,125-127,130,134-148,150-156,158-169,171-173,175,177-179,181-182,184-185,187-193,196-214,216-237,240-244,246-252],lac-[000-021,023-025,028-031,035-037,043,046,048,050-052,055,057-059,062,064-065,067-078,082-085,087,090-091,093-101,103-111,114,116,118-122,130-132,134-147,149-156,158-163,165,167-168,170,173,180-189,192-194,200-209,213-216,218-225,229-231,233-238,240-248,251-252,254-261,282-283,285,288-305,307-311,313,315,317-333,342,344,347,350-352,361-362,365-369,388,392,400,408-415,419,421-425,427-428,442-443],nal-[000-003,009-010],nif-[001-004],nvf-[006-007,009,011,013-015,017,020],nvl-[001,003,007],skl-[005,010-012,014-016,021,023,026-027,029-030,034-052,054-071,073-076,078-079,082-083,085-095,097-114,116-137,139,142,148-153,156-159,161-163,165-166],vim-[000-002]
general-short up 4:00:00 226 alloc acm-[009-014,054-055],amr-[004,006-007,009,045-046,051,056,062-065,067-069,071-073,075-077,079,082-086,088-089,099,108,116,121-124,128-129,131-133,149,157,170,174,176,180,183,186,194-195,253],lac-[026-027,033,038,040-042,053,079-081,086,112-113,117,123,171-172,174-179,190-191,195-198,210-211,228,232,253,277-281,284,286-287,306,316,336-340,343,345-346,348-349,354,356-360,363-364,372,374-376,378-387,391,393-399,401-407,416-417,420,426,429-441,444-445],nal-008,nvf-[000-005,008,010,012,016,018-019],nvl-[000,002,004-006],skl-[000-003,006-009,013,017-020,022,024-025,031-033,053,072,077,080-081,084,096,115,140-141,143-147,155,164,167]
general-short up 4:00:00 2 idle nif-005,skl-004
general-long up 7-00:00:00 1 drain* lac-353
general-long up 7-00:00:00 1 down* lac-390
general-long up 7-00:00:00 6 comp acm-[037,039,041-042],amr-215,lac-217
general-long up 7-00:00:00 2 drng lac-[044,355]
general-long up 7-00:00:00 1 drain skl-028
general-long up 7-00:00:00 199 mix acm-[017-036,038,040,043-047],amr-[184-185,187-193,196-214,216-237,246-252],lac-[043,078,209,225,230-231,233-235,246-248,252,282-283,300-301,388,392,408-415,419,422-425,427-428,442-443],skl-[026-027,029-030,034-052,054-071,073-076,078-079,082-083,085-095,097-100,102-112,162-163,165-166]
general-long up 7-00:00:00 90 alloc amr-[186,194-195,253],lac-[038,040-042,123,228,232,253,277-281,284,306,336-339,354,356-360,363-364,372,374-376,378-387,391,393-399,401-407,416-417,420,426,429-441,444-445],skl-[031-033,072,077,080-081,084,096,164,167]
general-long-bigmem up 7-00:00:00 3 comp acm-[061,064-065]
general-long-bigmem up 7-00:00:00 8 mix acm-[058,060,062-063,066-067],amr-103,vim-001
general-long-bigmem up 7-00:00:00 5 alloc skl-[143-147]
general-long-gpu up 7-00:00:00 1 drain lac-199
general-long-gpu up 7-00:00:00 16 mix lac-[030,087,137,143,288-293,344],nal-[000-001,010],nvf-020,nvl-007
general-long-gpu up 7-00:00:00 9 alloc lac-[195-198,348],nvf-[018-019],nvl-[005-006]
A lot of the nodes are busy running work for other users: we are not alone here!
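The full listing above is long. To get a quick count of how many nodes in each partition are allocated, idle, or otherwise unavailable, sinfo has a summary mode:
[netid@dev-amd20 ~]$ sinfo --summarize
The NODES(A/I/O/T) column in the summary reports the allocated, idle, other, and total node counts for each partition.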
There are also specialized machines used for managing disk storage, user authentication, and other infrastructure-related tasks. Although we do not typically log on to or interact with these machines directly, they enable a number of key features, like ensuring our user accounts and files are available throughout the HPC system.
What’s in a Node?
All of the nodes in an HPC system have the same components as your own laptop or desktop: CPUs (sometimes also called processors or cores), memory (or RAM), and disk space. CPUs are a computer’s tool for actually running programs and calculations. Information about a current task is stored in the computer’s memory. Disk refers to all storage that can be accessed like a file system. This is generally storage that can hold data permanently, i.e., data is still there even if the computer has been restarted. While this storage can be local (a hard drive installed inside the node), it is more common for nodes to connect to a shared, remote fileserver or cluster of servers.
Explore Your Computer
Try to find out the number of CPUs and amount of memory available on your personal computer.
Note that, if you’re logged in to the remote computer cluster, you need to log out first. To do so, type Ctrl+d or exit:
[netid@dev-amd20 ~]$ exit
[user@laptop ~]$
Solution
There are several ways to do this. Most operating systems have a graphical system monitor, like the Windows Task Manager. More detailed information can be found on the command line:
- Run system utilities
[user@laptop ~]$ nproc --all
[user@laptop ~]$ free -m
- Read from /proc
[user@laptop ~]$ cat /proc/cpuinfo
[user@laptop ~]$ cat /proc/meminfo
- Run system monitor
[user@laptop ~]$ htop
Explore the Login Node
Now compare the resources of your computer with those of the login node.
Solution
[user@laptop ~]$ ssh netid@hpcc.msu.edu
[netid@dev-amd20 ~]$ nproc --all
[netid@dev-amd20 ~]$ free -m
You can get more information about the processors using lscpu, and a lot of detail about the memory by reading the file /proc/meminfo:
[netid@dev-amd20 ~]$ less /proc/meminfo
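If you only want a single figure, such as the total memory, grep can pull one line out of that file:
[netid@dev-amd20 ~]$ grep MemTotal /proc/meminfo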
You can also explore the available filesystems using df to show disk free space. The -h flag renders the sizes in a human-friendly format, i.e., GB instead of B. The type flag -T shows what kind of filesystem each resource is.
[netid@dev-amd20 ~]$ df -Th
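You can also point df at a single path to report only the filesystem that holds it, for example your home directory:
[netid@dev-amd20 ~]$ df -h ~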
Different results from df
- The local filesystems (ext, tmp, xfs, zfs) will depend on whether you’re on the same login node (or compute node, later on).
- Networked filesystems (beegfs, cifs, gpfs, nfs, pvfs) will be similar – but may include netid, depending on how it is mounted.
Shared Filesystems
This is an important point to remember: files saved on one node (computer) are often available everywhere on the cluster!
Explore a Worker Node
Finally, let’s look at the resources available on the worker nodes where your jobs will actually run. Try running this command to see the name, CPUs and memory available on the worker nodes:
[netid@dev-amd20 ~]$ sinfo -n amr-252 -o "%n %c %m" | column -t
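If you would like the full hardware details for a single node, you can also ask Slurm directly; here we reuse the node name amr-252 from the command above:
[netid@dev-amd20 ~]$ scontrol show node amr-252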
Compare Your Computer, the Login Node and the Compute Node
Compare your laptop’s number of processors and memory with the numbers you see on the cluster login node and compute node. What implications do you think the differences might have on running your research work on the different systems and nodes?
Solution
Compute nodes are usually built with processors that have higher core counts than the login node or personal computers in order to support highly parallel tasks. Compute nodes usually also have substantially more memory (RAM) installed than a personal computer. More cores tend to help jobs that depend on work that is easy to perform in parallel, and more, faster memory is key for large or complex numerical tasks.
Differences Between Nodes
Many HPC clusters have a variety of nodes optimized for particular workloads. Some nodes may have a larger amount of memory, or specialized resources such as Graphics Processing Units (GPUs or “video cards”).
With all of this in mind, we will now cover how to talk to the cluster’s scheduler, and use it to start running our scripts and programs!
Key Points
An HPC system is a set of networked machines.
HPC systems typically provide login nodes and a set of compute nodes.
The resources found on independent (worker) nodes can vary in volume and type (amount of RAM, processor architecture, availability of network mounted filesystems, etc.).
Files saved on shared storage are available on all nodes.
The login node is a shared machine: be considerate of other users.