Palmetto is Clemson University’s primary high-performance computing (HPC) resource, heavily used by researchers, students, faculty, and staff from a broad range of disciplines.
Palmetto currently features:
- 2079 compute nodes, totalling 28832 cores
- 595 nodes equipped with NVIDIA Tesla GPUs (1194 GPUs in the cluster in total); of these, 103 nodes each have 2x NVIDIA Tesla V100 GPUs
- 4 nodes with Intel Xeon Phi co-processors (2 per node)
- 17 large-memory nodes (with 0.5 TB - 2.0 TB of memory); in addition, 480 nodes have at least 128 GB of RAM
- 100 GB of personal space (backed up daily for 42 days) for each user
- 726 TB of scratch storage space for computing, plus a burst buffer
- 10 and 25 Gbps Ethernet, and 56 and 100 Gbps Infiniband networks
- ranked 9th among US public academic institutions (and 392nd overall among supercomputers worldwide) on the Top500 list, as of November 2019
- benchmarked at 1.4 PFlops (using 44,016 cores from the Infiniband portion of Palmetto)
- the cluster is 100% battery-backed
The cluster is divided into several “phases”; the basic hardware configuration of each phase (node count, cores, RAM) is given below. For more detailed and up-to-date information, you can view the file /etc/hardware-table after logging in.
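As a quick sketch, the hardware table is a plain-text file on the cluster’s nodes, so standard shell tools can inspect it (the file exists only on Palmetto itself, which the snippet below checks for):

```shell
# Show Palmetto's per-phase hardware summary when available;
# /etc/hardware-table exists only on the cluster's own nodes.
if [ -r /etc/hardware-table ]; then
    cat /etc/hardware-table
else
    echo "Not on Palmetto: /etc/hardware-table not found"
fi
```

Because the file is plain text, it can also be filtered with tools such as `grep` to look up a particular phase or GPU model.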
Phases 0 through 6 of the cluster consist of older hardware with 10 Gbps Ethernet interconnect. Maximum run time for a single task is limited to 168 hours.
Phases 7-19 of the cluster consist of newer hardware with 56 Gbps Infiniband interconnect (Phases 7-17) and 100 Gbps Infiniband interconnect (Phases 18-19). Maximum run time for a single task is limited to 72 hours.
There are 595 nodes (phases 7a-8b, 9-11a, 12-19a) on Palmetto equipped with NVIDIA Tesla GPUs (M2075, K20m, M2070q, K40m, P100, V100).
Intel Xeon Phi accelerators
4 nodes (phase 11b) are equipped with Intel Phi co-processors (2 per node).
Phase 0 consists of 6 “bigmem” machines with large core counts and large amounts of RAM (505 GB to 2 TB).
Various options for storing data (on a temporary or permanent basis) are available to researchers using Palmetto:
| Capacity | Description |
|---|---|
| 100 GB per user | Backed up nightly; permanent storage space accessible from all nodes |
| 233 TB shared by all users | Not backed up; temporary work space accessible from all nodes (BeeGFS parallel file system) |
| 160 TB shared by all users | Not backed up; temporary work space accessible from all nodes (XFS) |
| 129 TB shared by all users | Not backed up; temporary work space accessible from all nodes (ZFS) |
| 174 TB shared by all users | Not backed up; temporary work space accessible from all nodes (BeeGFS parallel file system) |
| Varies between nodes (99 GB to 800 GB) | Per-node temporary work space, accessible only for the lifetime of a job |
Additional high-performance and backed-up storage may be purchased for group usage. Please visit http://citi.clemson.edu/infrastructure for details and pricing.
The Palmetto cluster uses the Portable Batch System (PBS) to schedule jobs.
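As a minimal sketch of how PBS is used, a batch script might look like the following; the job name, resource values, and walltime here are illustrative placeholders, not recommended settings:

```shell
#!/bin/bash
#PBS -N example_job                 # job name (illustrative)
#PBS -l select=1:ncpus=4:mem=8gb    # 1 chunk: 4 cores, 8 GB RAM (illustrative values)
#PBS -l walltime=01:00:00           # 1 hour; must fit within the phase's run-time limit
#PBS -j oe                          # merge stdout and stderr into one output file

cd "${PBS_O_WORKDIR:-.}"            # PBS sets this to the directory qsub was run from
echo "Job running on $(hostname)"
```

Such a script would typically be submitted with `qsub` (e.g. `qsub example.pbs`) and its status checked with `qstat -u $USER`.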
The Palmetto cluster operates on a condominium model, which allows faculty to purchase immediate access to compute nodes on the cluster. More information can be found in the Owner’s Guide.