Palmetto is Clemson University's primary high-performance computing (HPC) resource. It is heavily used by researchers, students, faculty, and staff from a broad range of disciplines.
Currently, Palmetto comprises 2115 compute nodes (totalling 32600 CPU cores) and features:
- 639 nodes equipped with 2x NVIDIA Tesla GPUs (1278 GPUs in the cluster in total); 34 of these nodes each have 2x NVIDIA A100 GPUs
- 3 nodes with Intel Phi co-processors (2 per node)
- 15 large-memory nodes (with 0.75 TB - 1.5 TB of memory); in addition, 604 nodes have at least 128 GB of RAM
- 100 GB of personal space (backed up daily for 42 days) for each user
- 2.2 PB of scratch storage space for computation, plus a burst buffer
- 10 and 25 Gbps Ethernet, and 56 and 100 Gbps InfiniBand networks
- benchmarked at 1.4 PFlops (using the 44,016 cores in the InfiniBand portion of Palmetto)
- the cluster is 100% battery-backed
The cluster is divided into several "phases";
the basic hardware configuration (node count, cores, RAM)
is given below. For more detailed and up-to-date information,
you can view the file
/etc/hardware-table after logging in.
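Once logged in, the hardware table can be inspected directly from the shell; for example (the exact column layout of /etc/hardware-table may differ from what these commands assume):

```shell
# Show the full hardware summary for all phases
cat /etc/hardware-table

# Show only the lines mentioning a particular phase, e.g. phase 18
grep -i "phase 18" /etc/hardware-table
```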
Phases 0 through 6 of the cluster consist of older hardware with 10 Gbps Ethernet interconnect. Maximum run time for a single task is limited to 168 hours.
Phases 7-27 of the cluster consist of newer hardware with 56 Gbps InfiniBand interconnect (Phases 7-17) and 100 Gbps InfiniBand interconnect (Phases 18-27). Maximum run time for a single task is limited to 72 hours.
There are 639 nodes (phases 8a-11a, 12-19a, 20, 27) on Palmetto equipped with NVIDIA Tesla GPUs (K20m, K40m, P100, V100, A100).
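A GPU node is requested through the scheduler by naming GPU resources in the job's resource selection. The sketch below is an assumption about the resource names (`ngpus`, `gpu_model`) and amounts; consult the cluster documentation for the exact spelling in use:

```shell
# Request an interactive job on one node with 2 A100 GPUs.
# The resource names ngpus and gpu_model, and the amounts shown,
# are illustrative assumptions -- verify them against the cluster docs.
qsub -I -l select=1:ncpus=16:mem=60gb:ngpus=2:gpu_model=a100,walltime=2:00:00
```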
Intel Xeon Phi accelerators
3 nodes (phase 11b) are equipped with Intel Phi co-processors (2 per node).
Phase 0 consists of 6 "bigmem" machines with high core counts and large amounts of RAM (750 GB to 1.5 TB).
Various options for storing data (on a temporary or permanent basis) are available to researchers using Palmetto:
| Capacity | Description |
| --- | --- |
| 100 GB per user | Backed up nightly; permanent storage space accessible from all nodes |
| 2 PB shared by all users | Not backed up; temporary work space accessible from all nodes; BeeGFS parallel file system |
| 190 TB shared by all users | Not backed up; temporary work space accessible from all nodes; ZFS |
| Varies between nodes (99 GB - 800 GB) | Per-node temporary work space, accessible only for the lifetime of a job |
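The per-node temporary space is usually reached inside a job through an environment variable; the sketch below assumes the conventional PBS variable `TMPDIR` and uses placeholder file and program names (`input.dat`, `my_solver`):

```shell
#PBS -l select=1:ncpus=8:mem=32gb,walltime=01:00:00

# Stage input into the node-local scratch space for fast I/O.
# $TMPDIR exists only for the lifetime of the job, so results must be
# copied back to permanent storage before the job exits.
cp ~/myproject/input.dat "$TMPDIR/"
cd "$TMPDIR"
./my_solver input.dat > output.dat   # my_solver is a placeholder program
cp output.dat ~/myproject/results/
```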
Additional high-performance and backed-up storage may be purchased for group usage. Please visit the Owner's Guide.
The Palmetto cluster uses the Portable Batch System (PBS) to schedule jobs.
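As a minimal sketch, a PBS batch job might look like the following; the job name, resource amounts, and walltime are illustrative (check the phase-specific run-time limits above):

```shell
# job.pbs -- minimal PBS batch script (illustrative values)
#PBS -N example_job
#PBS -l select=1:ncpus=4:mem=15gb
#PBS -l walltime=02:00:00

cd "$PBS_O_WORKDIR"        # start in the directory the job was submitted from
echo "Running on $(hostname)"
```

Submit the script with `qsub job.pbs` and monitor it with `qstat -u $USER`.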
The Palmetto cluster operates on a condominium model, which allows faculty to purchase immediate access to compute nodes on the cluster. More information can be found in the Owner's Guide.
Acknowledging Palmetto Cluster
We would appreciate it if all publications that include results generated using the Palmetto cluster included a short statement in the Acknowledgment section. As an example, the acknowledgment may look like this:
Clemson University is acknowledged for its generous allotment of compute time on the Palmetto cluster.