Palmetto is Clemson University’s primary high-performance computing (HPC) resource, heavily used by researchers, faculty, staff, and students from a broad range of disciplines.
Currently, Palmetto comprises 2021 compute nodes (23072 CPU cores in total) and features:
- 386 nodes equipped with NVIDIA Tesla GPUs: 280 nodes with two K20 GPUs each, and 106 nodes with two K40 GPUs each
- 4 nodes with Intel Xeon Phi co-processors (2 per node)
- 6 large memory nodes (5 with 505GB, 1 with 2TB), 262 nodes with 128GB of memory
- 100GB of personal space (backed up daily for 42 days) for each user
- “unlimited” scratch storage for temporary files
- 10 Gbps Ethernet, 10 Gbps Myrinet and 56Gbps Infiniband networks
- maximum run time for a single task limited to 72 hours (Infiniband nodes) or 168 hours (Myrinet nodes)
- ranked 155th overall on the Top500 list, and 4th among public academic institutions in the US
- benchmarked at 814.4 TFlops (using 17,372 cores from the Infiniband portion of Palmetto)
The cluster is divided into several “phases”; the basic hardware configuration (node count, cores, RAM) of each is given below. For more detailed and up-to-date information, you can view the file `/etc/hardware-table` after logging in.
About 1400 nodes (Phases 0-6) of the cluster consist of older hardware with a 10 Gbps Myrinet interconnect.
Phases 7-15 of the cluster consist of newer hardware with a 56 Gbps Infiniband interconnect; phases 9-15 additionally have 10 Gbps Ethernet.
There are about 380 nodes (phases 7a-8b, 9-11a, and 12-15) on Palmetto equipped with NVIDIA Tesla GPUs (M2075, K20m, M2070Q, and K40m).
Four nodes (phase 11b) are equipped with Intel Xeon Phi co-processors (2 per node).
Phase 0 consists of 6 “bigmem” machines with large core counts and RAM (505 GB on five nodes, 2 TB on one).
Researchers using Palmetto have several options for storing data, on both a temporary and a permanent basis:
| Capacity | Description |
|----------|-------------|
| 100 GB per user | Backed up nightly; permanent storage accessible from all nodes |
| 233 TB shared by all users | Not backed up; temporary work space accessible from all nodes (OrangeFS parallel file system) |
| 160 TB shared by all users | Not backed up; temporary work space accessible from all nodes (XFS) |
| 129 TB shared by all users | Not backed up; temporary work space accessible from all nodes (ZFS) |
| Varies by node (99 GB-800 GB) | Node-local temporary work space, accessible only for the lifetime of the job |
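Because node-local scratch exists only for the lifetime of a job, a common pattern is to stage work there and copy results back to permanent storage before the job ends. A minimal sketch, assuming PBS exports `TMPDIR` pointing at the node-local scratch directory (the directory name below is illustrative):

```shell
# Sketch: use node-local scratch inside a job script.
# Assumption: PBS sets TMPDIR to node-local scratch; /tmp is a fallback
# used here only for illustration.
scratch="${TMPDIR:-/tmp}"
workdir="$scratch/myjob.$$"    # per-job working directory (name is illustrative)
mkdir -p "$workdir"
echo "working in $workdir"
# ... run the computation in $workdir, then copy results back to
# permanent (home) storage before the job ends ...
rm -rf "$workdir"
```

Cleaning up at the end matters because the space is shared with other jobs that later land on the same node.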
Additional high-performance and backed-up storage may be purchased for group usage. Please visit http://citi.clemson.edu/infrastructure for details and pricing.
The Palmetto cluster uses the Portable Batch System (PBS) to schedule jobs.
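As a sketch, a simple PBS job script might look like the following; the resource requests, module name, and program are illustrative assumptions, not Palmetto-specific defaults:

```shell
#!/bin/bash
#PBS -N example_job                              # job name (illustrative)
#PBS -l select=1:ncpus=8:mem=16gb,walltime=02:00:00   # within the 72-hour limit
#PBS -j oe                                       # merge stdout and stderr

cd "$PBS_O_WORKDIR"      # PBS sets this to the directory qsub was run from
module load anaconda3    # module name is an assumption; check `module avail`
python my_script.py      # hypothetical program
```

Such a script would typically be submitted with `qsub job.pbs` and monitored with `qstat -u $USER`.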
The Palmetto cluster operates on a condominium model, which allows faculty to purchase immediate access to compute nodes on the cluster. More information can be found in the Owner’s Guide.