HPC Expansion Rounds

The next expansion round has not yet been scheduled but is expected to occur sometime in 2017. Please contact Research Computing Services (RCS) if you would like to be notified when more information about the round becomes available.

Information about previous rounds can be found on the HPC Expansion page.

Shared Research Computing Policy Advisory Committee

The Shared Research Computing Policy Advisory Committee (SRCPAC) oversees the operation of existing HPC clusters through faculty-led subcommittees. SRCPAC also governs the Shared Research Computing Facility (see below) and makes policy recommendations for shared research computing at the University.

Shared Research Computing Facility

Columbia's centrally-managed High Performance Computing (HPC) resources on the Morningside campus are housed in the Shared Research Computing Facility (SRCF), which consists of a dedicated portion of the university data center. A project to upgrade the electrical infrastructure of the data center was completed in Summer 2013*.

In addition, ongoing green data center initiatives** (energy-use measurement and monitoring, server consolidation, and the purchase of high-density computing equipment) focus on maximizing computing capacity (i.e., the number of computer operations and the amount of data storage) per watt, thereby increasing energy efficiency. These efforts will help Columbia meet its local and national commitments to reduce the University's carbon footprint.

The SRCF is governed by the Shared Research Computing Policy Advisory Committee (SRCPAC). Research Computing Services (RCS), working with many other groups, coordinates Columbia University Information Technology (CUIT)'s HPC Cluster service.

*The Shared Research Computing Facility project is supported by NIH Research Facility Improvement Grant 1G20RR030893-01, and matching funds from the New York State Empire State Development, Division of Science Technology and Innovation (NYSTAR) Contract C090171, both awarded April 15, 2010.

**Green data center initiatives are supported by the New York State Energy Research and Development Authority (NYSERDA) Cost-Sharing Agreement No. ST11145-1, Columbia University Advanced Concepts Data Center, awarded April 1, 2009 in response to NYSERDA Program Opportunity Notice (PON) 1206, Data Center and Server Efficiency.

Habanero Shared HPC Cluster

Habanero, launched in November 2016, is the most recent cluster to enter production. It was purchased jointly by 32 research groups/departments and CUIT, with facilitation by SRCPAC and the Office of the Executive Vice President for Research. The cluster also includes an education tier, jointly funded by Arts & Sciences and the Fu Foundation School of Engineering and Applied Science, to support computational research classes and other training efforts.

Habanero consists of:

222 nodes with a total of 5328 cores (24 cores per node):

  • 208 HP ProLiant XL170r Gen9 nodes with dual Intel E5-2650v4 Processors (2.2 GHz):
    • 176 standard memory nodes (128 GB)
    • 32 high memory nodes (512 GB)
  • 14 HP DL380 Gen9 nodes with dual Intel E5-2650v4 Processors (2.2 GHz) and NVIDIA K80 GPU (2 per node) supplying ~140,000 GPU cores
  • 407 TB DDN GS7K GPFS storage
  • EDR Infiniband (FDR to storage)
  • Red Hat Enterprise Linux 7
  • Slurm job scheduler (see the example MPI sketch below)
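
To make the list above more concrete, the sketch below shows a minimal MPI "hello world" in C of the kind typically run across Habanero's 24-core nodes under Slurm. It is only an illustration: the compile and launch hints in the comments assume a generic MPI toolchain (mpicc) and generic Slurm options, and the actual module names and recommended job settings on Habanero may differ, so please consult RCS documentation.

/*
 * hello_mpi.c -- minimal MPI sketch (assumptions: an MPI library and the
 * mpicc compiler wrapper are available on the cluster; exact module names
 * and site-specific settings vary).
 *
 * Compile:  mpicc hello_mpi.c -o hello_mpi
 * Launch under Slurm, e.g. across two 24-core nodes:
 *           srun -N 2 --ntasks-per-node=24 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* start the MPI runtime             */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank (0..size-1)   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of MPI processes     */
    MPI_Get_processor_name(name, &name_len); /* hostname of the node we landed on */

    printf("Rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();                          /* shut down the MPI runtime         */
    return 0;
}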

Yeti Shared HPC Cluster

Yeti is a joint purchase and partnership among 24 research groups/departments, CUIT, and the Office of the Executive Vice President for Research, facilitated by SRCPAC and supported in part by Arts & Sciences, the Fu Foundation School of Engineering and Applied Science, and New York State.***

Researchers who are not part of the 24 purchasing research groups/departments can access Yeti through a fee-based renter service or through a free, lower-priority tier.

Yeti consists of:

167 nodes with a total of 2672 cores (16 cores per node):

  • 61 HP SL230 Gen8 nodes with dual Intel E5-2650v2 Processors (2.6 GHz):
    • 10 standard memory nodes (64 GB)
    • 3 high memory nodes (256 GB)
    • 48 FDR Infiniband nodes (64 GB)
  • 5 HP SL250 Gen8 nodes (64 GB) with dual Intel E5-2650v2 Processors (2.6 GHz) and NVIDIA K40 GPU (2 per node) supplying 28,800 GPU cores
  • 97 HP SL230 Gen8 nodes with dual Intel E5-2650L Processors (1.8 GHz):
    • 38 standard memory nodes (64 GB)
    • 8 medium memory nodes (128 GB)
    • 35 high memory nodes (256 GB)
    • 16 FDR Infiniband nodes (64 GB)
  • 4 HP SL250 Gen8 nodes (64 GB) with dual Intel E5-2650L Processors (1.8 GHz) and NVIDIA K20 GPU (2 per node) supplying ~20,000 GPU cores
  • 160 TB NetApp FAS6220 scratch storage
  • Red Hat Enterprise Linux 6
  • Torque/Moab job scheduler (see the example OpenMP sketch below)

***The new shared cluster includes support from New York State Empire State Development, Division of Science Technology and Innovation (NYSTAR) Contract C090171, awarded April 15, 2010.
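
For within-node parallelism on Yeti's 16-core nodes, the sketch below is a minimal OpenMP example in C. Again, this is only an illustration: it assumes a compiler with OpenMP support (such as gcc) and that batch jobs are submitted to Torque/Moab with qsub; the exact modules, queues, and resource requests to use on Yeti should be taken from RCS documentation.

/*
 * hello_omp.c -- minimal OpenMP sketch (assumption: a compiler with OpenMP
 * support, such as gcc, is available; module names on the cluster may differ).
 *
 * Compile:  gcc -fopenmp hello_omp.c -o hello_omp
 * Run:      OMP_NUM_THREADS=16 ./hello_omp
 *           (typically inside a batch job submitted to Torque/Moab with qsub)
 */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    /* Spawn a team of threads; with OMP_NUM_THREADS=16, one thread per core
       would be used on a 16-core node. */
    #pragma omp parallel
    {
        printf("Thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}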

Hotfoot Shared HPC Cluster

Hotfoot, now retired, was launched in 2009 as a partnership among the departments of Astronomy & Astrophysics, Statistics, and Economics, plus other groups represented in the Social Science Computing Committee (SSCC); the Stockwell Laboratory; CUIT; the Office of the Executive Vice President for Research; and Arts & Sciences.

Columbia faculty, research staff and students used Hotfoot to pursue research in diverse areas.

In later years the cluster ran the Torque/Moab resource manager and scheduler and consisted of 32 nodes providing 384 cores for running jobs. The system also included a 72 TB array of scratch storage.

XSEDE HPC Access

All Columbia faculty members and postdoctoral researchers who are eligible principal investigators (PIs) can contact RCS to inquire about joining our XSEDE national HPC test allocation as a first step toward obtaining their own allocations. See http://www.xsede.org/ for more information.