Opportunity to Join New HPC Cluster

Orders were due May 9 and are now closed.

Shared Research Computing Facility

Columbia's centrally managed High Performance Computing (HPC) resources on the Morningside campus are housed in the Shared Research Computing Facility (SRCF), a dedicated portion of the university data center. A project to upgrade the data center's electrical infrastructure was completed in Summer 2013*.

In addition, ongoing green data center initiatives**--involving energy-use measurement and monitoring, server consolidation, and the purchase of high-density computing equipment--focus on maximizing computing capacity (i.e., the number of computer operations and the amount of data storage) per watt, thereby increasing energy efficiency. These efforts will help Columbia meet its local and national commitments to reduce the university's carbon footprint.

The SRCF is governed by the Shared Research Computing Policy Advisory Committee (SRCPAC). Research Computing Services (RCS), working with many other groups, coordinates the Columbia University Information Technology (CUIT) HPC Cluster service.

*The Shared Research Computing Facility project is supported by NIH Research Facility Improvement Grant 1G20RR030893-01, and matching funds from the New York State Empire State Development, Division of Science Technology and Innovation (NYSTAR) Contract C090171, both awarded April 15, 2010.

**Green data center initiatives are supported by the New York State Energy Research and Development Authority (NYSERDA) Cost-Sharing Agreement No. ST11145-1, Columbia University Advanced Concepts Data Center, awarded April 1, 2009 in response to NYSERDA Program Opportunity Notice (PON) 1206, Data Center and Server Efficiency.

Yeti Shared HPC Cluster

Yeti is a joint purchase and partnership among 24 research groups/departments, CUIT, and the Office of the Executive Vice President for Research, facilitated by SRCPAC, and is also supported in part by Arts & Sciences, the Fu Foundation School of Engineering and Applied Science, and New York State.***

Researchers who are not members of the 24 purchasing research groups/departments can access Yeti through a fee-based renter service or through a free tier with lower priority.

Yeti consists of:

  • 2672 cores, 167 nodes (16 cores per node):
    • 61 HP SL230 Gen8 nodes with Dual Intel E5-2650v2 Processors (2.6 GHz):
      • 10 standard memory nodes (64 GB)
      • 3 high memory nodes (256 GB)
      • 48 FDR Infiniband nodes (64 GB)
    • 5 HP SL250 Gen8 nodes (64 GB) with Dual Intel E5-2650v2 Processors (2.6 GHz) and NVIDIA K40 GPU (2 per node) supplying 28,800 GPU cores
    • 97 HP SL230 Gen8 nodes with Dual Intel E5-2650L Processors (1.8 GHz):
      • 38 standard memory nodes (64 GB)
      • 8 medium memory nodes (128 GB)
      • 35 high memory nodes (256 GB)
      • 16 FDR Infiniband nodes (64 GB)
    • 4 HP SL250 Gen8 nodes (64 GB) with Dual Intel E5-2650L Processors (1.8 GHz) and NVIDIA K20 GPU (2 per node) supplying ~20,000 GPU cores
  • 160 TB NetApp FAS6220 scratch storage
  • Red Hat Enterprise Linux (RHEL6) with the Torque/Moab resource manager/scheduler (see the example sketch below)
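
Because Yeti runs the Torque/Moab resource manager and scheduler and includes FDR Infiniband nodes (a low-latency interconnect commonly used for MPI workloads), work is normally compiled and submitted as batch jobs rather than run interactively. The sketch below is a minimal MPI "hello world" in C++; it is illustrative only, and any compiler wrapper, module names, or queue settings on Yeti are assumptions that depend on the installed software stack.

    // Minimal MPI example (illustrative sketch; not specific to Yeti).
    // Each process reports its rank and the node it is running on.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);               // start the MPI runtime

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); // this process's ID
        MPI_Comm_size(MPI_COMM_WORLD, &size); // total number of processes

        char node[MPI_MAX_PROCESSOR_NAME];
        int len = 0;
        MPI_Get_processor_name(node, &len);   // hostname of the node

        std::printf("Rank %d of %d on node %s\n", rank, size, node);

        MPI_Finalize();                       // shut down the MPI runtime
        return 0;
    }

On a Torque/Moab system, a program like this would typically be built with an MPI compiler wrapper (for example, mpicxx) and launched through a batch script submitted with qsub, requesting the desired number of nodes and cores per node; consult RCS documentation for the cluster's actual queue names and module commands.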

***The new shared cluster includes support from New York State Empire State Development, Division of Science Technology and Innovation (NYSTAR) Contract C090171, awarded April 15, 2010.

Hotfoot Shared HPC Cluster

Hotfoot, now retired, was launched in 2009 as a partnership among the departments of Astronomy & Astrophysics, Statistics, and Economics, plus other groups represented in the Social Science Computing Committee (SSCC); the Stockwell Laboratory; CUIT; the Office of the Executive Vice President for Research; and Arts & Sciences.

Columbia faculty, research staff and students used Hotfoot to pursue research in diverse areas.

In later years the cluster ran the Torque/Moab resource manager/scheduler software and consisted of 32 nodes providing 384 cores for running jobs. The system also included a 72 TB array of scratch storage, shared between the departments and used by researchers for the temporary storage of input data sets and job results.

Researchers used Hotfoot to submit data analysis jobs in applications and languages including Matlab, R, Java, and C++. The system also supported the Message Passing Interface (MPI) for parallel programming.

XSEDE HPC Access

All Columbia faculty members and postdoctoral researchers who are eligible principal investigators (PIs) can contact RCS to inquire about joining our XSEDE national HPC test allocation as a first step to obtaining their own allocation. See http://www.xsede.org/ for more information.