Shared Research Computing Facility

Columbia's centrally managed High Performance Computing (HPC) resources on the Morningside campus are housed in the Shared Research Computing Facility (SRCF), a dedicated portion of the university data center. A project to upgrade the data center's electrical infrastructure was completed in summer 2013*.

In addition, ongoing green data center initiatives** (involving energy-use measurement and monitoring, server consolidation, and the purchase of high-density computing equipment) focus on maximizing computing capacity (i.e., the number of computer operations and the amount of data stored) per watt, thereby increasing energy efficiency. These efforts will help Columbia meet its local and national commitments to reduce the university's carbon footprint.

The SRCF is governed by the Shared Research Computing Policy Advisory Committee (SRCPAC). Research Computing Services (RCS), working with many other groups, coordinates the Columbia University Information Technology (CUIT) HPC Cluster service.

*The Shared Research Computing Facility project is supported by NIH Research Facility Improvement Grant 1G20RR030893-01 and by matching funds from New York State Empire State Development, Division of Science, Technology and Innovation (NYSTAR) Contract C090171, both awarded April 15, 2010.

**Green data center initiatives are supported by the New York State Energy Research and Development Authority (NYSERDA) Cost-Sharing Agreement No. ST11145-1, Columbia University Advanced Concepts Data Center, awarded April 1, 2009, in response to NYSERDA Program Opportunity Notice (PON) 1206, Data Center and Server Efficiency.

Yeti Shared HPC Cluster

Yeti is a joint purchase and partnership among ten research groups/departments, CUIT, and the Office of the Executive Vice President for Research, facilitated by SRCPAC. It is also supported in part by Arts & Sciences, the Fu Foundation School of Engineering and Applied Science, and New York State.***

Individuals who do not have access to Yeti through the ten purchasing research groups/departments will be accommodated in two ways: a fee-based HPC renter service, which provides an individual with access for one year and which we expect to be available by the end of 2013, and a free, lower-priority tier of service that will become available after the renter service launches.

Yeti consists of:

  • 1,616 cores across 101 nodes (16 cores per node):
    • 97 HP SL230 Gen8 nodes with dual Intel E5-2650L processors (1.8 GHz):
      • 38 standard memory nodes (64 GB)
      • 8 medium memory nodes (128 GB)
      • 35 high memory nodes (256 GB)
      • 16 FDR Infiniband nodes (64 GB)
    • 4 HP SL250 Gen8 nodes (64 GB) with dual Intel E5-2650L processors (1.8 GHz) and NVIDIA K20 GPUs (2 per node), supplying ~20,000 GPU cores in total
  • 100 TB NetApp FAS6220 scratch storage
  • Red Hat Enterprise Linux 6 (RHEL 6) with the Torque/Moab resource manager and scheduler (see the sample job script below)
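
Because Yeti uses Torque/Moab, work is submitted to the cluster as batch job scripts. The following is a minimal sketch of such a script; the resource requests, file names, and executable are illustrative assumptions, and actual queue settings and limits should be confirmed with RCS.

    #!/bin/sh
    # Minimal Torque/PBS batch script (illustrative sketch; confirm
    # site-specific settings such as queues and limits with RCS).
    #PBS -N example_job          # job name
    #PBS -l nodes=1:ppn=16       # one node, all 16 cores on it
    #PBS -l walltime=01:00:00    # one-hour wall-clock limit
    #PBS -l mem=4gb              # total memory request
    #PBS -V                      # export the current environment to the job

    cd $PBS_O_WORKDIR            # start in the directory qsub was run from
    ./my_analysis input.dat      # hypothetical executable and input file

The script is saved to a file and submitted with qsub (e.g., "qsub example_job.sh"); job status can then be checked with "qstat -u $USER".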

***The new shared cluster includes support from New York State Empire State Development, Division of Science, Technology and Innovation (NYSTAR) Contract C090171, awarded April 15, 2010.

Hotfoot Shared HPC Cluster

Hotfoot, launched in 2009, is a partnership among the departments of Astronomy & Astrophysics, Statistics, and Economics, plus other groups represented in the Social Science Computing Committee (SSCC); the Stockwell Laboratory; CUIT; the Office of the Executive Vice President for Research; and Arts & Sciences.

Columbia faculty, research staff and students have used Hotfoot to pursue research in diverse areas.

The cluster runs the Torque/Moab resource manager and scheduler software and consists of 62 nodes providing 616 cores for running jobs. The system also includes a 72 TB array of scratch storage, shared among the partner groups and used by researchers for the temporary storage of their input data sets and job results, as sketched below.
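
Because scratch space is intended for temporary data, a common pattern is to stage input into scratch at the start of a job, run against it, and copy results back afterward. A minimal sketch of that pattern follows; the scratch mount point and file names are assumptions, and the actual path should be taken from the cluster documentation.

    # Illustrative scratch-staging fragment for a Torque job script.
    # The /scratch path is an assumed example, not the actual mount point.
    SCRATCH=/scratch/$USER/$PBS_JOBID       # per-job temporary directory
    mkdir -p "$SCRATCH"
    cp input.dat "$SCRATCH"                 # stage the input data set
    cd "$SCRATCH"
    "$PBS_O_WORKDIR"/my_analysis input.dat > results.out  # hypothetical executable
    cp results.out "$PBS_O_WORKDIR"         # copy results back for keeping
    rm -rf "$SCRATCH"                       # clean up the temporary files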

Researchers currently using Hotfoot submit data analysis jobs written in applications and languages including MATLAB, R, Java, and C++. The system also supports the Message Passing Interface (MPI) for parallel programming.
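
As an illustration of that parallel-programming support, the following is a minimal MPI program in C++ (using MPI's C API); the program and the process count used to run it are generic examples rather than anything specific to Hotfoot.

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);               // start the MPI runtime

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); // this process's id
        MPI_Comm_size(MPI_COMM_WORLD, &size); // total number of processes

        std::printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                       // shut the runtime down
        return 0;
    }

Such a program is typically compiled with an MPI wrapper such as "mpicxx hello_mpi.cpp -o hello_mpi" and launched under the scheduler with a command like "mpirun -np 16 ./hello_mpi".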

XSEDE HPC Access

All Columbia faculty members and postdoctoral researchers who are eligible principal investigators (PIs) can contact RCS to inquire about joining our XSEDE national HPC allocation. After joining, PIs can request that Columbia graduate or undergraduate students also be given accounts under the allocation. See http://www.xsede.org/ for more information.