High-Performance Computing (HPC)


High-Performance Computing (HPC) is available in a number of locations throughout campus. Whether you wish to work on a department cluster, a UC centralized facility, or in the cloud, IT@UC RCS can assist you with your needs. We can also help with obtaining new clusters, co-location, and new cloud-based initiatives.

UC researchers and research assistants have access to the Ohio Supercomputer Center's (OSC) High Performance Computing clusters, software, and expertise, as well as to Amazon Web Services. IT@UC RCS facilitates access to these resources and assists researchers in using them.

Local & National Computational Resources

The advanced research computing (ARC) initiative establishes a high-performance computing (HPC) infrastructure that supports and accelerates computational and data-enabled research, scholarship and education at the University of Cincinnati.

ARC is equipped with 50 teraFLOPS of peak CPU performance and 2 NVIDIA Tesla V100 GPU nodes (224 teraFLOPS deep learning peak performance), connected with a high-performance 100 Gb/s Intel Omni-Path (OPA) interconnect, a significant step forward in both bandwidth and latency.


  • 50 teraFLOPS of peak CPU performance
    • Intel Xeon Gold 6148 (2.4 GHz, 20 cores/40 threads), 192 GB RAM per node
    • Plans to increase it to 140 teraFLOPS peak CPU performance in the next year
  • 224 teraFLOPS deep learning peak performance
    • NVIDIA Tesla V100 32G Passive GPU
    • Plans to increase it to 896 teraFLOPS deep learning peak performance in the next year
  • ZFS Storage Node – 96TB raw storage
  • Omnipath HPC Networking infrastructure
    • Maximum Omni-Path bandwidth between nodes = 100 Gbps
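As a sanity check, the headline figures above are consistent with the per-part specifications. The shell sketch below reproduces them; the node count (~16 dual-socket nodes) is our assumption, not a number stated by ARC:

```shell
# Xeon Gold 6148: AVX-512 with 2 FMA units -> 32 double-precision FLOPs/cycle/core.
# Per dual-socket node: 2 sockets x 20 cores x 2.4 GHz x 32 FLOPs/cycle.
awk 'BEGIN { printf "CPU node peak: %.2f TFLOPS\n", 2 * 20 * 2.4 * 32 / 1000 }'

# Roughly 16 such nodes (our assumption) yield the quoted ~50 TFLOPS aggregate.
awk 'BEGIN { printf "16 CPU nodes: %.1f TFLOPS\n", 16 * 3.072 }'

# Tesla V100 (PCIe): 112 tensor-core ("deep learning") TFLOPS per GPU.
awk 'BEGIN { printf "2 x V100: %.0f deep-learning TFLOPS\n", 2 * 112 }'
```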


  • OpenHPC environment
  • Warewulf cluster provisioning, with job scheduling managed by SLURM
  • Singularity containers
  • Development tools, including compilers, the OpenMP, MPI, and OpenMPI libraries for parallel code development, debuggers, and open-source AI tools
  • FLEXlm license management (being installed) so that individual researchers can easily maintain and use their licensed software
  • User login based on UC Active Directory (UC/AD), enabling user-group management and easier access
  • ARC Cluster Report
  • ARC 6-month Status Report June 2019
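For illustration, a job on the stack described above (OpenHPC environment, SLURM scheduling, Singularity containers, OpenMPI) might be submitted with a batch script along these lines. The job name, module name, container image, and program are placeholders, not values documented by ARC:

```shell
#!/bin/bash
#SBATCH --job-name=mpi_demo          # illustrative job name
#SBATCH --nodes=2                    # two of the 20C/40T Xeon Gold 6148 nodes
#SBATCH --ntasks-per-node=40
#SBATCH --time=01:00:00
#SBATCH --output=mpi_demo_%j.out

# Load an MPI toolchain from the OpenHPC module stack (module name is a placeholder).
module load openmpi

# Run an MPI program inside a Singularity container (image and binary are placeholders).
mpirun singularity exec my_app.sif ./my_mpi_program
```

Such a script would be submitted with `sbatch job.sh` and monitored with `squeue -u $USER`; consult the cluster documentation for the actual partition and module names.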

ARC pilot HPC Cluster access (CPUs, GPUs, basic storage)

For the duration of the Advanced Research Computing initiative pilot (January 2019 – July 2020), access to and use of the cluster are provided at no cost on a first-come, first-served basis. Fair-share scheduling is used to distribute resources fairly. If you have a specific need and deadline, please contact us and we will work with you to get your jobs done.
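Under SLURM's fair-share policy, heavy recent usage by an account lowers the priority of its future jobs. As a sketch (these are standard SLURM commands; their availability on the pilot cluster is our assumption), users can inspect their fair-share standing and queue position with:

```shell
# Show your fair-share factor (lower recent usage -> higher priority).
sshare -u $USER

# Break down the scheduling priority of your pending jobs, including the fair-share component.
sprio -u $USER

# List your queued and running jobs.
squeue -u $USER
```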

Faculty can immediately gain access to the HPC cluster pilot resources by filling out the ARC Computing Resources Request (you must login with your UC Credentials to access the form).

Cost: No cost

Contribute nodes to the cluster – priority access to your nodes plus access to additional shared resources

Faculty can use their HPC and research computing funding to contribute nodes to the central cluster. Priority access is given to the owner of the nodes; when not in use by the owner, the nodes are shared with others. This is a good option for faculty who periodically need full access to their nodes and can take advantage of the additional shared resources in the cluster. Using the central resource maximizes the compute capacity a faculty member can purchase, because the HPC infrastructure (networking, racks, head/management nodes, support) is provided at no cost. Contact: arc_info@uc.edu

Cost: Nodes contributed to the cluster must be consistent with current cluster hardware configurations. The ARC team can work with you to review your needs and provide an estimate for your purchase.

Any faculty member or research scientist at an academic institution in Ohio is eligible for an academic account at Ohio Supercomputer Center (OSC) to access High-Performance Computing (HPC) resources. These researchers/educators may request accounts for their students and collaborators. Commercial accounts are also available.

NOTE: All first-time PIs must submit a signature page (electronically or hard copy). Please send a copy of the signature page to the attention of the Allocations Committee, OSC, 1224 Kinnear Road, Columbus, Ohio, 43212.

Use the following links to the OSC website to request and apply for resources: 

OSC Help

The Extreme Science and Engineering Discovery Environment (XSEDE) is a single virtual system that scientists can use to interactively share computing resources, data and expertise. People around the world use these resources and services — things like supercomputers, collections of data and new tools — to improve our planet.




  • Learn about Bridges, a uniquely capable resource for empowering new research communities and bringing together HPC, AI and Big Data.

A national, distributed computing partnership for data-intensive research