Bebop

Computing system

Quick Facts

  • 672 public nodes (originally 1,024 before the KNL nodes were retired)
  • 128 GB of DDR4 memory on each node
  • 36 cores per node (Intel Broadwell)
  • Omni-Path Fabric Interconnect

Available Queues

Bebop has several queues defined. Use the -q option with qsub to select a queue. The default queue is bdwall.

Queue Name:         bdwall
Description:        All Broadwell Nodes
Number of Nodes:    672
CPU Type:           Intel Xeon E5-2695v4
Cores Per Node:     36
Memory Per Node:    128 GB DDR4
Local Scratch Disk: 15 GB (RAM disk) or 4 TB (select nodes)
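
For example, submitting a job script to the bdwall queue with PBS Pro could look like the following; the script name is a placeholder:

    # Submit a job script to the default bdwall queue (myjob.sh is a placeholder)
    qsub -q bdwall myjob.sh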

File Storage

Most Bebop nodes have no local physical disks, so the operating system on every node runs in a diskless environment. Users who want local scratch space can use a scratch area backed by node memory (15 GB, mounted at /scratch). A subset of the Broadwell nodes instead provide a 4 TB /scratch. The memory-backed scratch space is essentially a RAM disk, so any data written to it consumes node memory; take this into account if you are running a large job that also requires a substantial amount of memory.
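
As an illustrative sketch, a job can check how much space the node-local /scratch actually provides before writing to it; the per-user subdirectory below is just a convention, not something documented here:

    # Check whether this node has the 15 GB RAM disk or the 4 TB local /scratch
    df -h /scratch
    # Create a private working area and point temporary files at it (convention only)
    mkdir -p /scratch/$USER
    export TMPDIR=/scratch/$USER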

Please see our detailed description of the file storage used in LCRC here.

Architecture

Bebop runs on Intel Broadwell processors, which support the AVX and AVX2 instruction sets.
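
For example, when compiling your own code you can ask the compiler to target Broadwell so it emits AVX/AVX2 instructions; the compilers shown and the source file name are illustrative:

    # GCC: target the Broadwell microarchitecture (enables AVX and AVX2)
    gcc -O2 -march=broadwell -o myapp myapp.c
    # Intel classic compiler: request CORE-AVX2 code generation
    icc -O2 -xCORE-AVX2 -o myapp myapp.c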

Bebop uses an Intel Omni-Path interconnect for its network. This matters for MPI programs that would otherwise use an InfiniBand library for communication. Omni-Path has its own communication interface, PSM2, which works only with Omni-Path hardware and delivers higher performance than ibverbs or PSM. This means you should recompile your code against one of the MPI implementations on Bebop that supports PSM2.
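
A minimal sketch of that workflow follows; the module name is a placeholder (check module avail for the PSM2-enabled MPI builds actually installed on Bebop), and the source file and rank count are illustrative:

    # Load a PSM2-enabled MPI build (module name is a placeholder)
    module load openmpi
    # Recompile the application against that MPI
    mpicc -O2 -o myapp_mpi myapp_mpi.c
    # Launch it; 72 ranks would span two 36-core Broadwell nodes
    mpirun -n 72 ./myapp_mpi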

Running Jobs on Bebop

For detailed information on how to run jobs on Bebop, see our documentation: Running Jobs on Bebop.

With an eye towards future alignment with the ALCF, LCRC has adopted PBS Pro for the Bebop cluster.
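
As a minimal sketch under PBS Pro (not taken from the linked documentation), a batch script for Bebop might look like this; the project name, resource request, and application are placeholders:

    #!/bin/bash
    #PBS -q bdwall
    #PBS -A my_project              # placeholder project/allocation name
    #PBS -l select=2:ncpus=36       # two Broadwell nodes, 36 cores each
    #PBS -l walltime=01:00:00
    cd $PBS_O_WORKDIR               # run from the directory the job was submitted from
    mpirun -n 72 ./myapp_mpi        # placeholder application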