This workshop introduces the research computing ecosystem at Princeton: the computing clusters (Nobel, Adroit, Della, Stellar, Tiger, and Traverse), the storage systems available, and the data visualization machine (Tigressdata). After an overview of the different systems and the sorts of tasks each is geared toward, the course gives users a hands-on introduction to technical topics including: how to connect to the clusters; how to manage file storage; how to access or install additional software; and how to launch jobs through our scheduling software (SLURM).
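As a small preview of the software-environment material, software on the clusters is generally made available through environment modules. The session below is only a sketch; the module name and version shown are illustrative assumptions, not a statement of what is actually installed:

    # list the software modules available on the cluster
    module avail
    # load a module into your environment (name/version shown is illustrative)
    module load anaconda3/2023.9
    # confirm which modules are currently loaded
    module list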
Participants will also learn the basic civics of working on Princeton’s shared systems, including guidance on requesting the right amount of memory and computing power (such as choosing between CPUs and GPUs) and rules of thumb for avoiding delays or interruptions in computing jobs.
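As a preview of the scheduler material, a minimal SLURM batch script might look like the sketch below. The job name, resource values, and script name are illustrative assumptions, not recommendations for any particular workload:

    #!/bin/bash
    #SBATCH --job-name=demo          # illustrative job name
    #SBATCH --nodes=1                # run on a single node
    #SBATCH --ntasks=1               # one task
    #SBATCH --cpus-per-task=4        # CPU cores for that task
    #SBATCH --mem=8G                 # memory per node
    #SBATCH --time=00:30:00          # walltime limit (HH:MM:SS)
    ##SBATCH --gres=gpu:1            # uncomment to request one GPU instead of CPU-only

    srun python my_script.py         # my_script.py is a placeholder program

The script would then be submitted with "sbatch job.slurm" (the filename is also a placeholder); choosing sensible values for these directives is exactly the kind of judgment the workshop covers.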
Workshop format: Interactive presentation, with hands-on activities.
Target audience: *All users* of Princeton's research computing systems should know this content (think of it as “Driver’s Ed” for supercomputers). Some of this content is Princeton-specific, so the workshop is suitable not only for those new to working on shared Linux clusters but also for those with experience at other institutions whose systems and policies might differ from Princeton’s.
Knowledge prerequisites: A working facility with the Linux command line at or above the level of PICSciE’s “Intro to the Linux Command Line” mini-course is *essential* for this workshop. THERE WILL BE NO REVIEW OF COMMAND-LINE BASICS DURING THIS WORKSHOP!
Hardware/software prerequisites: For this workshop, users must have an account on the Adroit cluster, and they should confirm that they can SSH into Adroit *at least 48 hours beforehand*. Details can be found in this guide (https://researchcomputing.princeton.edu/learn/workshops-live-training/r…); users should follow the guidelines in its first two sections, 'Overarching requirements' and 'Workshops that use Adroit'.
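Connecting is typically done with SSH from a terminal. A sketch of the usual form, assuming the standard Adroit hostname and substituting your own Princeton NetID, is:

    # replace <YourNetID> with your Princeton NetID
    ssh <YourNetID>@adroit.princeton.edu

The linked guide above covers the details, including what to do if the connection fails.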
Learning objectives: Attendees will come away with the basic skills needed to connect to a research computing cluster, navigate its environment and file system, install and manage their software environment, and run programs through the SLURM scheduler.