The HBS research Grid's compute nodes and servers are all coordinated by IBM's Load Sharing Facility (LSF; https://www.ibm.com/support/knowledgecenter/en/SSETD4_9.1.3/lsf_welcome.html). This software layer on a compute cluster is also known as the scheduler. Other compute clusters may use LSF as well, or other common scheduler software such as SLURM, Sun Grid Engine (SGE), or PBS/Torque.
This system of networked software is mostly transparent to you: it listens for requests to launch programs, whether submitted via the application scripts on the NoMachine login nodes, through the Platform Application Center (PAC), or via batch submission. The scheduler then matches your work with a compute node and makes those resources exclusively available to you for the duration. Through this process, your interactive GUI session or background batch program becomes a job, scheduled on the system alongside those of all other users.
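To make the batch-submission path concrete, here is a minimal sketch of an LSF job script. The queue name, resource limits, and program name below are illustrative placeholders, not HBS Grid defaults; the `#BSUB` directives themselves are standard LSF options.

```shell
#!/bin/bash
# example_job.sh -- minimal LSF batch job script (illustrative sketch;
# queue, limits, and program are placeholders, not HBS-specific values)
#BSUB -J my_analysis          # job name
#BSUB -q normal               # queue to submit to (queue names are site-specific)
#BSUB -n 1                    # number of cores (job slots) requested
#BSUB -W 02:00                # wall-clock time limit, hh:mm
#BSUB -o my_analysis.%J.out   # stdout file (%J expands to the job ID)
#BSUB -e my_analysis.%J.err   # stderr file

# The work itself; the scheduler runs this on whichever
# compute node it matches to the request above.
./my_program --input data.csv
```

You would submit this with `bsub < example_job.sh`, monitor it with `bjobs`, and cancel it if needed with `bkill <jobid>` — all standard LSF commands.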
As the Grid is a shared resource with finite capacity, it is important to understand how it works, what its limitations are, and how to use it appropriately. This will enable you to work effectively and efficiently, while at the same time ensuring that resources remain available for others when you don't need them.