Running Gromacs
Gromacs is already installed on the Talon cluster, and a modulefile has been created so the program can be loaded easily. The modulefile lives under /share/apps/ModuleFiles. To make it available, run the following command:
module use /share/apps/ModuleFiles
Once you do this, typing module avail
should show an additional list of modules:
----- /share/apps/ModuleFiles/ -----
dl_poly/4.09-mpi gromacs/5.1.4 sofi2d sofi3d
To make this group of modulefiles available every time you log in, use the following command:
echo "module use /share/apps/ModuleFiles" >> ~/.bashrc
You should only ever need to do this step once.
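If you want to double-check that the line was added (an optional verification step, not part of the original setup), you can search your .bashrc for it:
grep "module use /share/apps/ModuleFiles" ~/.bashrc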
At this point you should be able to load the module with module load gromacs
and get started with the submission script.
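The example script below assumes a run input file named test.tpr already exists in the submission directory. As a minimal sketch (the input file names grompp.mdp, conf.gro, and topol.top are placeholders for your own files), you could generate it with gmx grompp after loading the module:
module load gromacs/5.1.4
gmx grompp -f grompp.mdp -c conf.gro -p topol.top -o test.tpr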
If you are trying to run on one of the GPUs, you will need to specify that in your submission script by adding the #SBATCH --gres=gpu:1
line as seen below. You will also need to use the GPU-enabled command gmx mdrun -nb gpu
instead of the default. Both the CPU and the GPU versions of the run command appear in the mpirun lines near the end of the example submission script below.
#!/bin/bash
#####Number of nodes
#SBATCH --nodes=1
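#####Request one GPU (only needed when running the GPU version of mdrun)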
#SBATCH --gres=gpu:1
#####Number of tasks per node
#SBATCH --ntasks-per-node=1
#SBATCH --mem-per-cpu=2G
#SBATCH --job-name=Gromacs
#SBATCH -o slurm_run_%j_output.txt
#SBATCH -e slurm_run_%j_error.txt
#Changes working directory to the location where you submitted this script from
printf 'Changing to the working directory: %s\n\n' "$SLURM_SUBMIT_DIR"
cd "$SLURM_SUBMIT_DIR"
#Load Necessary Modules to Run the Program
printf 'Loading gromacs module\n'
module load gromacs/5.1.4
#Determine the job host names and write a hosts file
srun --ntasks=$SLURM_NTASKS hostname | sort -u > ${SLURM_JOB_ID}.hosts
## To run on the CPU, leave the two mpirun lines below as they are (first uncommented, second commented). To run gromacs on a GPU, comment out the first line and uncomment the second.
mpirun -np $SLURM_NTASKS -machinefile ${SLURM_JOB_ID}.hosts gmx_mpi mdrun -s test.tpr -maxh 0.80
#mpirun -np $SLURM_NTASKS -machinefile ${SLURM_JOB_ID}.hosts gmx mdrun -nb gpu -s test.tpr -maxh 0.80
#Remove .hosts file
rm ${SLURM_JOB_ID}.hosts
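Once the script is ready, submit it with sbatch and check on it with squeue (the filename run_gromacs.sh is just an example; use whatever you saved the script as):
sbatch run_gromacs.sh
squeue -u $USER
Output and errors will be written to the slurm_run_<jobid>_output.txt and slurm_run_<jobid>_error.txt files named in the #SBATCH -o and -e lines of the script.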