Building and running SOFI2D
SOFI2D simulates two-dimensional seismic wave propagation using a finite-difference method.
Building SOFI2D
Clone the Git repository containing SOFI2D:
git clone https://git.scc.kit.edu/GPIAG-Software/SOFI2D.git
Load the OpenMPI 3 / GCC module:
module load openmpi3/gcc
Unload the default C compiler module:
module unload gcc
Change to the SOFI2D source directory:
cd SOFI2D/src
Run the
make all
command and watch for errors. If there are no errors, SOFI2D has been built successfully.
- NOTE: The build may print warnings about variables being set but not used; these can be ignored.
Running SOFI2D on a Cluster
When running in parallel, SOFI2D decomposes the domain you are working on into smaller chunks, one for each processor. The number of chunks depends on the number of nodes and tasks you set in the submission file (see below) and in the SOFI2D/par/in_and_out/sofi2D.json
parameter file. It is recommended to split the domain into roughly square sections.
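The "roughly square" advice can be sketched as a quick calculation: for a given total task count, pick the divisor pair closest to square. The script below is an illustration only, not part of SOFI2D; NP stands in for the number of MPI tasks you plan to request.

```shell
#!/bin/bash
# Illustrative helper (not part of SOFI2D): find the divisor pair of
# NP closest to square, to use as NPROCX x NPROCY.
NP=8      # total MPI tasks you plan to request
best=1
for ((x=1; x*x<=NP; x++)); do
  # keep the largest divisor of NP that does not exceed sqrt(NP)
  if (( NP % x == 0 )); then best=$x; fi
done
echo "NPROCX=$best NPROCY=$((NP / best))"
```

For NP=8 this prints NPROCX=2 NPROCY=4, matching the example parameter file in this guide.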
The default parameter file will not necessarily match the settings you use to submit the job to the queue, so you will need to edit it before submitting to the cluster. The default parameter file can be found at SOFI2D/par/in_and_out/sofi2D.json
. Under Domain Decomposition
, the variables NPROCX
and NPROCY
must be changed such that NPROCX * NPROCY = Number of processors requested
.
- For the example submission script below
NPROCX * NPROCY = 8 = (4 tasks per node) * (2 nodes)
SOFI2D/par/in_and_out/sofi2D.json:
#-----------------------------------------------------------------
# JSON PARAMETER FILE FOR SOFI2D
#-----------------------------------------------------------------
# description: example of json input file
# description/name of the model: homogeneous full space (hh.c)
#
{
"Domain Decomposition" : "comment",
"NPROCX" : "2",
"NPROCY" : "4",
}
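Before submitting, it is worth cross-checking the decomposition against the Slurm request. The snippet below is an illustrative sketch that hard-codes the example values from above, i.e. (4 tasks per node) * (2 nodes); the parameter file allows # comment lines, so it is not strictly parseable as JSON, and you should adjust the four variables to your own settings.

```shell
#!/bin/bash
# Sanity check (illustrative): NPROCX * NPROCY must equal the number
# of MPI tasks Slurm will launch (ntasks-per-node * nodes).
NPROCX=2            # from sofi2D.json
NPROCY=4            # from sofi2D.json
TASKS_PER_NODE=4    # from the example above
NODES=2             # from the example above
if (( NPROCX * NPROCY == TASKS_PER_NODE * NODES )); then
  echo "decomposition OK: $((NPROCX * NPROCY)) tasks"
else
  echo "mismatch: NPROCX*NPROCY=$((NPROCX * NPROCY)) vs $((TASKS_PER_NODE * NODES)) tasks" >&2
  exit 1
fi
```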
Sbatch submission script sofi2DSubmission.sh:
#!/bin/bash
#SBATCH --partition=talon
#SBATCH --ntasks-per-node=4 #Number of tasks per node
#SBATCH -N2 #Number of nodes to run the tasks
#SBATCH --mem=2G #Requested amount of memory per node
#SBATCH --job-name=SOFI2DTest #Name of the job
#SBATCH --chdir=./ #Working directory
#SBATCH -o /home/first.last/SOFI2D-Release/Data.txt #Log file location
#SBATCH -e /home/first.last/SOFI2D-Release/Errors.txt #Error file location.
module purge
module load slurm
module load openmpi3/gcc
cd /path/to/SOFI2D-Release/par
srun -n $SLURM_NTASKS hostname > $SLURM_JOB_ID.hosts
mpirun -n $SLURM_NTASKS -machinefile $SLURM_JOB_ID.hosts ../bin/sofi2D ./in_and_out/sofi2D.json
After you've created your submission script and modified the sofi2D.json parameter file, you can submit your job using sbatch.
sbatch sofi2DSubmission.sh
See the Slurm tutorial for instructions on how to submit jobs.