WRF Build - Version 3.8.1


WRF is a complicated program to build. This is a Hodor install guide for WRF (not WPS) version 3.8, with WRF-Chem 3.8 and KPP 2.1.

NOTE: This guide assumes you are using the BASH shell.


STEP ONE-A - GCC

NOTE: WRF will not compile with GCC version 6 or higher; on Hodor it only works with version 4.8.5. The GCC 6 module loads automatically when you log in, but if you unload it, the system falls back to GCC 4.8.5.

First, run the following module command to make sure the correct gcc version is active.

module rm gcc

You can check the version you are currently using with the following command:

gcc -v

The version of gcc we want is 4.8.5.

If you have the correct version, the last line of the output should look something like this:

gcc version 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC)

STEP ONE-B - Setting Up Environment Variables

First, set the following environment variables. This guide assumes you are installing WRFV3 at the root of your home directory.

export EM_CORE=1
export NMM_CORE=0
export WRF_CHEM=1
export WRF_KPP=1
export NETCDF=$HOME/netcdf
export YACC='/share/apps/byacc/bin/yacc -d'
export FLEX=/usr/bin
export FLEX_LIB_DIR=/usr/lib64
export KPP_HOME=$HOME/WRFV3/chem/KPP/kpp/kpp-2.1
export PATH=$NETCDF/bin:$KPP_HOME/bin:$PATH
export SED=/usr/bin/sed
export WRFIO_NCD_LARGE_FILE_SUPPORT=1
export WRF_SRC_ROOT_DIR=$HOME/WRFV3
export CC=gcc
export CXX=g++
export FC=gfortran
export F77=gfortran
export FFLAGS=-m64
export FCFLAGS=-m64
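
These exports only last for the current shell session, so if you log out before the build is finished you will need to set them again. One option (the file name wrf_env.sh below is only a suggestion, not something this guide requires) is to save the export lines above in a small script and source it in each new shell:

#Load the WRF build environment saved in ~/wrf_env.sh (example name)
source ~/wrf_env.sh

#Spot-check a couple of the key variables
echo "NETCDF=$NETCDF"
echo "KPP_HOME=$KPP_HOME"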

STEP TWO - Build NETCDF libraries for C and Fortran

NOTE: Make sure you compile these libraries with the same compiler you will use to build WRF.

The versions of NETCDF we will be using are:

  • 4.4.0 for the C library
  • 4.4.3 for the Fortran library

Download and extract the netcdf-c 4.4.0 source (https://github.com/Unidata/netcdf-c/releases/tag/v4.4.0) for the NETCDF C library into your home directory using the following commands.

cd ~/
wget https://github.com/Unidata/netcdf-c/archive/v4.4.0.tar.gz
tar -xvf v4.4.0.tar.gz

Use this method to build the C library:

cd netcdf-c-4.4.0
./configure --prefix=${NETCDF} --disable-dap
make check install

You should see the “Congratulations! You have successfully installed netCDF!” message.
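
As an extra check, nc-config is installed into $NETCDF/bin along with the C library and can report the version that was just built:

#Verify the freshly installed C library
$NETCDF/bin/nc-config --version

The output should be something like "netCDF 4.4.0".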

Next, move back to your home directory, then download and extract the netcdf-fortran 4.4.3 source (https://github.com/Unidata/netcdf-fortran/releases/tag/v4.4.3) for the NETCDF Fortran library using the following commands.

cd ~/
wget https://github.com/Unidata/netcdf-fortran/archive/v4.4.3.tar.gz
tar -xvf v4.4.3.tar.gz

Use this method to build the Fortran library.

Set up the environment variables with the following commands:

NCDIR=${NETCDF}
NFDIR=${NETCDF}
CC=/usr/bin/gcc
FC=/usr/bin/gfortran
export LD_LIBRARY_PATH=${NCDIR}/lib:${LD_LIBRARY_PATH}

Start the configure and install with the following commands:

cd netcdf-fortran-4.4.3
./configure --prefix=${NFDIR}
make check
make install

This is very similar to "Building with shared libraries" at https://www.unidata.ucar.edu/software/netcdf/docs/building_netcdf_fortran.html, although there are some differences in the environment variables used.
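
If the Fortran configure step cannot find the C library headers (for example, an error about a missing netcdf.h), the Unidata page linked above passes the include and library paths explicitly. A variant of the configure line, following that page, would be:

#Only needed if ./configure cannot locate the C library on its own
CPPFLAGS=-I${NCDIR}/include LDFLAGS=-L${NCDIR}/lib ./configure --prefix=${NFDIR}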


STEP THREE - Get WRF Source Code

Copy the WRFV3.8.TAR.gz file to your home directory using the following command:

cp /share/apps/wrf-3.8-libs/tar-files/WRFV3.8.TAR.gz ~/WRFV3.8.TAR.gz

If not already in your home directory, change to your home directory using the following command:

cd ~/

Use the following command to uncompress the TAR file:

tar -xvf WRFV3.8.TAR.gz

STEP FOUR - Get Chem/KPP WRF Source Code

Change to the directory just created using the following command:

cd $HOME/WRFV3

Copy the WRFV3-Chem-3.8.TAR.gz file to the current directory:

cp  /share/apps/wrf-3.8-libs/tar-files/WRFV3-Chem-3.8.TAR.gz ./WRFV3-Chem-3.8.TAR.gz

Use the following command to uncompress the TAR file:

tar -xvf WRFV3-Chem-3.8.TAR.gz
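
You can verify that the chemistry source landed where WRF expects it; after extraction there should be a chem directory (containing the KPP tree that KPP_HOME points to) inside WRFV3:

#Confirm the chem and KPP directories exist
ls -d chem chem/KPP/kpp/kpp-2.1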

STEP FIVE - Prepare the KPP code for compilation

Change to the KPP source directory using the following command:

cd chem/KPP/kpp/kpp-2.1/src

Run the flex command to create the lex.yy.c file using the following command:

/usr/bin/flex scan.l

Edit the lex.yy.c file using vim or nano, place the following #define lines at the top of the file (the rows of # characters are just delimiters and should not be copied in), and then save the file:

############
#define INITIAL 0
#define CMD_STATE 1
#define INC_STATE 2
#define MOD_STATE 3
#define INT_STATE 4
#define PRM_STATE 5
#define DSP_STATE 6
#define SSP_STATE 7
#define INI_STATE 8
#define EQN_STATE 9
#define EQNTAG_STATE 10
#define RATE_STATE 11
#define LMP_STATE 12
#define CR_IGNORE 13
#define SC_IGNORE 14
#define ATM_STATE 15
#define LKT_STATE 16
#define INL_STATE 17
#define MNI_STATE 18
#define TPT_STATE 19
#define USE_STATE 20
#define COMMENT 21
#define COMMENT2 22
#define EQN_ID 23
#define INL_CODE 24
############
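
If you prefer not to edit lex.yy.c by hand, the same change can be made from the shell. This is only a sketch; it assumes you have saved the #define lines above (without the rows of # characters) to a file named state_defines.txt, a name chosen here purely for illustration:

#Prepend the saved #define lines to lex.yy.c
cat state_defines.txt lex.yy.c > lex.yy.c.tmp
mv lex.yy.c.tmp lex.yy.c

#Confirm the defines are now at the top of the file
head -5 lex.yy.c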

STEP SIX - Prepare the parallel code for compilation

First, type the following module command to load MPICH for GCC.

module load mpich/ge/gcc/64/3.2rc2

To verify it loaded, type:

module list

If it loaded, "mpich/ge/gcc/64/3.2rc2" should appear in the last command's output.

Next, return to the WRF source directory by typing:

cd $HOME/WRFV3

Then, run the configure program:

./configure

When prompted, select option 34 for parallel (dmpar) GNU and option 1 for basic nesting.
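
Running ./configure writes its selections to a configure.wrf file in the WRFV3 directory. As a quick sanity check (the exact variable names may vary between WRF versions), you can confirm that the GNU and MPI compilers were picked up:

#configure.wrf should now exist in $HOME/WRFV3
ls -l configure.wrf

#Check which serial and MPI compilers were selected
grep -E '^(SFC|SCC|DM_FC|DM_CC)' configure.wrf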


STEP SEVEN - Compile KPP

Once configured, run the following command to compile KPP:

./compile 2>&1 | tee compile_kpp.log

Watch for errors while it compiles.

The tee command saves the compiler output to a log file so you can examine it later should the compile fail.
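
One way to scan the log afterwards (the build system usually flags problems with the word "Error", though the exact wording can vary) is:

#Look for obvious failures in the KPP compile log
grep -in "error" compile_kpp.log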


STEP EIGHT - Compile WRF with CHEM

Assuming you only want to compile the "em_real" portion of WRF, type the following command (this process will take about an hour):

./compile em_real 2>&1 | tee compile_wrf.log

Watch for errors while it compiles.


STEP NINE - "Clean" if you have to

Should the program fail to compile, run the clean script to remove the build files.

./clean

Then go through the build logs to find the problem. Once you have identified and fixed it, restart the build process from STEP FOUR.


Build complete

The build of WRF should now be complete, assuming there were no errors during compilation.
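
A quick way to confirm the build produced the expected executables is to list the main directory; a successful em_real build should leave wrf.exe and real.exe there (along with ndown.exe and tc.exe), with links to them in test/em_real:

#Check for the WRF executables
cd $HOME/WRFV3
ls -l main/*.exe
ls -l test/em_real/*.exe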


Submitting a job

Once you have all the input data files you want to test with, you are ready to start the submission process to run real.exe and then wrf.exe on the cluster.

To submit a job, gather all of the files you are going to be testing with and move them to the WRFV3/test/em_real/ folder.

The real.exe program completes the final input preparation that allows the actual WRF program to run, so it must be run first and checked for success. To run it through the batch system, create a script called realSubmit.sh in your WRFV3/test/em_real/ folder containing the following:

#!/bin/bash
#####Number of nodes
#SBATCH -N1
#####Number of tasks per node
#SBATCH --ntasks-per-node=1
#SBATCH --job-name=WRFV3_Real
#SBATCH --workdir=./
#SBATCH -o slurm_run_%j_output.txt
#SBATCH -e slurm_run_%j_error.txt

#Change to the directory the job was submitted from
printf 'Changing to the working directory: %s\n\n' "$SLURM_SUBMIT_DIR"
cd $SLURM_SUBMIT_DIR

#Load Necessary Modules
printf 'Loading slurm and mpich modules\n'
module purge
module load slurm/16.05.8
module load mpich/ge/gcc/64/3.2rc2

#Load Local NetCDF
printf 'Loading Local NetCDF\n'
export NETCDF=$HOME/netcdf
export LD_LIBRARY_PATH=$NETCDF/lib:$LD_LIBRARY_PATH
export PATH=$NETCDF/bin:$PATH
export WRF_SRC_ROOT_DIR=$HOME/WRFV3
printf '\tNetCDF:\t%s\n' "$NETCDF"
printf '\tPath:\t%s\n\n' "$PATH"

#Determine the job host names and write a hosts file
srun -n $SLURM_NTASKS hostname | sort -u > $SLURM_JOB_ID.hosts

#Run WRF using MPIRUN
mpirun -np $SLURM_NTASKS -machinefile $SLURM_JOB_ID.hosts ./real.exe

#Output the information in the rsl.error file
tail rsl.error.0000

Now you should be able to submit the job to the queue using:

sbatch ./realSubmit.sh

When it has finished running, the tail of rsl.error.0000 in the job output should contain a line that says SUCCESS.
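
While the job is in the queue you can monitor it, and once it completes you can search the rsl files for the success message directly:

#Check whether the job is still queued or running
squeue -u $USER

#After it finishes, look for the success message from real.exe
grep -i "SUCCESS" rsl.error.0000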

Once the real.exe program has run successfully, create a shell script called wrfSubmit.sh in your WRFV3/test/em_real/ folder containing the following:

#!/bin/bash
#####Number of nodes
#SBATCH -N2
#####Number of tasks per node
#SBATCH --ntasks-per-node=8
#SBATCH --job-name=WRFV3_WRF
#SBATCH --workdir=./
#SBATCH -o slurm_run_%j_output.txt
#SBATCH -e slurm_run_%j_error.txt

#Change to the directory the job was submitted from
printf 'Changing to the working directory: %s\n\n' "$SLURM_SUBMIT_DIR"
cd $SLURM_SUBMIT_DIR

#Load Necessary Modules
printf 'Loading slurm and mpich modules\n'
module purge
module load slurm/16.05.8
module load mpich/ge/gcc/64/3.2rc2

#Load Local NetCDF
printf 'Loading Local NetCDF\n'
export NETCDF=$HOME/netcdf
export LD_LIBRARY_PATH=$NETCDF/lib:$LD_LIBRARY_PATH
export PATH=$NETCDF/bin:$PATH
export WRF_SRC_ROOT_DIR=$HOME/WRFV3
printf '\tNetCDF:\t%s\n' "$NETCDF"
printf '\tPath:\t%s\n\n' "$PATH"

#Determine the job host names and write a hosts file
srun -n $SLURM_NTASKS hostname | sort -u > $SLURM_JOB_ID.hosts

#Run WRF using MPIRUN
mpirun -np $SLURM_NTASKS -machinefile $SLURM_JOB_ID.hosts ./wrf.exe

#Output the information in the rsl.error file
tail rsl.error.0000

Now you should be able to submit the job to the queue using:

sbatch ./wrfSubmit.sh

Once this has finished running, your output files should be in the WRFV3/test/em_real folder.
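
As with real.exe, you can confirm the run completed by checking the rsl files and looking for the model output (WRF history files are typically named wrfout_d<domain>_<start date>):

#Look for the success message from wrf.exe
grep -i "SUCCESS" rsl.error.0000

#List the model output files
ls -l wrfout_d01_*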
