Version 1 (modified by valva, 11 years ago)

--

The resources.wrf4g file sets up the running environment. A sample file may look like this:

WRF4G_VERSION="0.0.2"
WRF_VERSION="3.1_r83MPICH_SPRE"
WRF4G_BASEPATH="gsiftp://ce01.macc.unican.es:2812/oceano/gmeteo/WORK/MDM.UC/WRF"
WRF4G_INPUT="gsiftp://ce01.macc.unican.es:2812/oceano/gmeteo/WORK/MDM.UC/Data"
WRF4G_APPS="gsiftp://ce01.macc.unican.es:2812/oceano/gmeteo/WORK/MDM.UC/Apps"
JOB_TYPE="EELA_grid_job"
NUMBER_OF_NODES=1     # number of nodes (total processes) to request
PROCESSES_PER_NODE=4  # number of processes per node


The following is a complete list of the available options:

WRF4G_VERSION
WRF4G version to use. A file WRF4G-[WRF4G_VERSION].tar.gz must exist under WRF4G_APPS.
WRF_VERSION
WRF binary bundle (WRF4Gbin) version to use. A file WRF4Gbin-[WRF_VERSION].tar.gz must exist under WRF4G_APPS.
WRF4G_BASEPATH
Under this path the following hierarchy must exist; it holds the domain files and stores the output/restart/wpsout data:
WRF4G_BASEPATH/
|-- domains
`-- experiments
    `-- [experiment_name]
        `-- [realization_name]
            |-- output
            |-- restart
            |   `-- wrfrst_d01_1990-01-01_12:00:00
            `-- wpsout
This path can be local or remote. Local paths look like /path/to/base/path or file:///path/to/base/path. In addition, the following remote protocols are supported (in general, any protocol supported by vcp): GSIFTP, as gsiftp://host[:port]/path/to/base/path, and RSYNC, as rsync://[user@]host/path/to/base/path.
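For illustration, the same base path could be expressed in each supported form (the hosts and directories below are made up, not real endpoints):

```
WRF4G_BASEPATH="/work/wrf4g/WRF"                                    # plain local path
WRF4G_BASEPATH="file:///work/wrf4g/WRF"                             # explicit local URL
WRF4G_BASEPATH="gsiftp://ce01.example.org:2812/work/wrf4g/WRF"      # GSIFTP
WRF4G_BASEPATH="rsync://wrfuser@storage.example.org/work/wrf4g/WRF" # RSYNC
```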
WRF4G_INPUT
This path can be used in wrf.input to set the global_path to access the input data. Supports the same access protocols as WRF4G_BASEPATH.
WRF4G_APPS
This is the path for the application binaries. Here you must find the files WRF4G-[WRF4G_VERSION].tar.gz and WRF4Gbin-[WRF_VERSION].tar.gz. Supports the same access protocols as WRF4G_BASEPATH.
JOB_TYPE
The type of job to be run. This option selects the submitter to be used. A valid wrf4g_submit.[JOB_TYPE] file must exist in the ui/scripts directory.
NUMBER_OF_NODES
Number of nodes to request in a parallel job
PROCESSES_PER_NODE
Number of processes per node to request in a parallel job
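The submitter typically combines these two values into the total number of parallel tasks to launch. A minimal sketch of that arithmetic (the mpirun command line is illustrative, not the actual submitter code):

```shell
#!/bin/sh
# Illustrative values from a resources.wrf4g file
NUMBER_OF_NODES=2
PROCESSES_PER_NODE=4

# Total parallel tasks = nodes x processes per node
TOTAL_PROCESSES=$((NUMBER_OF_NODES * PROCESSES_PER_NODE))

# A submitter could then build a launch line like this
echo "mpirun -np $TOTAL_PROCESSES wrf.exe"   # prints: mpirun -np 8 wrf.exe
```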

When using the WRF4G infrastructure to run locally, one must specify at least a base directory (WRF4G_RUN_SHARED) in which to run the jobs (on the Grid this is not necessary, since a specific directory is created for each job and jobs cannot collide). For a parallel job, this run directory must be shared so that the worker nodes can find the executables run by the master process. Additionally, to avoid network stress on NFS-mounted filesystems, the simulation can use both a shared directory (WRF4G_RUN_SHARED) and a local directory (WRF4G_RUN_LOCAL), where the model dumps the output before sending it to its final destination under WRF4G_BASEPATH.
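For a local setup, the shared and local run directories described above might be set like this (the paths are illustrative only):

```
WRF4G_RUN_SHARED="/home/wrf4g/shared_run"  # NFS-mounted; visible to all worker nodes
WRF4G_RUN_LOCAL="/tmp/wrf4g_run"           # node-local scratch; output is staged here first
```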

WRF4G_QUEUE_NAME
Name of the queue on the local system to which the jobs are to be submitted.
WRF4G_QUEUE_WALLTIME
Walltime to be set for each chunk on the queue
WRF4G_QUEUE_MEMORY
Memory to be used by your jobs
WRF4G_QUEUE_PEAKMEM
Some systems (e.g. SGE) allow memory consumption above the WRF4G_QUEUE_MEMORY specification and enforce a second, hard limit on memory usage before killing your job.
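Putting the queue-related options together, a hypothetical local-queue fragment might look as follows (the queue name, walltime, and memory value formats are assumptions; check the conventions of your batch system):

```
WRF4G_QUEUE_NAME="estimation"
WRF4G_QUEUE_WALLTIME="24:00:00"
WRF4G_QUEUE_MEMORY="2gb"
WRF4G_QUEUE_PEAKMEM="4gb"
```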