Changes between Version 27 and Version 28 of WRF4GWRFReforecast


Timestamp:
May 2, 2013 6:41:55 PM
Author:
MarkelGarcia

The monitoring and maintenance of an experiment can be shared by different users, provided they connect their WRF4G to the same database.

== Postprocess ==

If the computing resources and the model configuration are stable enough, monitoring is not time-consuming once the experiment is running. However, if the experiment is large (e.g. 30 years of daily reforecasts), postprocessing the output produced by WRF4G can be a complicated and error-prone task. Here we provide some guidelines to follow to successfully postprocess a reforecast-like experiment. In this tutorial case it should not be too complicated, since it covers only one month.

=== 1st step: Generate daily CF-compliant files ===

Generating CF-compliant files from the WRF4G raw data should not be complicated, since it amounts to applying a dictionary to the variable names and attributes. However, in practice we want to do many other things:

 . Merge the files, apart from translating them to CF-compliant netCDF.
 . Perform operations on the available fields: change units, de-accumulate, average, etc.
 . Compute derived fields.
 . Split files by variable and vertical level.
 . Delete spin-up and/or overlapping periods and concatenate the files correctly.

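The translation step itself is table-driven: each WRF variable name is mapped to a CF-compliant short name, standard name and units. Purely as an illustration (these lines are not the actual table shipped with the tool), such a dictionary could contain entries like:

{{{
# WRF name -> CF short name / standard_name / units  (illustrative only)
T2      tas   air_temperature        K
U10     uas   eastward_wind          m s-1
PSFC    ps    surface_air_pressure   Pa
}}}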
To deal with these tasks, we created a Python tool called [http://www.meteo.unican.es/wiki/cordexwrf/SoftwareTools/WrfncXnj WRFnc extract and join], published under a GNU open-source license. It is documented on that page.

What we need now is to write a small shell script that calls this Python tool with the proper arguments. First, we need to define the paths to the different files. A useful practice is to keep a small file called "dirs" with these paths, something like:

{{{
BASEDIR="/oceano/gih/work/sw_ncep"
EXPDIR="${BASEDIR}/data/raw"
DATADIR="${BASEDIR}/data"
SCRIPTDIR="${BASEDIR}/scripts"
DIAGDIR="${BASEDIR}/diags"
FIGSDIR="${BASEDIR}/figures"
POSTDIR="${BASEDIR}/data/post_fullres"
POST2DIR="${BASEDIR}/data/post_ih"
OBSDIR="${BASEDIR}/data/obs"
}}}

Now, by writing "source dirs" in our postprocessing scripts, we can easily load these variables.
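For instance, a postprocessing script along these lines could load the paths and walk the raw daily output. This is only a sketch: the experiment name is hypothetical, and the actual call to WRFnc extract and join is left as a placeholder (check the tool's documentation for its real arguments):

{{{
#!/bin/bash
# Load the common path definitions; fall back to a throwaway tree
# when "dirs" is missing (handy for a dry run of this script).
if [ -f ./dirs ]; then
  source ./dirs
else
  BASEDIR="$(mktemp -d)"
  EXPDIR="${BASEDIR}/data/raw"
  POSTDIR="${BASEDIR}/data/post_fullres"
fi

expname="myreforecast"            # hypothetical experiment name
rawdir="${EXPDIR}/${expname}"     # daily raw wrfout files live here
outdir="${POSTDIR}/${expname}"    # CF-compliant daily files go here
mkdir -p "${outdir}"

for f in "${rawdir}"/wrfout_*.nc; do
  [ -e "${f}" ] || continue       # skip when the glob matches nothing
  # Placeholder: call WRFnc extract and join on ${f} here with the
  # proper arguments, writing its output under ${outdir}.
  echo "post-processing $(basename "${f}") -> ${outdir}"
done
}}}

Keeping all paths in "dirs" means the same script can be moved to another machine by editing a single file.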