# Changes between Version 30 and Version 31 of WRF4GWRFReforecast

Timestamp:
May 7, 2013 7:59:40 PM
To generate CF-compliant files from WRF4G raw data should not be complicated, since it is mostly a matter of applying a dictionary to variable names and attributes. However, in practice we want to do many other things:

 * Merge the files, apart from translating them to CF-compliant netCDF.
 * Perform operations on the available fields: change units, de-accumulate, average, etc.
 * Compute derived fields.
 * Split files by variable and by vertical level.
 * Delete spin-up and/or overlapping periods and concatenate the files correctly.

To deal with these tasks, we created a tool written in Python called [http://www.meteo.unican.es/wiki/cordexwrf/SoftwareTools/WrfncXnj WRFnc extract and join], published under a GNU open license and documented on that page. In recent versions, wrfncxnj can even remove the spin-up period, since we can filter a time interval with the --filter-times flag. If we want to average the jumps between realizations, we need to retain one extra hour for each day.

What we need now is to write a small shell script that calls this Python script with the proper arguments. First, we need to define the paths to the different files. A useful practice is to keep them in a small file called "dirs", so that writing "source dirs" in our postprocess scripts easily loads these variables.
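For illustration, such a "dirs" file could look like the following; all paths here are assumptions for this particular setup and must be adapted:

```shell
# Illustrative "dirs" file: common paths sourced by the postprocess scripts.
# All paths are assumptions for this setup; adapt them to your own layout.
export EXPDIR="/home/users/curso01/experiments"   # WRF4G raw experiment output
export POSTDIR="/home/users/curso01/post"         # destination of post-processed files
export SCRIPTDIR="/home/users/curso01/scripts"    # wrfncxnj and its attribute tables
```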
A sample script would be:
{{{
#!/bin/bash
use python
year=2010
exp="seawind_"
fullexp="SEAWIND_NCEP"
varlist_xtrm="U10MEANER,V10MEANER"
mkdir -p ${POSTDIR}/${year} || exit
cd ${POSTDIR}/post_fullres
for dir in ${EXPDIR}/${exp}/${exp}__${year}*
do
  # Split the realization directory name (<exp>__<datei>_<datef>) into its parts
  read expname dates <<< ${dir//__/ }
  read datei datef <<< ${dates//_/ }
  # Filter window: hours 12 to 36 of the realization, i.e. drop the spin-up
  # period and keep one extra hour to average the jumps between realizations
  fdatei=$(date -u '+%Y%m%d%H' -d "${datei:0:8} ${datei:8:2} 12 hour")
  fdatef=$(date -u '+%Y%m%d%H' -d "${datei:0:8} ${datei:8:2} 36 hour")
  expname=$(basename ${expname})
  geofile="/home/users/curso01/domains/Europe_30k/geo_em.d01.nc"
  xnj="python ${SCRIPTDIR}/wrfncxnj.py \
    -r 1940-01-01_00:00:00 \
    -g ${geofile} -a ${SCRIPTDIR}/xnj_East_Anglia.attr \
    --fullfile=${SCRIPTDIR}/wrffull_${dom}.nc \
    --split-variables --split-levels --output-format=NETCDF4_CLASSIC \
    --temp-dir=/localtmp/xnj.$(date '+%Y%m%d%H%M%S') \
    --filter-times=${fdatei},${fdatef} \
    --output-pattern=${POSTDIR}/post_fullres/${year}/[varcf]_[level]_${fullexp}_$(dom2dom ${dom})__[firsttime]_[lasttime].nc"
  #
  # xtrm
  #
  filesx=$(find ${dir}/output -name 'wrfxtrm_'${dom}'_*.nc' | sort)
  if test $(echo ${filesx} | wc -w) -ne 7 ; then
    echo "Wrong number of files on $datei"
    continue
  fi
  ${xnj} -a wrfnc_extract_and_join.gattr_SEAWIND \
    -t ${SCRIPTDIR} -v ${varlist_xtrm} \
    ${filesx}
done
}}}
These scripts can be sent to the queue with the proper command, qsub or msub, or they can be run in interactive mode. The final files need to be checked for holes or incorrect values.

=== 2nd step: Concatenate the daily realizations ===

After the first step we have much friendlier files, so this step can be carried out with many of the netCDF-supporting languages available. We have carried out this task with [https://code.zmaw.de/projects/cdo Climate Data Operators] (CDO) or with the [http://code.google.com/p/netcdf4-python/ netcdf4-python] package. A Python script we created, called py_netcdf_merge_and_average.py, can simplify this task considerably.
{{{
Usage: py_netcdf_merge_and_average.py [options]

Options:
  -h, --help            show this help message and exit
  -f                    Overwrite the output file if exists.
  --selyear=YEAR        Filter time steps outside this year
  --bdy_points=BDY_POINTS
                        Number of points to delete from the boundaries
  --average-jumps       If True, averages the repeated timesteps found when
                        concatenating the input files to try to smooth jumpyness
}}}
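If CDO is used instead, the concatenation reduces to its mergetime operator. A minimal sketch, assuming one file per variable and day under post_fullres following the output pattern of the first step; the file names are illustrative, so the command is only echoed here:

```shell
# Sketch of the concatenation step with CDO (assumed file layout, adapt names).
year=2010
var="U10MEANER"
infiles="post_fullres/${year}/${var}_*.nc"                 # daily files from step 1
outfile="post_yearres/${var}_SEAWIND_NCEP_${year}.nc"      # one file per variable/year
# cdo mergetime concatenates the input files along the time axis
cmd="cdo mergetime ${infiles} ${outfile}"
echo "${cmd}"   # run this once the paths exist
```

Note that plain mergetime keeps repeated timesteps as they are; averaging the overlapping hour between realizations is what the --average-jumps option of py_netcdf_merge_and_average.py is for.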