Changes between Version 30 and Version 31 of WRF4GWRFReforecast


Timestamp: May 7, 2013 7:59:40 PM
Author: MarkelGarcia

  • WRF4GWRFReforecast

    v30 → v31
Generating CF-compliant files from WRF4G raw data should not be complicated, since it amounts to applying a dictionary to variable names and attributes (a minimal sketch of this idea follows the list below). However, in practice we want to do many other things:

- . Merge the files, in addition to translating them to CF-compliant netCDF.
- . Perform operations over the available fields: change units,
+ * Merge the files, in addition to translating them to CF-compliant netCDF.
+ * Perform operations over the available fields: change units,
de-accumulate, compute averages, etc.
- . Compute derived fields.
- . Split files by variable and vertical level.
- . Delete spin-up and/or overlapping periods and concatenate
+ * Compute derived fields.
+ * Split files by variable and vertical level.
+ * Delete spin-up and/or overlapping periods and concatenate
the files correctly.
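
As a minimal sketch of the dictionary idea, a single variable can be renamed and its attributes rewritten with CDO together with the NCO tool ncatted; the variable names and attribute values below are illustrative, not the actual translation table:

{{{
# Rename the raw WRF 2m-temperature variable to its CF name (illustrative).
cdo chname,T2,tas wrfout_raw.nc tas_tmp.nc
# Overwrite ("o") the character ("c") attributes with their CF values.
ncatted -O -a standard_name,tas,o,c,"air_temperature" \
           -a units,tas,o,c,"K" tas_tmp.nc tas_cf.nc
}}}
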
To deal with these tasks, we created a tool written in Python called [http://www.meteo.unican.es/wiki/cordexwrf/SoftwareTools/WrfncXnj WRFnc extract and join] (wrfncxnj), published under an open GNU license. It is documented on that website.

In the latest versions, wrfncxnj can even remove the spin-up period, since we can filter a time interval with the --filter-times flag. If we want to average the jumps between realizations, we need to retain one extra hour for each day; in the sample script below, hours 12 to 36 of each daily run are kept, so that the extra hour overlaps the next realization.

What we need now is to write a small shell script that will call this Python script with the proper arguments. First, we need to define the paths of the different files. A useful practice is to write a small file called "dirs" with these paths, something like:

…
}}}

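Such a "dirs" file simply exports the relevant paths. An illustrative sketch, with made-up locations but the variable names the sample script below expects:

{{{
# dirs: paths used by the postprocessing scripts (values are illustrative)
EXPDIR="/home/users/curso01/experiments"   # WRF4G experiment output
POSTDIR="/home/users/curso01/post"         # destination of post-processed files
SCRIPTDIR="/home/users/curso01/scripts"    # attribute tables and helper scripts
}}}
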
-Now writing "source dirs" in our postprocess script we can easily load these variables.
+Now, writing "source dirs" in our postprocess script, we can easily load these variables. A sample script would be:

{{{
#!/bin/bash
use python      # load the Python environment on the cluster

source dirs     # load EXPDIR, POSTDIR and SCRIPTDIR

year=2010
exp="seawind_"
fullexp="SEAWIND_NCEP"
dom="d01"       # domain to process (assumed here; it matches geo_em.d01.nc below)
varlist_xtrm="U10MEANER,V10MEANER"

mkdir -p ${POSTDIR}/post_fullres/${year} || exit
cd ${POSTDIR}/post_fullres

for dir in ${EXPDIR}/${exp}/${exp}__${year}*
do
  # Realization directories are named <expname>__<datei>_<datef>
  read expname dates <<< ${dir//__/ }
  read datei datef <<< ${dates//_/ }
  # Keep hours 12 to 36 of each run: drop the 12-hour spin-up and retain
  # one extra hour overlapping the next realization.
  fdatei=$(date -u '+%Y%m%d%H' -d "${datei:0:8} ${datei:8:2} 12 hour")
  fdatef=$(date -u '+%Y%m%d%H' -d "${datei:0:8} ${datei:8:2} 36 hour")
  expname=$(basename ${expname})
  geofile="/home/users/curso01/domains/Europe_30k/geo_em.d01.nc"
  # Assemble the wrfncxnj call (the path to the script follows "python")
  xnj="python \
    / \
    -r 1940-01-01_00:00:00 \
    -g ${geofile} -a ${SCRIPTDIR}/xnj_East_Anglia.attr \
    --fullfile=${SCRIPTDIR}/wrffull_${dom}.nc \
    --split-variables --split-levels --output-format=NETCDF4_CLASSIC \
    --temp-dir=/localtmp/xnj.$(date '+%Y%m%d%H%M%S') \
    --filter-times=${fdatei},${fdatef} \
    --output-pattern=${POSTDIR}/post_fullres/${year}/[varcf]_[level]_${fullexp}_$(dom2dom ${dom})__[firsttime]_[lasttime].nc"

  #
  # xtrm files
  #
  filesx=$(find ${dir}/output -name 'wrfxtrm_'${dom}'_*.nc' | sort)
  if test $(echo ${filesx} | wc -w) -ne 7 ; then
    echo "Wrong number of files on ${datei}"
    continue
  fi
  ${xnj} -a wrfnc_extract_and_join.gattr_SEAWIND \
    -t ${SCRIPTDIR} -v ${varlist_xtrm} ${filesx}
done
}}}

These scripts can be sent to the queue with the proper command (qsub or msub), or they can be run in interactive mode. The final files need to be checked for holes or incorrect values.
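
For instance, with a PBS-style queue manager (job and file names are illustrative):

{{{
# Submit the yearly postprocessing script to the queue (names are illustrative).
qsub -N post_2010 -o post_2010.log postprocess_2010.sh
}}}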

=== 2nd step: Concatenate the daily realizations ===

After the first step we have much friendlier files, so this step can be carried out with any of the many netCDF-supporting languages available. We have carried out this task with [https://code.zmaw.de/projects/cdo Climate Data Operators] (CDO) or with the [http://code.google.com/p/netcdf4-python/ netcdf4-python] package.
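
For example, with CDO the daily files of one variable can be concatenated along the time axis; the file names here are illustrative:

{{{
# Concatenate the daily post-processed files of one variable into a
# yearly series sorted by time (file names are illustrative).
cdo mergetime post_fullres/2010/tas_*.nc tas_SEAWIND_NCEP_2010.nc
}}}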

A Python script we created, called py_netcdf_merge_and_average.py, can greatly simplify this task.

{{{
Usage: py_netcdf_merge_and_average.py [options]

Options:
  -h, --help            show this help message and exit
  -f                    Overwrite the output file if it exists.
  --selyear=YEAR        Filter time steps outside this year
  --bdy_points=BDY_POINTS
                        Number of points to delete from the boundaries
  --average-jumps       If True, averages the repeated timesteps found when
                        concatenating the input files to try to smooth
                        jumpiness
}}}
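
A call could then look as follows; we are assuming here that the input files are passed as positional arguments, which the help text above does not show:

{{{
# Merge one year of daily files, averaging the repeated time steps at the
# joins (passing input files as positional arguments is an assumption).
python py_netcdf_merge_and_average.py -f --selyear=2010 --average-jumps \
  post_fullres/2010/tas_*.nc
}}}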