= Resource Configuration =

The configuration file `resources.conf` is used to describe computing resources. When you start WRF4G, the `resources.conf` file is copied to the `~/.wrf4g/etc` directory if it does not exist. The file can be edited directly or by executing the `wrf4g resource edit` command.
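
For example, both ways of editing the file are sketched below (the editor shown is only illustrative; `wrf4g resource edit` opens the same file):

{{{
# edit the file in place with any text editor
vim ~/.wrf4g/etc/resources.conf

# or let WRF4G open it for you
wrf4g resource edit
}}}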

== Configuration format ==

The resource configuration file consists of sections, each led by a `[section]` header, followed by `key = value` entries. Lines beginning with `#` are ignored. The allowed sections are `[DEFAULT]` and `[resource_name]`.
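
As an illustration of this layout, a minimal file might look like the sketch below (the resource name `my_resource` is hypothetical; the keys are described in the following sections):

{{{
# lines beginning with '#' are ignored
[DEFAULT]
# key = value entries placed here apply to every resource
enable = true

[my_resource]
# resource-specific key = value entries go here
communicator = ssh
}}}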

== DEFAULT section ==

The DEFAULT section provides default values for all other resource sections.
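
For instance, values shared by several resources can be set once in `[DEFAULT]` and overridden per resource where needed (the resource and host names below are made up for illustration):

{{{
[DEFAULT]
enable       = true
communicator = ssh
username     = user

[cluster_a]
frontend     = cluster-a.example.org
lrms         = pbs

[cluster_b]
frontend     = cluster-b.example.org
lrms         = slurm
# overrides the value inherited from DEFAULT
enable       = false
}}}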

== Resource section ==

Each resource section has to begin with the line `[resource_name]`, followed by `key = value` entries.

Configuration keys common to all resources:

    * `enable`:          true or false in order to enable or disable a resource.
    * `communicator` or authentication type:
        - `local`:       The resource will be accessed directly.
        - `ssh`:         The resource will be accessed through the ssh protocol.
    * `username`:        Username used to log on to the front-end.
    * `frontend`:        The front-end of either a cluster or a grid user interface. The syntax is "host:port"; by default, port 22 is used.
    * `private_key`:     Private key identity file used to log on to the front-end.
    * `scratch`:         Directory used to store temporary files for jobs during their execution. By default, it is `$HOME/.wrf4g/jobs`.
    * `lrms` or Local Resource Management System:
        - `pbs`:           TORQUE/PBS cluster.
        - `sge`:           Grid Engine cluster.
        - `slurm`:         SLURM cluster.
        - `slurm_res`:     [http://www.bsc.es/marenostrum-support-services/res RES (Red Española de Supercomputación)] resources.
        - `loadleveler`:   !LoadLeveler cluster.
        - `lsf`:           LSF cluster.
        - `fork`:          SHELL.
        - `cream`:         CREAM Compute Elements (CE).

Keys for non-grid resources such as HPC resources:

    * `queue`:             Queue available on the resource. If there are several queues, separate them with a ",", as follows: "queue = short,medium,long".
    * `max_jobs_in_queue`: Maximum number of jobs in the queue.
    * `max_jobs_running`:  Maximum number of running jobs in the queue.
    * `parallel_env`:      Defines the parallel environments available on a Grid Engine cluster.
    * `project`:           Specifies the project variable; used for TORQUE/PBS, Grid Engine and LSF clusters.
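
A hypothetical Grid Engine resource using these keys might look as follows (host, queue, parallel environment and project names are made up for illustration):

{{{
[sgecluster]
enable            = true
communicator      = ssh
username          = user
frontend          = sge.example.org
lrms              = sge
queue             = short, long
max_jobs_in_queue = 20, 40
max_jobs_running  = 10, 20
parallel_env      = mpi
project           = myproject
}}}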

Keys for grid resources:

    * `vo`:                Virtual Organization (VO) name.
    * `host_filter`:       A host list for the VO, with hosts separated by ",". Here is an example: "host_filter = prod-ce-01.pd.infn.it, creamce2.gina.sara.nl".
    * `bdii`:              Indicates the BDII host to be used. The syntax is "bdii:port". If you do not specify this variable, the `LCG_GFAL_INFOSYS` environment variable defined on the grid user interface will be used by default.
    * `myproxy_server`:    Server used to store grid credentials. If you do not specify this variable, the `MYPROXY_SERVER` environment variable defined on the grid user interface will be used by default.
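
For example, a CREAM-based grid resource restricted to specific hosts could combine these keys as sketched below (the resource name and front-end are hypothetical; the hosts are the ones from the `host_filter` example above):

{{{
[gridres]
enable            = true
communicator      = local
username          = user
frontend          = ui.example.org
lrms              = cream
vo                = esr
host_filter       = prod-ce-01.pd.infn.it, creamce2.gina.sara.nl
}}}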

== Examples ==

By default, WRF4G uses the local machine as a `fork` LRMS:

{{{
[localmachine]
enable            = true
communicator      = local
frontend          = localhost
lrms              = fork
max_jobs_running  = 1
}}}

A TORQUE/PBS cluster, accessed through the ssh protocol:

{{{
[meteo]
enable            = true
communicator      = ssh
username          = user
frontend          = mar.meteo.unican.es
private_key       = ~/.ssh/id_rsa
lrms              = pbs
queue             = short, medium, long
max_jobs_running  = 2, 10, 20
max_jobs_in_queue = 6, 20, 40
}}}

The ESR virtual organization, accessed through a grid user interface:

{{{
[esrVO]
enable            = true
communicator      = local
username          = user
frontend          = ui.meteo.unican.es
lrms              = cream
vo                = esr
bdii              = bdii.grid.sara.nl:2170
myproxy_server    = px.grid.sara.nl
}}}