How to deploy a CREAM CE

The following deployment models are possible for a CREAM-CE:

  • The CREAM-CE can be configured without worrying about the glite-CLUSTER node. This can be useful for small sites with a very simple setup that don't want to deal with cluster/subcluster configurations. In this case the CREAM-CE publishes a single cluster/subcluster. This is called no cluster mode. As described below, it is selected by setting the yaim variable CREAMCE_CLUSTER_MODE=no (or by not defining that variable at all); see the snippet after this list.
  • The CREAM-CE can work in cluster mode using the glite-CLUSTER node type. As described below, this is selected by setting the yaim variable CREAMCE_CLUSTER_MODE=yes. The CREAM-CE can be on the same host as the glite-CLUSTER node or on a different one.
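
For reference, a minimal sketch of how this switch appears in site-info.def (the full example file used in this guide is shown further below):

CREAMCE_CLUSTER_MODE=no    # no cluster mode: publish a single cluster/subcluster
#CREAMCE_CLUSTER_MODE=yes  # cluster mode: requires a glite-CLUSTER node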

Installation of a CREAM CE node in no cluster mode

We select this mode because it is the easiest way to deploy a CREAM CE site. This section describes the configuration of a CREAM CE in no cluster mode using Torque as the batch system, with the CREAM CE not acting as the Torque server. The CREAM CE will also act as the APEL publisher and the site BDII.

  • Repositories
    • the EPEL repository
    • the EMI middleware repository
    • the CA repository
      [root@ce01 ~]# cat /etc/yum.repos.d/epel.repo 
      [epel]
      name=Extra Packages for Enterprise Linux 5 - $basearch
      #baseurl=http://download.fedoraproject.org/pub/epel/5/$basearch
      mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=epel-5&arch=$basearch
      failovermethod=priority
      enabled=1
      gpgcheck=1
      gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL
      
      [epel-debuginfo]
      name=Extra Packages for Enterprise Linux 5 - $basearch - Debug
      #baseurl=http://download.fedoraproject.org/pub/epel/5/$basearch/debug
      mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=epel-debug-5&arch=$basearch
      failovermethod=priority
      enabled=0
      gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL
      gpgcheck=1
      
      [epel-source]
      name=Extra Packages for Enterprise Linux 5 - $basearch - Source
      #baseurl=http://download.fedoraproject.org/pub/epel/5/SRPMS
      mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=epel-source-5&arch=$basearch
      failovermethod=priority
      enabled=0
      gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL
      gpgcheck=1
      
      [root@ce01 ~]# cat /etc/yum.repos.d/UMD-1-base.repo 
      [UMD-1-base]
      name=UMD 1 base (SL5)
      baseurl=http://repository.egi.eu/sw/production/umd/1/sl5/$basearch/base
      protect=1
      enabled=1
      # To use priorities you must have yum-priorities installed
      priority=45
      gpgcheck=1
      gpgkey=http://emisoft.web.cern.ch/emisoft/dist/EMI/1/RPM-GPG-KEY-emi http://repo-rpm.ige-project.eu/RPM-GPG-KEY-IGE
      [root@ce01 ~]# cat /etc/yum.repos.d/UMD-1-updates.repo 
      [UMD-1-updates]
      name=UMD 1 updates (SL5)
      baseurl=http://repository.egi.eu/sw/production/umd/1/sl5/$basearch/updates
      protect=1
      enabled=1
      # To use priorities you must have yum-priorities installed
      priority=40
      gpgcheck=1
      gpgkey=http://emisoft.web.cern.ch/emisoft/dist/EMI/1/RPM-GPG-KEY-emi http://repo-rpm.ige-project.eu/RPM-GPG-KEY-IGE
      
      [root@ce01 ~]# cat /etc/yum.repos.d/egi-trustanchors.repo 
      [EGI-trustanchors]
      name=EGI-trustanchors
      baseurl=http://repository.egi.eu/sw/production/cas/1/current/
      gpgkey=http://repository.egi.eu/sw/production/cas/1/GPG-KEY-EUGridPMA-RPM-3
      gpgcheck=1
      enabled=1
      
  • yum install
    [root@ce01 ~]# yum clean all
    [root@ce01 ~]# yum install yum-protectbase
    [root@ce01 ~]# yum install ca-policy-egi-core 
    [root@ce01 ~]# yum install xml-commons-apis 
    [root@ce01 ~]# yum install emi-cream-ce
    [root@ce01 ~]# yum install emi-torque-utils
    [root@ce01 ~]# yum install openldap2.4 openldap2.4-servers
    [root@ce01 ~]# yum install nfs-utils.x86_64
    
  • CREAM CE update
    [root@ce01 ~]# yum update
    
  • Torque

If for some reason you want to install a different version of Torque (to check the currently installed version, run rpm -qa | grep torque-client), you can execute these commands:

[root@ce01 ~]# tar xzvf torque-2.3.9.tar.gz
[root@ce01 ~]# cd torque-2.3.9
[root@ce01 ~]# ./configure --prefix=/usr
[root@ce01 ~]# make install
  • Install host certificate

Once you have obtained a valid certificate, you have to create hostcert.pem and hostkey.pem and place them in the /etc/grid-security directory. Then set the proper ownership and permissions:

[root@ce01 ~]# cd /etc/grid-security/
[root@ce01 ~]# openssl pkcs12 -nocerts -nodes -in ce02.p12 -out hostkey.pem
[root@ce01 ~]# openssl pkcs12 -clcerts -nokeys -in ce02.p12 -out hostcert.pem
[root@ce01 ~]# chown root.root hostcert.pem
[root@ce01 ~]# chown root.root hostkey.pem
[root@ce01 ~]# chmod 644 hostcert.pem
[root@ce01 ~]# chmod 400 hostkey.pem
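
To check that the extracted key and certificate are consistent and that the certificate is still valid, you can run standard openssl commands (a quick sanity check, not part of the installation itself):

[root@ce01 ~]# openssl x509 -in hostcert.pem -noout -subject -dates
[root@ce01 ~]# openssl rsa -in hostkey.pem -noout -check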
  • Create site-info.def file for YAIM
    [root@ce01 ~]# cat siteinfo/site_ce.def 
    MY_DOMAIN=macc.unican.es
    INSTALL_ROOT=/opt
    
    #CENTRAL SERVICES
    RB_HOST=rb-eela.ceta-ciemat.es 
    LB_HOST=rb-eela.ceta-ciemat.es 
    WMS_HOST=wms-eela.ceta-ciemat.es 
    MON_HOST=ce01.$MY_DOMAIN
    LFC_HOST=lfc01.lip.pt
    BDII_HOST=bdii.pic.es
    GSSKLOG=no
    
    PX_HOST=grid001.ct.infn.it  
    REG_HOST=nosirve
    
    WN_LIST=/root/siteinfo/wn-list.conf
    USERS_CONF=/root/siteinfo/users.conf
    GROUPS_CONF=/root/siteinfo/groups.conf
    SLAPD=/usr/sbin/slapd2.4
    YAIM_VERSION=4.0.3
    
    INSTALL_ROOT=/opt
    OUTPUT_STORAGE=/tmp/jobOutput
    JAVA_LOCATION=/usr
    CRON_DIR=/etc/cron.d
    GLOBUS_TCP_PORT_RANGE="20000,25000"
    
    #########################
    #CREAM CE
    ########################
    CEMON_HOST=ce01.$MY_DOMAIN
    CREAM_DB_USER=creamdb
    CREAM_DB_PASSWORD=cream1730
    CREAM_CE_STATE="Production"
    CREAMCE_CLUSTER_MODE=no
    CE_CAPABILITY="none"
    CE_OTHERDESCR="Cores=2"
    SE_MOUNT_INFO_LIST="se01.$MY_DOMAIN:/storage"  # same value as DPM_FILESYSTEMS, which is defined further below and so cannot be referenced here yet
    CONFIG_MAUI=no
    
    ########################
    
    APEL_MYSQL_HOST=$MON_HOST
    MYSQL_PASSWORD=mysql1730
    APEL_DB_PASSWORD="apel1730"
    APEL_PUBLISH_USER_DN=yes
    
    GRIDICE_SERVER_HOST=$MON_HOST
    GRIDICE_MON_WN=no
    GRIDICE_HIDE_USER_DN=no
    
    
    #########################################
    # Torque server configuration variables #
    #########################################
    BATCH_SERVER=encina
    JOB_MANAGER=lcgpbs
    CE_BATCH_SYS=torque
    BATCH_BIN_DIR=/usr/bin/
    BATCH_VERSION=torque-2.1.9
    BATCH_LOG_DIR=/var/spool/torque/
    CE_PHYSCPU=14 
    CE_LOGCPU=14 
    CE_OS_ARCH=x86_64
    
    
    ##############################
    # CE configuration variables #
    ##############################
    
    CE_HOST=ce01.$MY_DOMAIN
    CE_CPU_MODEL=PD   #PENTIUM D 930, 3.0GHZ/2X2
    CE_CPU_VENDOR=intel
    CE_CPU_SPEED=3000
    CE_OS="ScientificSL"
    CE_OS_RELEASE=5.5
    CE_OS_VERSION="SLC"
    CE_MINPHYSMEM=2048
    CE_MINVIRTMEM=4096 #/proc/meminfo
    CE_SMPSIZE=2
    CE_SI00=381
    CE_SF00=0
    CE_OUTBOUNDIP=TRUE
    CE_INBOUNDIP=TRUE
    CE_RUNTIMEENV="
        LCG-2
        LCG-2_1_0
        LCG-2_1_1
        LCG-2_2_0
        LCG-2_3_0
        LCG-2_3_1
        LCG-2_4_0
        LCG-2_5_0
        LCG-2_6_0
        LCG-2_7_0
        GLITE-3_0_0
        GLITE-3_1_0
        R-GMA
    "
    
    ###############################
    # DPM configuration variables #
    ###############################
    
    DPM_HOST="se01.$MY_DOMAIN"   # my-dpm.$MY_DOMAIN. DPM head node hostname
    DPMPOOL=permanent #the_dpm_pool_name
    DPM_FILESYSTEMS="$DPM_HOST:/storage"
    DPM_DB_USER=dpm-db-user
    DPM_DB_PASSWORD=dpm1730
    DPM_DB_HOST=$DPM_HOST
    DPM_INFO_USER=dpm-info-user
    DPM_INFO_PASS=dpm1730
    DPMFSIZE=200M
    
    ###########
    # SE_LIST #
    ###########
    SE_LIST="$DPM_HOST"
    SE_ARCH="multidisk" # "disk, tape, multidisk, other"
    
    
    ################################
    # BDII configuration variables #
    ################################
    SITE_BDII_HOST=ce01.$MY_DOMAIN
    SITE_DESC="University of Cantabria site"
    SITE_SECURITY_EMAIL="iglesiaa@gestion.unican.es"
    SITE_EMAIL=grid-prod@unican.es
    SITE_NAME=UNICAN
    SITE_LOC="Santander,SPAIN"
    SITE_LAT=43.5
    SITE_LONG=-3.8
    SITE_COUNTRY=Spain
    SITE_WEB="http://www.unican.es"
    SITE_SUPPORT_EMAIL=grid-prod@unican.es
    SITE_OTHER_GRID="EGEE|EELA"
    EGEE_ROC="SWE"
    BDII_REGIONS="BDII CE SE"    # list of the services provided by the site
    
    # If the node type is using BDII instead (all 3.1 nodes)
    # change the port to 2170 and mds-vo-name=resource
    BDII_BDII_URL="ldap://$SITE_BDII_HOST:2170/mds-vo-name=resource,o=grid"
    BDII_CE_URL="ldap://$CE_HOST:2170/mds-vo-name=resource,o=grid"
    BDII_SE_URL="ldap://$DPM_HOST:2170/mds-vo-name=resource,o=grid"
    BDII_DPM_URL="ldap://$DPM_HOST:2170/mds-vo-name=resource,o=grid"
    
    BDII_RESOURCE_TIMEOUT=30
    GIP_RESPONSE=30
    GIP_FRESHNESS=60
    GIP_CACHE_TTL=300
    GIP_TIMEOUT=150
    
    ##############################
    # VO configuration variables #
    ##############################
    
    VOS="esr ops dteam prod.vo.eu-eela.eu oper.vo.eu-eela.eu chem.vo.ibergrid.eu eng.vo.ibergrid.eu ict.vo.ibergrid.eu ops.vo.ibergrid.eu social.vo.ibergrid.eu earth.vo.ibergrid.eu iber.vo.ibergrid.eu life.vo.ibergrid.eu phys.vo.ibergrid.eu"
    
    QUEUES="grid"
    
    VO_SW_DIR=/opt/exp_soft
    EDG_WL_SCRATCH=""
    
    GRID_GROUP_ENABLE=$VOS
    
    #####
    #esr#
    #####
    VO_ESR_SW_DIR=$VO_SW_DIR/esr
    VO_ESR_DEFAULT_SE=$DPM_HOST
    VO_ESR_STORAGE_DIR=$CLASSIC_STORAGE_DIR/esr
    VO_ESR_VOMS_SERVERS="'vomss://voms.grid.sara.nl:8443/voms/esr?/esr/'"
    VO_ESR_VOMSES="'esr voms.grid.sara.nl 30001 /O=dutchgrid/O=hosts/OU=sara.nl/CN=voms.grid.sara.nl esr'"
    VO_ESR_VOMS_CA_DN="'/C=NL/O=NIKHEF/CN=NIKHEF medium-security certification auth'"
    
    #########
    # dteam #
    #########
    VO_DTEAM_SW_DIR=$VO_SW_DIR/dteam
    VO_DTEAM_DEFAULT_SE=$DPM_HOST
    VO_DTEAM_STORAGE_DIR=$CLASSIC_STORAGE_DIR/dteam
    VO_DTEAM_VOMS_SERVERS="'vomss://voms.cern.ch:8443/voms/dteam?/dteam/'"
    VO_DTEAM_VOMSES="'dteam lcg-voms.cern.ch 15004 /DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch dteam 24' 'dteam voms.cern.ch 15004 /DC=ch/DC=cern/OU=computers/CN=voms.cern.ch dteam 24'"
    VO_DTEAM_VOMS_CA_DN="'/DC=ch/DC=cern/CN=CERN Trusted Certification Authority' '/DC=ch/DC=cern/CN=CERN Trusted Certification Authority'"
    
    #######
    # ops #
    #######
    VO_OPS_SW_DIR=$VO_SW_DIR/ops
    VO_OPS_DEFAULT_SE=$DPM_HOST
    VO_OPS_STORAGE_DIR=$CLASSIC_STORAGE_DIR/ops
    VO_OPS_VOMS_SERVERS="vomss://voms.cern.ch:8443/voms/ops?/ops/"
    VO_OPS_VOMSES="'ops lcg-voms.cern.ch 15009 /DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch ops 24' 'ops voms.cern.ch 15009 /DC=ch/DC=cern/OU=computers/CN=voms.cern.ch ops 24'"
    VO_OPS_VOMS_CA_DN="'/DC=ch/DC=cern/CN=CERN Trusted Certification Authority' '/DC=ch/DC=cern/CN=CERN Trusted Certification Authority'"
    
    ######
    #EELA#
    ######
    #oper
    VO_OPER_VO_EU_EELA_EU_SW_DIR=$VO_SW_DIR/eelaoper
    VO_OPER_VO_EU_EELA_EU_DEFAULT_SE=$DPM_HOST
    VO_OPER_VO_EU_EELA_EU_STORAGE_DIR=$CLASSIC_STORAGE_DIR/eelaoper
    VO_OPER_VO_EU_EELA_EU_VOMS_SERVERS="'vomss://voms.eela.ufrj.br:8443/voms/oper.vo.eu-eela.eu?/oper.vo.eu-eela.eu'"
    VO_OPER_VO_EU_EELA_EU_VOMSES="'oper.vo.eu-eela.eu voms.eela.ufrj.br 15004 /C=BR/O=ICPEDU/O=UFF BrGrid CA/O=UFRJ/OU=IF/CN=host/voms.eela.ufrj.br oper.vo.eu-eela.eu' 'oper.vo.eu-eela.eu voms-eela.ceta-ciemat.es 15004 /DC=es/DC=irisgrid/O=ceta-ciemat/CN=host/voms-eela.ceta-ciemat.es oper.vo.eu-eela.eu'"
    VO_OPER_VO_EU_EELA_EU_VOMS_CA_DN="'/C=BR/O=ICPEDU/O=UFF BrGrid CA/CN=UFF Brazilian Grid Certification Authority' '/DC=es/DC=irisgrid/CN=IRISGridCA'"
    
    #prod
    VO_PROD_VO_EU_EELA_EU_SW_DIR=$VO_SW_DIR/eelaprod
    VO_PROD_VO_EU_EELA_EU_DEFAULT_SE=$DPM_HOST
    VO_PROD_VO_EU_EELA_EU_STORAGE_DIR=$CLASSIC_STORAGE_DIR/eelaprod
    VO_PROD_VO_EU_EELA_EU_VOMS_SERVERS="'vomss://voms.eela.ufrj.br:8443/voms/prod.vo.eu-eela.eu?/prod.vo.eu-eela.eu'"
    VO_PROD_VO_EU_EELA_EU_VOMSES="'prod.vo.eu-eela.eu voms.eela.ufrj.br 15003 /C=BR/O=ICPEDU/O=UFF BrGrid CA/O=UFRJ/OU=IF/CN=host/voms.eela.ufrj.br prod.vo.eu-eela.eu' 'prod.vo.eu-eela.eu voms-eela.ceta-ciemat.es 15003 /DC=es/DC=irisgrid/O=ceta-ciemat/CN=host/voms-eela.ceta-ciemat.es prod.vo.eu-eela.eu'"
    VO_PROD_VO_EU_EELA_EU_VOMS_CA_DN="'/C=BR/O=ICPEDU/O=UFF BrGrid CA/CN=UFF Brazilian Grid Certification Authority' '/DC=es/DC=irisgrid/CN=IRISGridCA'"
    
    ################
    # IBERGRID VOS #
    ################
    # ops.vo.ibergrid.eu
    VO_OPS_VO_IBERGRID_EU_SW_DIR=$VO_SW_DIR/test
    VO_OPS_VO_IBERGRID_EU_DEFAULT_SE=$DPM_HOST
    VO_OPS_VO_IBERGRID_EU_STORAGE_DIR=$CLASSIC_STORAGE_DIR/test
    VO_OPS_VO_IBERGRID_EU_VOMS_SERVERS="'vomss://voms01.ncg.ingrid.pt:8443/voms/ops.vo.ibergrid.eu?/ops.vo.ibergrid.eu'"
    VO_OPS_VO_IBERGRID_EU_VOMSES="'ops.vo.ibergrid.eu voms01.ncg.ingrid.pt 40001 /C=PT/O=LIPCA/O=LIP/OU=Lisboa/CN=voms01.ncg.ingrid.pt ops.vo.ibergrid.eu' 'ops.vo.ibergrid.eu ibergrid-voms.ifca.es 40001 /DC=es/DC=irisgrid/O=ifca/CN=host/ibergrid-voms.ifca.es ops.vo.ibergrid.eu'"
    VO_OPS_VO_IBERGRID_EU_VOMS_CA_DN="'/C=PT/O=LIPCA/CN=LIP Certification Authority' '/DC=es/DC=irisgrid/CN=IRISGridCA'"
    
    # iber.vo.ibergrid.eu
    VO_IBER_VO_IBERGRID_EU_SW_DIR=$VO_SW_DIR/test
    VO_IBER_VO_IBERGRID_EU_DEFAULT_SE=$DPM_HOST
    VO_IBER_VO_IBERGRID_EU_STORAGE_DIR=$CLASSIC_STORAGE_DIR/test
    VO_IBER_VO_IBERGRID_EU_VOMS_SERVERS="'vomss://voms01.ncg.ingrid.pt:8443/voms/iber.vo.ibergrid.eu?/iber.vo.ibergrid.eu'"
    VO_IBER_VO_IBERGRID_EU_VOMSES="'iber.vo.ibergrid.eu voms01.ncg.ingrid.pt 40003 /C=PT/O=LIPCA/O=LIP/OU=Lisboa/CN=voms01.ncg.ingrid.pt iber.vo.ibergrid.eu' 'iber.vo.ibergrid.eu ibergrid-voms.ifca.es 40003 /DC=es/DC=irisgrid/O=ifca/CN=host/ibergrid-voms.ifca.es iber.vo.ibergrid.eu'"
    VO_IBER_VO_IBERGRID_EU_VOMS_CA_DN="'/C=PT/O=LIPCA/CN=LIP Certification Authority' '/DC=es/DC=irisgrid/CN=IRISGridCA'"
    
    # eng.vo.ibergrid.eu
    VO_ENG_VO_IBERGRID_EU_SW_DIR=$VO_SW_DIR/test
    VO_ENG_VO_IBERGRID_EU_DEFAULT_SE=$DPM_HOST
    VO_ENG_VO_IBERGRID_EU_STORAGE_DIR=$CLASSIC_STORAGE_DIR/test
    VO_ENG_VO_IBERGRID_EU_VOMS_SERVERS="'vomss://voms01.ncg.ingrid.pt:8443/voms/eng.vo.ibergrid.eu?/eng.vo.ibergrid.eu'"
    VO_ENG_VO_IBERGRID_EU_VOMSES="'eng.vo.ibergrid.eu voms01.ncg.ingrid.pt 40013 /C=PT/O=LIPCA/O=LIP/OU=Lisboa/CN=voms01.ncg.ingrid.pt eng.vo.ibergrid.eu' 'eng.vo.ibergrid.eu ibergrid-voms.ifca.es 40013 /DC=es/DC=irisgrid/O=ifca/CN=host/ibergrid-voms.ifca.es eng.vo.ibergrid.eu'"
    VO_ENG_VO_IBERGRID_EU_VOMS_CA_DN="'/C=PT/O=LIPCA/CN=LIP Certification Authority' '/DC=es/DC=irisgrid/CN=IRISGridCA'"
    
    # ict.vo.ibergrid.eu
    VO_ICT_VO_IBERGRID_EU_SW_DIR=$VO_SW_DIR/test
    VO_ICT_VO_IBERGRID_EU_DEFAULT_SE=$DPM_HOST
    VO_ICT_VO_IBERGRID_EU_STORAGE_DIR=$CLASSIC_STORAGE_DIR/test
    VO_ICT_VO_IBERGRID_EU_VOMS_SERVERS="'vomss://voms01.ncg.ingrid.pt:8443/voms/ict.vo.ibergrid.eu?/ict.vo.ibergrid.eu'"
    VO_ICT_VO_IBERGRID_EU_VOMSES="'ict.vo.ibergrid.eu voms01.ncg.ingrid.pt 40008 /C=PT/O=LIPCA/O=LIP/OU=Lisboa/CN=voms01.ncg.ingrid.pt ict.vo.ibergrid.eu' 'ict.vo.ibergrid.eu ibergrid-voms.ifca.es 40008 /DC=es/DC=irisgrid/O=ifca/CN=host/ibergrid-voms.ifca.es ict.vo.ibergrid.eu'"
    VO_ICT_VO_IBERGRID_EU_VOMS_CA_DN="'/C=PT/O=LIPCA/CN=LIP Certification Authority' '/DC=es/DC=irisgrid/CN=IRISGridCA'"
    
    # life.vo.ibergrid.eu
    VO_LIFE_VO_IBERGRID_EU_SW_DIR=$VO_SW_DIR/test 
    VO_LIFE_VO_IBERGRID_EU_DEFAULT_SE=$DPM_HOST
    VO_LIFE_VO_IBERGRID_EU_STORAGE_DIR=$CLASSIC_STORAGE_DIR/test
    VO_LIFE_VO_IBERGRID_EU_VOMS_SERVERS="'vomss://voms01.ncg.ingrid.pt:8443/voms/life.vo.ibergrid.eu?/life.vo.ibergrid.eu'"
    VO_LIFE_VO_IBERGRID_EU_VOMSES="'life.vo.ibergrid.eu voms01.ncg.ingrid.pt 40010 /C=PT/O=LIPCA/O=LIP/OU=Lisboa/CN=voms01.ncg.ingrid.pt life.vo.ibergrid.eu' 'life.vo.ibergrid.eu ibergrid-voms.ifca.es 40010 /DC=es/DC=irisgrid/O=ifca/CN=host/ibergrid-voms.ifca.es life.vo.ibergrid.eu'"
    VO_LIFE_VO_IBERGRID_EU_VOMS_CA_DN="'/C=PT/O=LIPCA/CN=LIP Certification Authority' '/DC=es/DC=irisgrid/CN=IRISGridCA'"
    
    # earth.vo.ibergrid.eu
    VO_EARTH_VO_IBERGRID_EU_SW_DIR=$VO_SW_DIR/test
    VO_EARTH_VO_IBERGRID_EU_DEFAULT_SE=$DPM_HOST
    VO_EARTH_VO_IBERGRID_EU_STORAGE_DIR=$CLASSIC_STORAGE_DIR/test
    VO_EARTH_VO_IBERGRID_EU_VOMS_SERVERS="'vomss://voms01.ncg.ingrid.pt:8443/voms/earth.vo.ibergrid.eu?/earth.vo.ibergrid.eu'"
    VO_EARTH_VO_IBERGRID_EU_VOMSES="'earth.vo.ibergrid.eu voms01.ncg.ingrid.pt 40011 /C=PT/O=LIPCA/O=LIP/OU=Lisboa/CN=voms01.ncg.ingrid.pt earth.vo.ibergrid.eu' 'earth.vo.ibergrid.eu ibergrid-voms.ifca.es 40011 /DC=es/DC=irisgrid/O=ifca/CN=host/ibergrid-voms.ifca.es earth.vo.ibergrid.eu'"
    VO_EARTH_VO_IBERGRID_EU_VOMS_CA_DN="'/C=PT/O=LIPCA/CN=LIP Certification Authority' '/DC=es/DC=irisgrid/CN=IRISGridCA'"
    
    # phys.vo.ibergrid.eu
    VO_PHYS_VO_IBERGRID_EU_SW_DIR=$VO_SW_DIR/test
    VO_PHYS_VO_IBERGRID_EU_DEFAULT_SE=$DPM_HOST
    VO_PHYS_VO_IBERGRID_EU_STORAGE_DIR=$CLASSIC_STORAGE_DIR/test
    VO_PHYS_VO_IBERGRID_EU_VOMS_SERVERS="'vomss://voms01.ncg.ingrid.pt:8443/voms/phys.vo.ibergrid.eu?/phys.vo.ibergrid.eu'"
    VO_PHYS_VO_IBERGRID_EU_VOMSES="'phys.vo.ibergrid.eu voms01.ncg.ingrid.pt 40007 /C=PT/O=LIPCA/O=LIP/OU=Lisboa/CN=voms01.ncg.ingrid.pt phys.vo.ibergrid.eu' 'phys.vo.ibergrid.eu ibergrid-voms.ifca.es 40007 /DC=es/DC=irisgrid/O=ifca/CN=host/ibergrid-voms.ifca.es phys.vo.ibergrid.eu'"
    VO_PHYS_VO_IBERGRID_EU_VOMS_CA_DN="'/C=PT/O=LIPCA/CN=LIP Certification Authority' '/DC=es/DC=irisgrid/CN=IRISGridCA'"
    
    # social.vo.ibergrid.eu
    VO_SOCIAL_VO_IBERGRID_EU_SW_DIR=$VO_SW_DIR/test
    VO_SOCIAL_VO_IBERGRID_EU_DEFAULT_SE=$DPM_HOST
    VO_SOCIAL_VO_IBERGRID_EU_STORAGE_DIR=$CLASSIC_STORAGE_DIR/test
    VO_SOCIAL_VO_IBERGRID_EU_VOMS_SERVERS="'vomss://voms01.ncg.ingrid.pt:8443/voms/social.vo.ibergrid.eu?/social.vo.ibergrid.eu'"
    VO_SOCIAL_VO_IBERGRID_EU_VOMSES="'social.vo.ibergrid.eu voms01.ncg.ingrid.pt 40012 /C=PT/O=LIPCA/O=LIP/OU=Lisboa/CN=voms01.ncg.ingrid.pt social.vo.ibergrid.eu' 'social.vo.ibergrid.eu ibergrid-voms.ifca.es 40012 /DC=es/DC=irisgrid/O=ifca/CN=host/ibergrid-voms.ifca.es social.vo.ibergrid.eu'"
    VO_SOCIAL_VO_IBERGRID_EU_VOMS_CA_DN="'/C=PT/O=LIPCA/CN=LIP Certification Authority' '/DC=es/DC=irisgrid/CN=IRISGridCA'"
    
    # chem.vo.ibergrid.eu
    VO_CHEM_VO_IBERGRID_EU_SW_DIR=$VO_SW_DIR/test
    VO_CHEM_VO_IBERGRID_EU_DEFAULT_SE=$DPM_HOST
    VO_CHEM_VO_IBERGRID_EU_STORAGE_DIR=$CLASSIC_STORAGE_DIR/test
    VO_CHEM_VO_IBERGRID_EU_VOMS_SERVERS="'vomss://voms01.ncg.ingrid.pt:8443/voms/chem.vo.ibergrid.eu?/chem.vo.ibergrid.eu'"
    VO_CHEM_VO_IBERGRID_EU_VOMSES="'chem.vo.ibergrid.eu voms01.ncg.ingrid.pt 40009 /C=PT/O=LIPCA/O=LIP/OU=Lisboa/CN=voms01.ncg.ingrid.pt chem.vo.ibergrid.eu' 'chem.vo.ibergrid.eu ibergrid-voms.ifca.es 40009 /DC=es/DC=irisgrid/O=ifca/CN=host/ibergrid-voms.ifca.es chem.vo.ibergrid.eu'"
    VO_CHEM_VO_IBERGRID_EU_VOMS_CA_DN="'/C=PT/O=LIPCA/CN=LIP Certification Authority' '/DC=es/DC=irisgrid/CN=IRISGridCA'"
    
    
    #YAIM_LOGGING_LEVEL=WARNING
    YAIM_LOGGING_LEVEL=DEBUG
    
  • Users and groups configuration

Define pool accounts (users.conf) and groups (groups.conf) for the supported VOs.
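
The exact format is defined by yaim; below is a minimal sketch with hypothetical UIDs, GIDs and account names (the real files must cover every VO listed in VOS):

# users.conf: UID:LOGIN:GID[,GID...]:GROUP[,GROUP...]:VO:FLAG:
30001:ops001:3000:ops:ops::
30002:ops002:3000:ops:ops::

# groups.conf: "VOMS_FQAN":group:gid:flag:[VO]
"/ops/ROLE=lcgadmin":::sgm:
"/ops"::::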

  • WN list configuration

List the worker nodes in the file referenced by WN_LIST (wn-list.conf).
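
For example, one fully qualified worker node hostname per line (wn01 and wn02 are hypothetical names):

wn01.macc.unican.es
wn02.macc.unican.es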

  • Run yaim

After having filled in the site-info.def file, run yaim:

[root@ce01 ~]# /opt/glite/yaim/bin/yaim -c -s site-info.def -n creamCE -n TORQUE_utils -n glite-APEL -n site-BDII
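
To verify the configuration, you can query the resource BDII started by yaim (a quick sanity check; adjust the hostname to your CE):

[root@ce01 ~]# ldapsearch -x -h ce01.macc.unican.es -p 2170 -b "mds-vo-name=resource,o=grid" | head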
  • Sharing of the CREAM sandbox area between the CREAM CE and the WNs for Torque

When Torque is used as the batch system, the CREAM sandbox area has to be shared between the CREAM CE node and the WNs:

Mount the cream_sandbox directory on the WNs as well. Let's assume that on the CE node the CREAM sandbox directory is /var/cream_sandbox and that on the WNs it is mounted as /cream_sandbox. Then, on the WNs, add the following to the Torque client (pbs_mom) configuration file:

$usecp <CE node>:/var/cream_sandbox /cream_sandbox
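
A minimal sketch of how the directory can be shared with NFS, assuming the paths above (hostnames, export and mount options are only an example):

# on the CE node, in /etc/exports:
/var/cream_sandbox wn*.macc.unican.es(rw,sync,no_root_squash)

# on each WN, e.g. in /etc/fstab:
ce01.macc.unican.es:/var/cream_sandbox /cream_sandbox nfs defaults 0 0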

  • Sharing of the job accounting

The accounting service running on the CREAM CE will periodically check for new data in the directory /var/spool/torque/server_priv/accounting. If this directory does not exist on the CREAM CE, you need to export this directory from the batch system server to the compute element.
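
A minimal sketch with NFS, using the batch server name configured above (encina); the export can be read-only, since the accounting records are only parsed on the CE:

# on the Torque server (encina), in /etc/exports:
/var/spool/torque/server_priv/accounting ce01.macc.unican.es(ro,sync)

# on the CREAM CE, e.g. in /etc/fstab:
encina:/var/spool/torque/server_priv/accounting /var/spool/torque/server_priv/accounting nfs ro 0 0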

How to republish APEL information

If you have problems with Nagios tests such as:

  • org.apel.APEL-Pub
  • org.apel.APEL-Sync

It could be that APEL has not published information for XX days.

To update the information, you can follow the steps below:

  • Change <Logs searchSubDirs="yes" reprocess="no"> into <Logs searchSubDirs="yes" reprocess="yes"> in the /etc/glite-apel-pbs/parser-config-yaim.xml file
  • Change <Republish>missing</Republish> into <Republish>all</Republish> in the /etc/glite-apel-publisher/publisher-config-yaim.xml file
  • Run these scripts:
    $ env APEL_HOME=/ /usr/bin/apel-pbs-log-parser -f /etc/glite-apel-pbs/parser-config-yaim.xml >> /var/log/apel.log 2>&1
    $ env APEL_HOME=/ JAVA_HOME=/usr /usr/bin/apel-publisher -f /etc/glite-apel-publisher/publisher-config-yaim.xml >> /var/log/apel.log 2>&1
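  • Once the republishing has finished, it is advisable to revert the two changes above (back to reprocess="no" and <Republish>missing</Republish>), so that subsequent runs only parse and publish new records.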