Yes, it should be possible -- but we would rather modify the systel.cfg and then make sure runcode does things appropriately.
At the moment, runcode supports a systel.cfg that includes "parallel mpi hpc" as options. We are currently using the systel.cfg to launch TELEMAC on HPC queues such as BSUB, QSUB and PBS.
Let us know what you need (scripts, config files), and I am sure we could help.
For information, with BSUB:
[Configurations]
configs: dab.tile9
[dab.tile9]
#
root: /gpfs/ocf/ig_5895/shared/opentelemac/dab
version: v6p1
language: 2
modules: update system
options: parallel mpi hpc
#
mpi_hosts: mg01
mpi_cmdexec: /gpfs/packages/openmpi/1.4.4/gcc/bin/mpiexec -wdir <wdir> -n <ncsize> <exename>
#
# <jobname> and <email> need to be provided on the TELEMAC command line
# (they feed #BSUB -u <email> and #BSUB -N)
hpc_stdin: #!/bin/bash
#BSUB -n <ncsize>
#BSUB -J <jobname>
#BSUB -o <sortiefile>
#BSUB -e <exename>.%J.err
#BSUB -R "span[ptile=9]"
<mpi_cmdexec>
exit
#
hpc_cmdexec: chmod 755 <hpc_stdin>; bsub -q encore < <hpc_stdin>
#
cmd_obj: gfortran -c -O3 -ffixed-line-length-132 -fconvert=big-endian -frecord-marker=4 <mods> <incs> <f95name>
cmd_lib: ar cru <libname> <objs>
cmd_exe: mpif90 -fconvert=big-endian -frecord-marker=4 -v -lm -lz -o <exename> <objs> <libs>
#
mods_all: -I <config>
#
incs_parallel: -I /gpfs/packages/openmpi/1.4.4/gcc/include/
libs_parallel: /gpfs/ocf/ig_5895/shared/opentelemac/libs/libmetis.a
libs_all: /gpfs/packages/openmpi/1.4.4/gcc/lib/libmpi.so
#
sfx_zip: .gztar
sfx_lib: .lib
sfx_obj: .o
sfx_mod: .mod
sfx_exe:
#
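To make the mechanism explicit: runcode substitutes the <...> placeholders in the hpc_stdin template before handing the script to hpc_cmdexec for submission with bsub. A minimal sketch of that substitution -- the sed call and the values 36 / my_run are purely illustrative, not what runcode actually executes internally:

```shell
#!/bin/sh
# Illustration only: runcode fills in the <...> placeholders itself;
# this sed pipeline just mimics the effect on two template lines.
# The values 36 and my_run are made up for the example.
template='#BSUB -n <ncsize>
#BSUB -J <jobname>'
echo "$template" | sed -e 's/<ncsize>/36/' -e 's/<jobname>/my_run/'
```

The expanded script is then made executable and piped to bsub by the hpc_cmdexec line.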
Another example with QSUB:
[Configurations]
configs: eticsm
[eticsm]
#
root: /gpfs/rrcfd/public/apps/opentelemac/eticsm
version: v6p2
language: 2
modules: update system
options: parallel mpi hpc
#
mpi_cmdexec: mpiexec -wdir <wdir> -n <ncsize> <exename>
#
par_cmdexec: <config>/partel_prelim; python <root>/pytel/utils/partitioning.py
#
# <jobname> and <email> need to be provided on the TELEMAC command line #BSUB -u <email> \n #BSUB -N
hpc_stdin: #!/bin/bash
#$ -cwd # working directory is current directory
#$ -V # forward your current environment to the execution environment
#$ -pe mpi-12x1 96 # no of cores requested
#$ -S /bin/bash # shell it will be executed in
#$ -j y # merge stderr and stdout
cat $PE_HOSTFILE | awk '{print $1, " slots=9"}' > machinefile.$JOB_ID
cat machinefile.$JOB_ID
<mpi_cmdexec>
exit
#
hpc_cmdexec: chmod 755 <hpc_stdin>; qsub < <hpc_stdin>
#
cmd_obj: ifort -c -O3 -convert big_endian -132 <mods> <incs> <f95name>
cmd_lib: ar cru <libname> <objs>
cmd_exe: mpif90 -convert big_endian -lm -lz -o <exename> <objs> <libs>
#
mods_all: -I <config>
#
incs_parallel: -I /usr/mpi/intel/openmpi-1.4.2/include/
libs_parallel: /gpfs/rrcfd/public/apps/opentelemac/lib/libmetis.a
libs_all: /usr/mpi/intel/openmpi-1.4.2/lib64/libmpi.so
#
sfx_zip: .zip
sfx_lib: .lib
sfx_obj: .o
sfx_mod: .mod
sfx_exe:
#
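For reference, the awk one-liner in the hpc_stdin above builds an Open MPI machinefile from $PE_HOSTFILE: it keeps the first field of each line (the host name) and appends a fixed "slots=9" count, matching the 9 slots per host used in this setup. A quick sketch with a made-up hostfile line (the host and queue names are hypothetical):

```shell
#!/bin/sh
# Simulate one $PE_HOSTFILE line: "hostname nslots queue processor-range".
# awk prints only the host name plus the fixed slot count; the comma in
# print inserts the output field separator (a space) before " slots=9".
printf 'mg01 12 all.q@mg01 UNDEFINED\n' | awk '{print $1, " slots=9"}'
```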
The key entries are options, hpc_stdin and hpc_cmdexec.
Hope this helps.
Sébastien.