Hi,
The procedure is quite similar to the one described in the installation note. However, experience with clusters and job managers is more than welcome here for running a case.
Compiling TELEMAC:
The cluster administrator must indicate which {compiler, MPI} combinations are available on the cluster. TELEMAC accepts MPICH2, Intel MPI and OpenMPI; I have never tried MVAPICH;
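For example, if your cluster uses environment modules (this is an assumption, every site has its own module names), the available pairs can be checked and loaded like this:
module avail                    # list the compilers and MPI libraries installed on the cluster
module load intel openmpi       # hypothetical module names, ask your administrator for the real ones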
You must address the MPI targets in the systel.ini file, according to your system. The DIRLIB I used on our cluster (Ivanoe) is available online (search for ivanoe). On this cluster, we use mpiexec.hydra instead of mpirun, so the launching sequence is a bit different.
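As a rough illustration only (the hostfile name, the executable name and the processor count are placeholders), the launch line changes like this:
mpirun -np 16 ./telemac_executable                      # classical mpirun launch
mpiexec.hydra -n 16 -f hostfile ./telemac_executable    # launch through the hydra process manager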
I would advise you to export the TELEMAC variables in a source file (say source_telemacv6p1_intel11.sh):
export PATH=/PATH/TO/TELEMAC/V6P1/bin:.:$PATH
export SYSTELCFG=/PATH/TO/TELEMAC/V6P1/config
Source this environment with
"source source_telemacv6p1_intel11.sh". It's similar to setting variables in a .bat file on Windows but can be very useful when sharing a version between multiple users.
InfiniBand is often selected automatically when using more than one node;
Running TELEMAC:
Then here comes the main difference: instead of filling in the mpi_telemac.conf file manually, you must get the node list through the job manager. Depending on your case, you must specify the number of nodes, memory, walltime... This step depends on the job manager: Torque, Maui, Slurm/sbatch...
When using a node interactively, you can fill in the mpi_telemac.conf file manually;
Otherwise, the job manager answers with a node list. However, TELEMAC expects a node list file named mpi_telemac.conf, so a little script is needed at this step (see the sketch below).
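As an illustration only, here is the kind of Slurm wrapper I have in mind; the #SBATCH values, the launching command at the end and, above all, the exact layout expected in mpi_telemac.conf are assumptions to check against your own installation (with Torque you would read $PBS_NODEFILE instead of the Slurm variables):
#!/bin/bash
#SBATCH --job-name=telemac2d
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --time=02:00:00

# Load the shared TELEMAC environment described above
source /PATH/TO/source_telemacv6p1_intel11.sh

# Build mpi_telemac.conf from the node list given by the job manager.
# Assumed layout: total number of processes, then one "hostname processes" line per node.
echo $SLURM_NTASKS > mpi_telemac.conf
for host in $(scontrol show hostnames $SLURM_JOB_NODELIST); do
    echo "$host $SLURM_NTASKS_PER_NODE" >> mpi_telemac.conf
done

# Launch the case as usual; TELEMAC picks up mpi_telemac.conf for the parallel run
telemac2d cas_file.cas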
When using OpenMPI, the PATH to this library should be added in the user profile (e.g. .bashrc, .profile...). This must be done when orterun or mpirun is not recognized when running a case.
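For example, something like this in the user's .bashrc (the OpenMPI installation path is of course site-specific):
export PATH=/PATH/TO/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/PATH/TO/openmpi/lib:$LD_LIBRARY_PATH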
Well, I hope this will somehow help...
Best regards,
Fabien Decung