Hello,
Indeed, GRETEL is the program that re-assembles the results once the simulation is completed. Here is a little more information on the procedure ...
The running script (in Python in your case: telemac3d.py, which calls runcode.py) acts only as a manager for the whole procedure. It creates a temporary directory, copies the input files into it, calls PARTEL (to split the domain if running in parallel), calls the executable of the TELEMAC module, calls GRETEL (to re-assemble the results if running in parallel), copies the output files back and finally deletes the temporary directory. In scalar mode, the call to the TELEMAC executable is just that. In parallel mode, that call is in fact a call to your MPI command/service (mpiexec ...), which itself includes the name of the TELEMAC executable.
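To make the sequence concrete, here is a rough Python sketch of that manager logic. It is not the actual code of runcode.py, and the file and executable names (out_telemac3d, partel.par, T3DRES, ...) are only illustrative:

import os
import shutil
import subprocess
import tempfile

def run_case(cas_file, ncsize=1):
    results_dir = os.path.dirname(os.path.abspath(cas_file))
    tmp_dir = tempfile.mkdtemp(prefix='t3d_')              # create the temporary directory
    shutil.copy(cas_file, tmp_dir)                         # copy the input files (CAS, GEO, CLI, ...)
    os.chdir(tmp_dir)
    if ncsize > 1:
        # PARTEL and GRETEL both read a short parameter file on standard input
        with open('partel.par') as par:
            subprocess.run(['partel'], stdin=par, check=True)        # split the domain
        subprocess.run(['mpiexec', '-n', str(ncsize), './out_telemac3d'], check=True)
        with open('gretel.par') as par:
            subprocess.run(['gretel_autop'], stdin=par, check=True)  # re-assemble the results
    else:
        subprocess.run(['./out_telemac3d'], check=True)    # scalar mode: call the executable directly
    shutil.copy('T3DRES', results_dir)                     # copy the output files back
    os.chdir(results_dir)
    shutil.rmtree(tmp_dir)                                 # delete the temporary directory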
This is why you will not find TELEMAC error messages in the Python running scripts, and also why you will not find MPI error messages in either TELEMAC or the running scripts.
The problem you have seems to be related to your installation of MPI (OpenMPI in your case) -- it is possible that it is processor dependent. Is it possible that the versions differ? Alternatively, can you set your gfortran optimisation to no higher than -O3?
It seems your simulation fails before it finishes, and therefore before it ever reaches GRETEL. (I am not 100% sure myself why the -lz flag has to be removed on some computers.)
Please note that every (binary) file in the temporary directory is also a SELAFIN file (T2DRES, T2DHYD, etc.). You can therefore look at each of them individually.
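If you want a quick look at one of them without a GUI, a minimal Python reader for the SELAFIN header might look like this (assuming the usual big-endian encoding; it only prints the title and the variable names):

import struct

def read_record(f):
    # A Fortran sequential record: 4-byte length, payload, 4-byte length again
    size = struct.unpack('>i', f.read(4))[0]
    payload = f.read(size)
    f.read(4)
    return payload

with open('T2DRES', 'rb') as f:
    title = read_record(f).decode('ascii', errors='replace').strip()
    nbv1, nbv2 = struct.unpack('>2i', read_record(f))
    names = [read_record(f).decode('ascii', errors='replace').strip()
             for _ in range(nbv1 + nbv2)]
    print(title)
    print(names)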
Finally, here is how to run GRETEL by hand (on Linux, from within your temporary directory). You can find this in the runGRETEL function of runcode.py:
gretel_autop < gretel.par
where gretel.par is a local file containing 3 lines (1: the name of the file to be recombined; 2: the name of the global GEO file, i.e. T2DGEO; 3: the number of splits), for instance (a scripted equivalent is sketched after the example):
T2DRES
T2DGEO
2
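If you prefer to script this step rather than type gretel.par by hand, a small Python wrapper around the same command could look like this. It simply reproduces the gretel_autop < gretel.par call above and should be run from the temporary directory:

import subprocess

# Write the three-line parameter file that GRETEL reads on standard input
with open('gretel.par', 'w') as par:
    par.write('T2DRES\n')   # 1: the file to be recombined
    par.write('T2DGEO\n')   # 2: the global GEO file
    par.write('2\n')        # 3: the number of splits

# Feed it to gretel_autop, exactly as the manual command above does
with open('gretel.par') as par:
    subprocess.run(['gretel_autop'], stdin=par, check=True)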
Hope this helps.
Sébastien.