Hey,
I have noticed a similar problem with v7p0 on the HPC cluster at our university.
I've been running TELEMAC-2D simulations on 1 node with 20 cores (so 20 parallel processors) using v6p3 and v7p0.
With v6p3 everything runs smoothly. However, when I run exactly the same simulation on v7p0, I get strange results. The simulation itself seems to run fine: the listing file does not show any unexpected messages and ends just like the v6p3 one does:
END OF TIME LOOP
EXITING MPI
CORRECT END OF RUN
ELAPSE TIME : 3 HOURS
54 MINUTES
18 SECONDS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
... merging separated result files
+> cas_jul2012
recollectioning: T2DRES
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
... handling result files
+> cas_jul2012
moving: RES_SC_EMB1000
My work is done
So far nothing seems wrong. But when I open the result file in BlueKenue, the results appear corrupted. The first time steps are still fine, but at a certain point the merging of the per-processor result files seems to have gone wrong, giving random patterns, zeros, or extremely high (essentially infinite) values for the water surface elevation (see the attachment for an example). This has happened for several simulations on our cluster.
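In case it helps anyone pin down at which time step the merge goes wrong, below is a rough Python sketch I use to scan a merged SELAFIN result file and flag time steps containing NaN, infinite, or absurdly large values. It is only a sketch: it assumes a single-precision, big-endian SELAFIN file (the usual case, but not guaranteed on every system), the script name and the 1.0e6 threshold are arbitrary choices of mine, and it is not part of the TELEMAC distribution.

```python
#!/usr/bin/env python
# selafin_scan.py -- rough sketch: scan a single-precision, big-endian
# SELAFIN result file and report time steps with suspect values
# (NaN, inf, or magnitudes above a threshold). Not an official TELEMAC tool.
import struct
import sys
import numpy as np

def read_record(f):
    """Read one Fortran sequential record (4-byte length markers around the payload)."""
    head = f.read(4)
    if len(head) < 4:
        return None                       # end of file
    n = struct.unpack('>i', head)[0]
    payload = f.read(n)
    f.read(4)                             # trailing length marker
    return payload

def scan(path, threshold=1.0e6):          # threshold is an arbitrary guess
    with open(path, 'rb') as f:
        read_record(f)                                    # title (80 chars)
        nbv1, nbv2 = struct.unpack('>2i', read_record(f))
        names = [read_record(f)[:16].decode('ascii', 'replace').strip()
                 for _ in range(nbv1 + nbv2)]             # variable names
        iparam = struct.unpack('>10i', read_record(f))
        if iparam[9] == 1:
            read_record(f)                                # date/time record
        nelem, npoin, ndp, _ = struct.unpack('>4i', read_record(f))
        for _ in range(4):                                # IKLE, IPOBO, X, Y
            read_record(f)
        step = 0
        while True:
            rec = read_record(f)
            if rec is None:
                break
            t = struct.unpack('>f', rec)[0]               # time of this step
            for name in names:
                vals = np.frombuffer(read_record(f), dtype='>f4')
                bad = ~np.isfinite(vals) | (np.abs(vals) > threshold)
                if bad.any():
                    print('step %d (t=%.1f s): %s has %d suspect values'
                          % (step, t, name, bad.sum()))
            step += 1

if __name__ == '__main__':
    scan(sys.argv[1])
```

Running it as `python selafin_scan.py cas_jul2012_result.slf` (file name just an example) should list the first time step where the merged file starts to contain garbage, which makes it easier to compare runs.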
The strange thing is that when I run the same simulation with the same TELEMAC-2D configuration on the same cluster using only 12 processors (instead of the available 20 per node), everything runs fine again.
As I'm no expert, I'm not sure whether this is related to the problems Yannick described above, but I thought it was worth mentioning.
Regards,
Jeroen