Dear all,
A historic remark (I hope it is understandable): once (before 2008, and in the framework of another code, UnTRIM) we had memory leaks in the parallelised characteristics, precisely in the engine that collects the tracebacks lost from a mesh partition and sends them to the partition where they have gone, as well as in the final stage where they are sent back to the partition they originate from.
It was due to the repeated allocation and deallocation of the buffers (of variable length from time step to time step) in which they were collected and - the real cause of the problems - the fact that -some- compilers (before and in 2008) always took fresh memory for this purpose and did not re-use the just deallocated memory...
Therefore, in the re-implementation of the algorithm from UnTRIM into Telemac (module streamline.f), the relevant fields are allocated once and for all at the -first- call, to a fixed length, and kept with the SAVE attribute (see HEAPCHAR, SENDCHAR and RECVCHAR). Since then there have been no such problems.
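To illustrate the idea, here is a minimal sketch (the subroutine and variable names are made up for the example, it is not the actual streamline.f code): the buffers are allocated to a fixed maximum length only at the first call and kept with SAVE, so no allocation or deallocation happens inside the time loop.

      SUBROUTINE COLLECT_LOST(NLOSTMAX)
!     Sketch of the allocate-once pattern: the buffers are allocated
!     to a fixed maximum length at the first call and kept (SAVE),
!     so all later calls re-use them instead of reallocating.
      IMPLICIT NONE
      INTEGER, INTENT(IN) :: NLOSTMAX
      DOUBLE PRECISION, ALLOCATABLE, SAVE :: SENDBUF(:), RECVBUF(:)
      LOGICAL, SAVE :: FIRST = .TRUE.
      IF (FIRST) THEN
        ALLOCATE(SENDBUF(NLOSTMAX), RECVBUF(NLOSTMAX))
        FIRST = .FALSE.
      ENDIF
!     ... fill SENDBUF only up to the current (variable) number of
!     lost tracebacks and exchange; the allocation never changes ...
      END SUBROUTINE COLLECT_LOST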
Therefore I would support Jean-Michel in his suspicion: unless you use a Fortran compiler that manages memory in a wasteful way, or unless someone between 2008 and now has built in something there that constantly reallocates memory, a possible source is your implementation of MPI.
The first thing I would suggest checking is the behaviour of MPI_ALLTOALL and -especially- MPI_ALLTOALLV. They are used only in the parallel version of the characteristics and are heavy-duty MPI routines in which all ranks exchange data with all other ranks in one stage. For their usage in bief's streamline.f, look into SUBROUTINE GLOB_CHAR_COMM.
Please note that in MPI_ALLTOALLV, arrays of objects (records) of the type CHARAC_TYPE are exchanged, which adds another layer of complexity. In all calls the (variable) lengths of the buffers are passed, although the Fortran buffers themselves are allocated to a fixed length. If MPI, or the compiler it was compiled with, wastes memory on -other- internal buffers being constantly re-allocated, then... you know what I mean.
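One way to check this in isolation, independently of Telemac, would be a small test program of the following kind (a sketch only, with made-up sizes and iteration counts, and plain DOUBLE PRECISION data instead of the CHARAC_TYPE records, to keep it short): allocate fixed-length buffers once, then call MPI_ALLTOALLV many times with varying counts and watch the resident memory of the ranks (e.g. with top). If the memory grows from call to call, the suspect is the MPI library, not streamline.f.

      PROGRAM TEST_ALLTOALLV
!     Sketch: fixed-length buffers allocated once, variable counts
!     per call, repeated MPI_ALLTOALLV. Watch the memory of the
!     ranks while it runs.
      IMPLICIT NONE
      INCLUDE 'mpif.h'
      INTEGER, PARAMETER :: NMAX = 100000
      INTEGER :: IERR, NPROC, RANK, IT, I, N
      INTEGER, ALLOCATABLE :: SCNT(:), RCNT(:), SDSP(:), RDSP(:)
      DOUBLE PRECISION, ALLOCATABLE :: SBUF(:), RBUF(:)
      CALL MPI_INIT(IERR)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, NPROC, IERR)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, RANK, IERR)
!     Allocated once and for all, to the fixed maximum length
      ALLOCATE(SCNT(NPROC), RCNT(NPROC), SDSP(NPROC), RDSP(NPROC))
      ALLOCATE(SBUF(NMAX*NPROC), RBUF(NMAX*NPROC))
      SBUF = 1.D0
      DO IT = 1, 10000
!       The counts vary from "time step" to "time step",
!       the buffers themselves never change
        N = 1 + MOD(IT, NMAX)
        DO I = 1, NPROC
          SCNT(I) = N
          RCNT(I) = N
          SDSP(I) = (I-1)*NMAX
          RDSP(I) = (I-1)*NMAX
        ENDDO
        CALL MPI_ALLTOALLV(SBUF, SCNT, SDSP, MPI_DOUBLE_PRECISION,
     &                     RBUF, RCNT, RDSP, MPI_DOUBLE_PRECISION,
     &                     MPI_COMM_WORLD, IERR)
      ENDDO
      CALL MPI_FINALIZE(IERR)
      END PROGRAM TEST_ALLTOALLV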
It might also be worth checking whether the memory for MPI's internal buffers is large enough; for this, please have a look into your MPI manual (something like MPI_BUFFER_MAX or similar).
Best regards,
jaj