Hello,
I wrote a very simple Lagrangian transport module directly inside the subroutine in telemac3d.f (I intend to move it to a separate subroutine later).
The code works perfectly in scalar (serial) mode.
To make the code work in parallel I used the function p_dsum.f (from the parallel folder), which calls the MPI_ALLREDUCE function.
The reason I use this function is the following: if the Lagrangian particle lies within a core's sub-domain, that core computes the particle's properties; every other core sets those properties to zero. Summing the zeros with the one actual value should therefore, in principle, give every core the actual value of the property.
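In pseudo-Fortran, the pattern looks roughly like this (INSUB, PROP, XPART and XGLOB are placeholder names, not my actual variables):

!     SKETCH OF THE REDUCTION PATTERN (PLACEHOLDER NAMES)
      DOUBLE PRECISION XPART, XGLOB, P_DSUM
      EXTERNAL P_DSUM
!
      IF(INSUB) THEN
!       THIS CORE'S SUB-DOMAIN CONTAINS THE PARTICLE
        XPART = PROP
      ELSE
!       EVERY OTHER CORE CONTRIBUTES ZERO
        XPART = 0.D0
      ENDIF
!     P_DSUM CALLS MPI_ALLREDUCE, WHICH IS COLLECTIVE,
!     SO EVERY CORE MUST REACH THIS LINE
      XGLOB = P_DSUM(XPART)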
My problem is: somewhere in the code, when the program calls p_dsum, it neither advances nor stops, as if it had entered an infinite loop inside MPI_ALLREDUCE (I know it hangs in this function because I added a few PRINT* statements to track where the execution is).
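One thing I wonder about (but am not sure of) is whether the hang could come from a collective-call mismatch: as I understand it, MPI_ALLREDUCE only returns once every processor has called it, so if my call sits inside an IF or a loop whose trip count differs between cores, the cores that skip the call would leave the others waiting forever. A hypothetical sketch of what I mean (NPART_LOC is a made-up name, not my actual variable):

!     HYPOTHETICAL DEADLOCK PATTERN, NOT MY ACTUAL CODE
      DO IPART = 1, NPART_LOC   ! NPART_LOC MAY DIFFER PER CORE
!       EACH CORE THEN CALLS THE COLLECTIVE A DIFFERENT NUMBER OF
!       TIMES, SO SOME CORES WAIT IN MPI_ALLREDUCE FOREVER
        XGLOB = P_DSUM(XPART)
      ENDDO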
My questions are:
1 - Why is this happening?
2 - Is it correct to do what I am doing? If not, what is the correct way to parallelize my code?
I am attaching a working example of my code.
(In the .zip file there are two .m files, fheadr.m and fstepr.m. They work exactly like the telheadr.m and telstepr.m functions, but are used to read the .f3d file that contains the results of the Lagrangian transport module.)
If I missed any information that may be helpful to solve my problem, please ask.
Regards
Phelype