
TOPIC: telemac3d with dwater - problems

telemac3d with dwater - problems - extra info 8 years 1 month ago #24037

  • j_floyd
So after 2-3 months of setting up telemac and querying the method of interfacing dwater, I am being told bad luck, it may not work.

So I just throw all this work in the bin and find something that works. Not happy.

The comment about a discharge during the summation period means that the approach has not been implemented properly. It's the same as running a tidal simulation with 3-hour outputs: you don't see a tide. Adjust the method to the problem.

However, the claim through the documentation (or the lack of it) that a dwater interface exists implies that it works.

Not sure where to go from here within the timeframe we have for this job.

telemac3d with dwater - large difference between single and multi proc 8 years 1 month ago #24039

  • j_floyd
I have possibly been barking up the wrong tree.

I have gone back to my particular modelling job, which has been running across MPI processes.

I tried a run with the extra printouts from the tel4del function that I had created, using the 'simplified' examples run up to now.

This gave very large mass errors at each time step at a particular node, up to 10% of the node volume. Out of interest, I ran the very short simulation (2 hours) on a single processor and the mass errors returned to the expected 1e-5. No other changes had been made to the code or the cas file (except commenting out the PARALLEL=xx).

This suggests a possible error in the segment flux calculations under MPI - perhaps an inter-processor transfer that has not happened. I expect segment fluxes can lie across processor boundaries.

How can I easily determine whether a node or segment flux occurs on an edge between processors?
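
One idea I have been toying with is to flag the shared nodes directly. This is a sketch only, for a user routine where MESH, NPOIN, NCSIZE, LU and the T1 work array are available, and it assumes my reading of the BIEF sources is right: that PARCOM with ICOM=2 sums a value at interface points over all subdomains sharing them, and that MESH%KNOLG%I(I) gives the global number of local node I.

      ! Sketch: count how many subdomains contain each local node.
      ! T1 is an NPOIN-sized BIEF_OBJ work array.
      CALL OS('X=C     ', X=T1, C=1.D0)
      IF(NCSIZE.GT.1) CALL PARCOM(T1, 2, MESH)
      ! After the interface sum, T1%R(I) is the number of subdomains
      ! holding node I: shared (interface) nodes have T1%R(I) > 1.
      DO I = 1, NPOIN
        IF(T1%R(I).GT.1.5D0) THEN
          WRITE(LU,*) 'SHARED NODE: LOCAL ',I,' GLOBAL ',MESH%KNOLG%I(I)
        ENDIF
      ENDDO

A segment would then be suspect if both of its end nodes are flagged as shared.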

Thank you

telemac3d with dwater - large difference between single and multi proc 8 years 1 month ago #24040

  • c.coulet
Hi,
To investigate this kind of problem in more detail, you could find, in the parallel run, the local node number which corresponds to the global node number you found.
In the temporary directory, you will find the partitioned geometry files and can then check the position of this node...
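
For example, something like this in a user routine could print where the node ends up (just a sketch; it assumes the usual MESH%KNOLG%I local-to-global numbering array and the IPID partition number are available there, and NODEG is a placeholder name for the global node you found):

      ! Sketch: report which partition(s) contain a given global node.
      INTEGER I, NODEG
      NODEG = 100   ! placeholder: put the global node number you found here
      DO I = 1, NPOIN
        IF(MESH%KNOLG%I(I).EQ.NODEG) THEN
          WRITE(LU,*) 'PARTITION ',IPID,': GLOBAL NODE ',NODEG,' IS LOCAL NODE ',I
        ENDIF
      ENDDO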

Hope this helps

Regards

PS: this is somewhat strange, as the parallel and sequential runs are compared in some test cases and, even though reproducibility is not fully ensured, the differences are limited to truncation error...
Christophe

telemac3d with dwater - large difference between single and multi proc 8 years 1 month ago #24047

  • j_floyd
But are any of those tests exercising the dwater interface?

From the code, the inter-cell fluxes are specially calculated for dwater.

Cheers

telemac3d with dwater - large difference between single and multi proc 8 years 1 month ago #24048

  • j_floyd
On closer inspection, it appears that gredelseg actually sums values from across processors.

I am not really sure how this works: for one global node I looked at, one processor sees 6 segments while another sees 8, and the vertical velocity is calculated to maintain continuity. I can see that these calculations are all additive, but shouldn't every node see the same number of segments, regardless of how many processors contain it? Presumably a segment's contribution may be only a partial one if it sits on a processor boundary - hence the local processor mass balance will be incomplete, as indicated in my last post.
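
If that is what happens, then any per-node total assembled from segment fluxes is only a partial sum on interface nodes, and would need an interface sum before a local mass balance can close. Something like this sketch (FLUXN is a hypothetical work-array name of mine, not from gredelseg; again I am assuming PARCOM with ICOM=2 performs the interface-point sum):

      ! Sketch: FLUXN%R(I) holds the sum of segment-flux contributions
      ! to node I assembled on this subdomain only.
      IF(NCSIZE.GT.1) CALL PARCOM(FLUXN, 2, MESH)
      ! Only after this call does FLUXN%R(I) hold the complete sum over
      ! all subdomains, so a node-wise mass balance can close again.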

Food for thought