
TOPIC: Tuning of head loss coefficients

Tuning of head loss coefficients 3 years 11 months ago #37253

  • erwanC
  • OFFLINE
  • Fresh Boarder
  • Posts: 13
Hi all,

Thanks for the quality of answers here !

I face the same problem as @Binguo:
- I made the modifications from the previous posts (in the steering file, plus in a telemac2d.F copied into a user_fortran directory and modified there).
- TELEMAC v8p1r2 runs without problem in scalar mode and in parallel mode with only 1 processor.
- BUT there is a problem in parallel mode with 2 or more processors... The run completes for each sub-domain, but fails to merge them...

Does anybody have an idea to solve this ?

Please find attached the end of the log
OHerror.jpg


Thanks a lot
The administrator has disabled public write access.

Tuning of head loss coefficients 3 years 11 months ago #37255

  • erwanC
  • OFFLINE
  • Fresh Boarder
  • Posts: 13
Hi all,
First of all, thanks for the quality of your support.

I'm in the same situation as @Christof and @Binguo: I'm trying to extract a culvert hydrograph under telemac2d v8p1r2.

I did the following (as in the previous posts):
- modified the steering file to add a private variable;
- copied telemac2d.F next to the model (in a local user_fortran directory declared in the steering file) and modified it.

As a result:
- in scalar mode and in parallel mode with only 1 processor, the model runs correctly (and the culvert discharge is written);
- in parallel mode with multiple processors, the model runs correctly BUT, after the MPI tasks, it fails to merge the different sub-domain results...
- if I then force the merge, it works and the data is correctly written... I used "runcode.py --merge -w C:\modeldirectory\cas.txt_2020-12-02-11h53min41s telemac2d C:\modeldirectory\cas.txt_2020-12-02-11h53min41s\T2DCAS".

Does anyone have any hint?
Thanks

Please find the log attached

OHerror_2020-12-02.jpg

Tuning of head loss coefficients 3 years 11 months ago #37256

  • c.coulet
  • OFFLINE
  • Moderator
  • Posts: 3722
  • Thank you received: 1031
Hi
When the merge fails, you can have a look into the gretel.log file to see where the problem is...
Christophe

Tuning of head loss coefficients 3 years 11 months ago #37258

  • erwanC
  • OFFLINE
  • Fresh Boarder
  • Posts: 13
Hi,
Thank you
Where can I see this log?

- In the main folder, I only have the *.sortie file (e.g. cas.txt_2020-12-02-11h14min07s.sortie) from the -s option, which is exactly the log above.
- In the working folder, among all the computed files, there are partel (PARTEL/PARRES) files, which seem fine.

(and sorry for the double post)

Tuning of head loss coefficients 3 years 11 months ago #37259

  • c.coulet
  • OFFLINE
  • Moderator
  • Posts: 3722
  • Thank you received: 1031
It's in the working folder;
you should find the gretel files there.
Christophe

Tuning of head loss coefficients 3 years 11 months ago #37325

  • erwanC
  • OFFLINE
  • Fresh Boarder
  • Posts: 13
Hello Mr. Coulet,
I investigated further and found that the error seems to be triggered before the step where the gretel file is written,
so I don't have it.

Running the "runcode.py --merge -w" command afterwards works (the gretel file is created during that process, along with the selafin .res file).

So I can work like that, but I would like to be sure everything is fine, and I cannot understand the origin of this error..
Find attached a list of all the files in my workdir (for 2 processors) at the time of the error.

Thanks for your help

Tuning of head loss coefficients 3 years 11 months ago #37326

  • c.coulet
  • c.coulet's Avatar
  • OFFLINE
  • Moderator
  • Posts: 3722
  • Thank you received: 1031
Hi
this means there is a problem at the very end of the run, which is not fully completed. The results could be OK, but the program crashes somewhere...
This could be linked to your user modifications, as it seems you use some, or to another problem which is impossible to find without the model...
You could test whether everything goes well with one of the numerous TELEMAC test cases.
Regards
Christophe

Tuning of head loss coefficients 3 years 11 months ago #37333

  • erwanC
  • OFFLINE
  • Fresh Boarder
  • Posts: 13
Hi,
I tried with the bridge example, and also tried to run my model on a clean Telemac2D installation on another computer.
I face the same error:
- everything works perfectly without the user_fortran (no culvert discharge), in scalar or parallel mode;
- everything works perfectly with the user_fortran (the culvert discharge is stored in the .res file) in scalar mode, or in parallel mode with only 1 processor;
- the same error occurs with the user_fortran in parallel mode with 2 or more processors.

The error occurs after:
"END OF TIME LOOP
EXITING MPI"

And before (in a normal run):
" *************************************
* END OF MEMORY ORGANIZATION: *
*************************************
CORRECT END OF RUN
ELAPSE TIME :
2 MINUTES
43 SECONDS
... merging separated result files
... handling result files
moving: r2d_bridge.slf
copying: casparallfortran.cas_2020-12-09-15h56min12s.sortie
... deleting working dir

My work is done"


May I ask you (or someone else) to look at my steering file and the user_fortran folder (with only the modified telemac2d.F)? The modification is at line 1134 (I simply added lines).
Please find them attached.

Do you have an idea whether the user_fortran modification became incorrect under v8p1r2?

Regards
Erwan

Tuning of head loss coefficients 3 years 11 months ago #37339

  • c.coulet
  • OFFLINE
  • Moderator
  • Posts: 3722
  • Thank you received: 1031
Hi

I think your modification is not parallel compatible. If the entry or the exit of the culvert is not in a subdomain, the value should be 0...
I'm quite surprised there was no crash before the end of the computation, as you probably generate a segmentation fault when you try to assign a value to prive(0)...

Look into buse.f to see the same kind of test used for parallel runs...
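The idea behind this guard can be sketched outside TELEMAC. The short Python simulation below is purely illustrative (none of these names are TELEMAC API): after domain partitioning, each MPI rank owns only part of the mesh, so a rank whose subdomain does not contain the culvert node must contribute 0 before the global sum reduction, otherwise the merged value is wrong or the code crashes.

```python
# Illustrative sketch (not TELEMAC code): why a value written per-rank
# must be guarded in parallel. A rank that does not own the culvert
# node contributes 0; the global sum then recovers the discharge once.

def local_culvert_discharge(rank_nodes, culvert_node, discharge):
    """This rank's contribution: the discharge if the culvert node
    lies in its subdomain, otherwise 0 (the missing guard)."""
    return discharge if culvert_node in rank_nodes else 0.0

def global_discharge(subdomains, culvert_node, discharge):
    """Simulate the global sum reduction over all ranks."""
    return sum(local_culvert_discharge(nodes, culvert_node, discharge)
               for nodes in subdomains)

# Two subdomains after partitioning; the culvert sits at global
# node 7, which only the first subdomain owns.
subdomains = [{1, 2, 3, 7}, {4, 5, 6}]
print(global_discharge(subdomains, 7, 2.5))  # prints 2.5
```

In the actual telemac2d.F modification, the equivalent test is on whether the culvert's entry/exit points belong to the local subdomain, as TELEMAC's own culvert routine does before its parallel reduction.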

Regards
Christophe