
TOPIC: partel_para.exe configuration not working

partel_para.exe configuration not working 1 year 8 months ago #42140

  • N_Strahl
  • Fresh Boarder
  • Posts: 12
I wish to use partel_para.exe for my telemac2d simulation runs. I have successfully compiled all the necessary modules. However, when I run a case I get errors. Below is my config file.

[Configurations]
configs:	wing64mpi
[general] 
version:	v8p4r0
language:	2
modules:	bief gretel hermes parallel partel special stbtel telemac2d
sfx_zip:	.zip
sfx_lib:	.lib
sfx_obj:	.o
sfx_mod:	.mod
sfx_exe:	.exe
val_root:	<root>examples
val_rank:	all
cmd_obj_c:	C:\msys64\mingw64\bin\gcc.exe -c <srcName> -o <objName>
[wing64mpi]
par_cmdexec: mpiexec.exe <config>\partel_para.exe < PARTEL.PAR >> partel_T2DGEO.log
mpi_cmdexec: mpiexec.exe -wdir <wdir> -n <ncsize> <exename>
mpi_hosts:
cmd_obj:    C:\msys64\mingw64\bin\gfortran.exe -c -cpp -O3 -DHAVE_MPI -DHAVE_PARMETIS -fconvert=big-endian -frecord-marker=4 -fallow-invalid-boz <mods> <incs> <f95name>
cmd_lib:	C:\msys64\mingw64\bin\ar.exe cru <libname> <objs>
cmd_exe:    C:\msys64\mingw64\bin\gfortran.exe -fconvert=big-endian -frecord-marker=4 -fallow-invalid-boz -v -lm -o <exename> <objs> <libs>
mods_all:	-I <config>
incs_all:	-I C:\msys64\mingw64\include
libs_all:	C:\msys64\mingw64\lib\libparmetis.a C:\msys64\mingw64\lib\libmetis.a C:\msys64\mingw64\lib\libmsmpi.dll.a

I am getting the error: FILE DOES NOT EXIST: PARTEL.PAR
I noticed there is a partel_T2DGEO.par file in my working directory, so I changed the file name in partel_para.f to partel_T2DGEO.par, but the execution failed again: the MPI workers panic and shut down.

par_cmdexec should ideally be mpiexec.exe -wdir <wdir> -n <ncsize> <config>\partel_para.exe < partel_T2DGEO.par >> partel_T2DGEO.log

but <wdir> is not substituted with its corresponding value, and neither is <ncsize>. Is there a better approach to get partel_para to work?
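To illustrate what I mean by "substituted", here is a small Python sketch of what I expected the launcher to do with these tags (illustration only, not the actual TELEMAC launcher code; the paths are made up):

# Illustration only: generic <tag> substitution as I understand it;
# not the real TELEMAC code, and the tag values below are made up.
def fill_tags(template, tags):
    for key, val in tags.items():
        template = template.replace("<" + key + ">", val)
    return template

print(fill_tags(
    r"mpiexec.exe -wdir <wdir> -n <ncsize> <config>\partel_para.exe",
    {"wdir": r"C:\work\t2d_pluie", "ncsize": "6",
     "config": r"C:\telemac\builds\wing64mpi"},
))
# This prints the command with every tag filled in; my problem is that
# in par_cmdexec the <wdir> and <ncsize> tags stay empty.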

I am using PARALLEL_PROCESSORS=6 and PARTITIONING TOOL=PARMETIS in the case file. I am running the t2d_pluie example.

partel_para.exe configuration not working 1 year 7 months ago #42361

  • borisb
  • Admin
  • Posts: 128
  • Thank you received: 64
Hello,

You should not use partel_para: it has not been tested for a while, and I will remove it from the repository because it should not be there anymore. We all use the sequential version of partel, which works very well, by the way.
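If you switch, point par_cmdexec at the sequential partel instead and drop the mpiexec prefix; adapting the line from your configuration above, something like this (untested sketch, check the systel configuration files shipped with your version for the exact input and log file names):

par_cmdexec: <config>\partel.exe < PARTEL.PAR >> partel_T2DGEO.log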

partel_para.exe configuration not working 4 months 1 day ago #45221

  • tomsail
  • Junior Boarder
  • Posts: 43
  • Thank you received: 17
Hi Boris,

I'd be keen to know if there is any plan to support PARMETIS.

Our team has expressed great interest in a parallel partitioner, because partitioning has become by far the bottleneck for our runs.

specs:
3.5M nodes on 128 cores
A one-week forecast, or a 1-3 month seasonal forecast

More info on the specs and the bottleneck here:
gist.github.com/tomsail/e524ec263dbae4bdf7be92b5160a56df

Have a look at the two options I outlined there to fix this issue, and please respond if you have anything relevant to add.

partel_para.exe configuration not working 4 months 19 hours ago #45223

  • greeve
  • Junior Boarder
  • Posts: 35
  • Thank you received: 4
Hi all,

Some time back we looked into optimising partel. What we found was that the “partel” partitioning tool processes each partition sequentially and re-reads the entire input dataset from disk every time. This leads to very long runtimes when the overall number of partitions is large (hundreds). A straightforward optimisation in “parres.f” allocates a very large array that holds the entire input data file in RAM; it is populated while the first partition is processed. This approach keeps the code changes minimal (see the commented additions below):
[…]
      DOUBLE PRECISION,ALLOCATABLE :: VAL(:),VAL_INP(:)
!     VERY LARGE MEMORY BUFFER TO AVOID RE-READING INPUT DATA
      DOUBLE PRECISION,ALLOCATABLE :: INPUTBUF(:,:,:)

!     GEOMETRY INFORMATION
      INTEGER NPOIN_GEO,TYP_ELM_GEO,NELEM_GEO,NPTFR_GEO,NPTIR_GEO,
     &        NDP_GEO,NPLAN_GEO
[…]
      ALLOCATE(VAL(NPOIN_P),STAT=IERR)
      CALL CHECK_ALLOCATE(IERR,'PARRES:VAL')

!     FIRST PARTITION: ALLOCATE BUFFER FOR INPUT DATA
      IF(IPART.EQ.1) THEN
        WRITE(LU,*) 'ALLOCATING LARGE MEMORY BUFFER WITH',
     &              NPOIN_INP*NVAR_INP*NTIMESTEP,'ELEMENTS'
        ALLOCATE(INPUTBUF(NPOIN_INP,NVAR_INP,NTIMESTEP),STAT=IERR)
        CALL CHECK_ALLOCATE(IERR,'PARRES:INPUTBUF')
      END IF

!     LOOPING ON THE TIMESTEPS AND VARIABLES OF THE INP FILE
      DO ITIME=1,NTIMESTEP
        CALL GET_DATA_TIME(INPFORMAT,NINP,ITIME-1,TIMES,IERR)
        CALL CHECK_CALL(IERR,'PARTEL:GET_DATA_TIME:NINP')
        WRITE(LU,*) ' -- WRITING TIMESTEP',ITIME-1,' AT',REAL(TIMES)
!       LOOP ON ALL THE VARIABLES
        DO IVAR=1,NVAR_INP

!         POPULATE BUFFER (FIRST PARTITION ONLY: READ FROM DISK)
          IF(IPART.EQ.1) THEN
            CALL GET_DATA_VALUE(INPFORMAT,NINP,ITIME-1,
     &                          VARLIST(IVAR)(1:16),VAL_INP,
     &                          NPOIN_INP,IERR)
            INPUTBUF(:,IVAR,ITIME) = VAL_INP(:)
          ENDIF

!         GETTING THE VALUES NEEDED FOR THAT PARTITION FROM THE BUFFER
          IF(NPLAN_INP.GT.1) THEN
            DO I=1,NPOIN_P
              VAL(I) = INPUTBUF(KNOLG3D(I),IVAR,ITIME)
            ENDDO
          ELSE
            DO I=1,NPOIN_P
              VAL(I) = INPUTBUF(KNOLG(I),IVAR,ITIME)
            ENDDO
          ENDIF

          CALL ADD_DATA(INPFORMAT,NINP_PAR,VARLIST(IVAR),TIMES,
     &                  ITIME-1,IVAR.EQ.1,VAL,NPOIN_P,IERR)
          CALL CHECK_CALL(IERR,'PARRES:ADD_DATA:NINP_PAR')
        ENDDO
      ENDDO
The runtime machine will evidently need sufficiently large memory in the case of high-resolution meteorology files with long time series, but the speedup is significant.
For example, running partel in its original and optimised versions to split a 1.4 GiB atmospheric data file holding 48 timesteps of temperature, pressure, and two wind-component fields on 1,842,047 mesh nodes and 3,614,079 cells (triangles) into 200 partitions resulted in the following runtimes:
• Original version: 1259 s
• Optimised version: 20 s (63x speed-up)
Both versions produced bit-identical output files. The optimised version required around 8 bytes x 1,842,047 mesh nodes x 4 fields x 48 timesteps = 2.6 GiB of memory, around twice the size of the atmospheric data file, since the latter stores data in single precision while partel operates in double precision internally. A more elaborate data-caching implementation would avoid this doubling of the RAM requirement, at the expense of more significant code changes.
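For planning runs like this, the buffer size can be estimated beforehand. Here is a small Python helper that simply reproduces the arithmetic above (my own sketch, not part of partel):

# Estimate the size of partel's in-RAM input buffer. partel stores the
# data as 8-byte doubles internally, regardless of the file's precision.
def partel_buffer_gib(npoin, nvar, ntimestep, bytes_per_value=8):
    return npoin * nvar * ntimestep * bytes_per_value / 2.0**30

# The case above: 1,842,047 nodes x 4 fields x 48 timesteps -> ~2.6 GiB
print(partel_buffer_gib(1_842_047, 4, 48))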

partel_para.exe configuration not working 4 months 5 minutes ago #45233

  • tomsail
  • Junior Boarder
  • Posts: 43
  • Thank you received: 17
Thanks greeve for this feedback.

The optimisation time is impressive, and it is a good workaround up to a certain limit:

1 - parres.f cannot be added to the user_fortran folder and recompiled "on the fly" like the other PARTEL file (correct me if I'm wrong), so TELEMAC has to be recompiled. That might be an issue for people who don't have access to the source code (on corporate HPCs, for example). We use the opentelemac conda environment, so we actually avoid that problem.

2 - I calculated that for our 3 km global mesh, the load in RAM would be:

8 bytes x 3.5M mesh nodes x 3 fields x 8766 timesteps = ~740 GB of memory (also around twice the 270 GB input file size).

We can load that on our HPC by spinning up 2 nodes, but one needs to do the calculation beforehand.

partel_para.exe configuration not working 3 months 4 weeks ago #45234

  • tomsail
  • tomsail's Avatar
  • OFFLINE
  • Junior Boarder
  • Posts: 43
  • Thank you received: 17
Ideally, I think the best way to implement this in parallel would be at the Python level.

We have built an xarray-selafin backend reader that can load the data into RAM lazily (i.e. only the amount needed).

As Sebastien commented on the benchmark, gist.github.com/tomsail/e524ec263dbae4bdf7be92b5160a56df ,
the idea would be to use the info from the split files generated by:
telemac2d.py telemac2d.cas --split
and interpolate the atm. forcing onto every subdomain simultaneously using Python multiprocessing routines.
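A rough sketch of what I have in mind (assuming the xarray-selafin backend registers under engine="selafin"; the file name and worker count are made up, and the actual interpolation step is left out):

# Sketch only: lazily open the global forcing with xarray-selafin and
# process the subdomains in parallel; the names below are placeholders.
from multiprocessing import Pool
import xarray as xr

ATMO_FILE = "atmo_global.slf"   # hypothetical global forcing file
N_PARTS = 128                   # number of subdomains from --split

def interp_one(ipart):
    # Lazy open: each worker only reads the slices it actually uses.
    ds = xr.open_dataset(ATMO_FILE, engine="selafin")
    # ... interpolate ds onto the nodes of subdomain `ipart` and write
    # the partitioned forcing file (omitted here; this is the part that
    # would reuse the node lists from the --split step).
    ds.close()

if __name__ == "__main__":
    with Pool(processes=16) as pool:
        pool.map(interp_one, range(N_PARTS))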

I'll get to that at some point this year.

partel_para.exe configuration not working 3 months 4 weeks ago #45235

  • greeve
  • Junior Boarder
  • Posts: 35
  • Thank you received: 4
Yes, you are correct. The code will need to be re-compiled from scratch once you replace parres.f with the updated text.

A parallel Python option would be a great upgrade in the future. At present we run our model using a Cylc 8 workflow, which allows us to distribute model tasks across different HPC nodes, leveraging high-memory nodes as needed. This workflow speeds up the process significantly, provided there are no long queuing times. It essentially splits the run into tasks, enabling us to download (1), interpolate (2), and partition (3) the next cycle point while the first cycle is running. Breaking the run into individual tasks also allows using the high-memory merge option --gretel-method=2 (a small driver sketch follows the example steps below).

For example:
1. Download meteo:
2. Interpolate binary meteo data:
3. Partition:
runcode.py 'telemac2d' --ncsize=## --workdirectory=### --use-link --split ###.cas
4. Run:
runcode.py 'telemac2d' --ncsize=## --workdirectory=### --run ###.cas
5. Merge:
runcode.py 'telemac2d' --ncsize=## --workdirectory=### --merge --gretel-method=2 --tmpdirectory ###.cas
6. Post-processing:
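Outside the workflow engine, the same chain can be scripted directly. A rough Python driver around the commands above; the ## / ### values and the steering file name are placeholders, not our actual settings:

# Driver sketch chaining the runcode.py steps listed above; the core
# count, work directory, and steering file are placeholder values.
import subprocess

NCSIZE = "128"          # placeholder for --ncsize=##
WDIR = "tmp_run"        # placeholder for --workdirectory=###
CAS = "model.cas"       # placeholder steering file

def step(*extra):
    subprocess.run(["runcode.py", "telemac2d", "--ncsize=" + NCSIZE,
                    "--workdirectory=" + WDIR, *extra, CAS], check=True)

step("--use-link", "--split")          # 3. partition
step("--run")                          # 4. run
step("--merge", "--gretel-method=2")   # 5. merge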
The following user(s) said Thank You: tomsail

partel_para.exe configuration not working 2 months 3 weeks ago #45460

  • tomsail
  • Junior Boarder
  • Posts: 43
  • Thank you received: 17
Hi greeve,

I'm testing your parres.f implementation to benchmark computing times on a one-year global surge hindcast with a 3.5M-node mesh.

There might be something I'm missing (maybe in the configuration file?) in order to launch PARTEL in parallel,

because even with your implementation, PARTEL still gets executed sequentially.

partel_para.exe configuration not working 2 months 2 weeks ago #45509

  • greeve
  • Junior Boarder
  • Posts: 35
  • Thank you received: 4
Hi Tomsail,

Apologies if my previous message was unclear. The code is not parallel; rather, parres.f has been optimised to speed up the standard writing of the partitioned binary atmospheric files. As mentioned earlier, we found that the default “partel” partitioning process using parres.f was re-reading the entire input dataset from disk for each partition, which significantly slowed down the partitioning.
