Hi all,
I decided to try using C-VSM instead of HR-VSM for my model (bedload only, 10 classes of sediment, 5 layers below the active layer), but I must admit I don't really know what I'm doing. So far, the only thing I've done is activate C-VSM using the VERTICAL GRAIN SORTING MODEL = 1 keyword, without setting anything else in the cas file.
The problem I have appears to be twofold for now. Firstly, a "Depth Synchro Error" appears for some points (but not all), and as I write this, I think I'm figuring out what's going on. Essentially, some areas of my model use USER_BED_INIT to set the active layer and/or sub-layer thicknesses to 0.0 m in order to make them non-erodible. It would therefore seem that C-VSM expects the bottom of the model at these points to be 0.15 m to 0.18 m lower (in my case) than it actually is, based on data read before USER_BED_INIT has run. As a result, these lines appear (printed by lines 139-142 of CVSP_INIT_GAIA):
Depth Synchro Error for Point J: 545
219.096367961743 218.946367961743 0.120000000000000
3.000000000000114E-002
Depth Synchro Error for Point J: 546
219.098026275772 218.948026275772 0.120000000000000
3.000000000000114E-002
For each point, there is a consistent difference of 0.15 m (= 0.12 + 0.03) between the two elevations. Note that this is only a small sample of what is printed out. The other possibility is that the model is going through the points from 1 to however many there are (546 in this case) and finding a Δz of 0.15 m (which corresponds to my 5 layers × 0.03 m per layer below the active layer) between the bottom of the active layer and the bottom of the model.
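To make the arithmetic concrete, here is a small standalone sketch (plain Python, not GAIA code; the variable names and the check itself are only my guess at what CVSP_INIT_GAIA is comparing) that reproduces the 0.15 m mismatch for point 545:

```python
# Guessed reconstruction of the depth bookkeeping behind the error:
# C-VSM seems to expect  model_bottom = active_layer_bottom - n_sub * dz_sub,
# but USER_BED_INIT has since zeroed the layer thicknesses, so the real
# bottom sits n_sub * dz_sub higher than C-VSM's stored value.

n_sub = 5        # sub-layers below the active layer (my setup)
dz_sub = 0.03    # thickness per sub-layer in metres (my setup)

z_first = 219.096367961743   # first elevation printed for point 545
z_second = 218.946367961743  # second elevation printed for point 545

dz = z_first - z_second
assert abs(dz - n_sub * dz_sub) < 1e-9    # mismatch = 5 layers x 0.03 m
assert abs(dz - (0.12 + 0.03)) < 1e-9    # matches the 0.12 and 0.03 in the log
print(dz)
```

This is only illustrative arithmetic; which internal arrays GAIA actually compares is exactly what I am unsure about.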
Secondly, once these points have been printed, the code from CVSP_CHECK_ANYTHING_GAIA runs and this is printed:
----------------------------------------------
CVSP Checking Anything.... EXPENSIVE DEBUGGING
---------------------------------------------
READ_MESH_INFO: TITLE= Ph0_063mm
NUMBER OF ELEMENTS: 1000
NUMBER OF POINTS: 546
TYPE OF ELEMENT: TRIANGLE
TYPE OF BND ELEMENT: POINT
DOUBLE PRECISION FORMAT (R8)
MXPTEL (BIEF) : MAXIMUM NUMBER OF ELEMENTS AROUND A POINT: 8
MAXIMUM NUMBER OF POINTS AROUND A POINT: 9
(GLOBAL MESH)
SEGBOR (BIEF) : NUMBER OF BOUNDARY SEGMENTS = 90
INCLUDING THOSE DUE TO DOMAIN DECOMPOSITION
slurmstepd: error: Detected 3 oom-kill event(s) in StepId=16758029.0. Some of your processes may have been killed by the cgroup out-of-memory handler.
While "expensive debugging" made me laugh, this made my model run out of memory on the cluster I'm using, so that made me not laugh… I'm not sure what's going on here.
Essentially, I believe my problem boils down to USER_BED_INIT changing the layer thicknesses after C-VSM first reads them in, but before C-VSM actually runs, hence its confusion. However, I really don't know, and it could well be that 1) I'm missing other C-VSM parameters or 2) C-VSM is not suited to what I am doing. I don't need to see the stratification; I just want the computed values to be a little more accurate if possible.
Thanks for any feedback you might have,
André Renault