I'd like to share some settings I've found to control the timestep size in order to limit runtimes. For context, I'm primarily using shell elements at the moment.

The first step is to mesh and run the simulation, and as soon as it has written one "animation file" after initializing, click "load partial solution" and cancel the solver.

Then look at the "element time step" result and use the "show min/max" tool in Mecway to find the element with the smallest timestep. Delete this element and close the hole by dragging the remaining nodes on top of each other and accepting the "merge nodes" pop-up window. Sometimes you may also want to use "change element shape" to convert a quad element into two tri elements and delete one of them, so you can move a node of the remaining element to close the hole left by the element with the small timestep.

Then run again to check the next smallest-timestep element. In my experience, it is often one or two elements out of 150,000 that reduce the timestep by a factor of 5-10. Removing these elements and closing the holes will not introduce any significant error into your simulation.
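A quick estimate shows why one or two tiny elements can dominate the runtime: the stable timestep of an explicit element scales roughly with its characteristic length divided by the material sound speed. A minimal sketch, assuming illustrative steel-like values:

```python
import math

# Explicit stable timestep estimate: dt ~ L / c, where c = sqrt(E/rho)
# is the material sound speed. Steel-like values, purely illustrative.
E = 210e9      # Young's modulus [Pa]
rho = 7850.0   # density [kg/m^3]
c = math.sqrt(E / rho)  # roughly 5170 m/s for steel

for L in (1e-3, 0.2e-3):  # characteristic element lengths [m]
    print(f"L = {L * 1e3:.1f} mm -> dt ~ {L / c:.2e} s")
# The relation is linear in L, so a 0.2 mm element forces a timestep
# 5x smaller than a 1 mm one, which is why deleting it speeds up the run.
```

This is only the classical estimate; the solver's own element timestep result is what to trust for a specific mesh.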
The next thing you can do is to add the following to the custom engine input field:
/DT/shell/CST/
0.9 3.2e-8
This checks whether any element has an elemental timestep of less than 3.2e-8 and, if so, changes that element's formulation from large-strain to small-strain. I use this to make sure that a single deformed element doesn't suddenly cause a massive drop in timestep size partway through a simulation. It does introduce some error into the calculation, as it affects the stiffness of the element. It is also possible to delete the element instead by changing CST to DEL, but I have had some bad experience with simulations diverging after elements were deleted due to the material model, so I haven't tried that setting. If the troublesome element is identified and sits on a part that isn't being iterated on, the problem can also be solved by manually editing the mesh: delete the element and close the hole by enlarging the surrounding elements as described above.
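For reference, the deletion variant mentioned above (which, as noted, I haven't tried myself) would just swap the keyword in the same card:

# Delete, rather than switch to small-strain, any shell whose
# elemental timestep drops below 3.2e-8 (untried variant)
/DT/shell/DEL/
0.9 3.2e-8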
Another way of increasing the timestep size is mass scaling, and there are several variants. Classic mass scaling simply adds mass to the model, which increases the kinetic energy, so it will interfere with any dynamic simulation like the one I'm running here. Then there is Advanced Mass Scaling (AMS), which you can read about in the two links at the bottom of this post. It augments the mass matrix so that the total mass stays constant, the translational kinetic energy is unaffected, and only the high-frequency vibrations are altered. It is therefore well suited for quasi-static, low- and medium-speed events, and less suited for high-speed events. To enable AMS for all elements in the model, add the following to the custom engine input:
# Timestep control by advanced mass scaling
/DT/AMS/
0.67 1e-6
# end of timestep control
and we also add the following to the model definition:
#
/AMS/
#
This adds advanced mass scaling to hold the timestep at 1e-6 s in this case. AMS should be used with caution and only to raise the timestep by a factor of about 2-5 over the elemental timestep, so the methods described above should still be used to keep the elemental timesteps as high as possible; AMS can then raise the timestep further from there. AMS can be combined with /DT/shell/CST/. Read up on these tools before using them, as misusing mass scaling can cause instability or wrong results.
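For completeness, combining the two engine-file controls from this post would look like the block below (same values as above; tune them to your own model):

# Advanced mass scaling targeting a 1e-6 s timestep
/DT/AMS/
0.67 1e-6
# Switch shells to small-strain below an elemental timestep of 3.2e-8 s
/DT/shell/CST/
0.9 3.2e-8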
@Victor, is it possible to change how Mecway runs the OpenRadioss executables? I want to test running with single precision to reduce the runtime during the first part of the project, when I'm iterating quickly through design modifications. The page linked below states that -sp or _sp needs to be added to the command line or to the name of the executable to invoke single precision, but since I can't figure out where to add that, I suspect it has to happen somewhere inside the Mecway code.
The single precision starter and engine are separate executables. As you suspect, Mecway is hardcoded to run the regular double precision ones but you can rename them to trick it:
double precision: starter_win64.exe and engine_win64.exe
single precision: starter_win64_sp.exe and engine_win64_sp.exe
Wow, that was really easy! The single precision executables are already in the downloaded OpenRadioss folder under those names, so no renaming was needed; I just changed which executables Mecway points to in the options tab.
The results using extended single precision are almost identical to the results running double precision, but the speed increase was not impressive: only a 3-6% reduction in computation time.

This might be because the majority of the computing time is spent on contacts in this problem, and those may already use single precision by default. The Altair Radioss help page I linked above states that most computations are already single precision and that double precision is used by default only for some calculations, so it seems the calculations that use DP by default are not the ones taking most of the time in my case. That is also why Radioss refers to the _sp executables as "extended single precision": they simply make all of the calculations SP instead of leaving some of them DP.

If I run the simulation without any contacts, loads, or boundary conditions, the speed increase from single precision is 30%, which supports the assumption that a simulation with many contacts benefits less from SP than one with fewer.
The problem here was 26 meshed CAD parts, 159,935 elements, 156,871 nodes, and a timestep of 1.5e-7 s (raised from 5e-8 s using advanced mass scaling); all surfaces that are not secondary/slave in a tied contact are part of a self-contact. Shown below is 0.0103 s of simulated time, i.e. 68,667 computed timesteps. The results clearly start out the same and then diverge from each other, but the difference does not keep growing within this amount of simulated time. On the graph of energy absorption, which is the running sum of force times distance, they are indistinguishable.
The blue line is the force on the impactor, the green is that force passed through a low-pass filter and the magenta line is a rolling average.
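For anyone wanting to reproduce this post-processing on an exported force history, here is a minimal sketch of the three operations: a first-order low-pass filter, a rolling average, and the energy absorption as a running sum of force times incremental distance. The signal, cutoff frequency, window length, and impactor speed below are stand-ins, not the actual model data.

```python
import numpy as np

dt = 1.5e-7                                    # solver timestep [s]
t = np.arange(0.0, 0.0103, dt)                 # 0.0103 s of simulated time
rng = np.random.default_rng(0)
# Stand-in for the exported impactor force: a smooth pulse plus noise
force = 1e4 * np.sin(2 * np.pi * 200 * t) ** 2 + 500 * rng.standard_normal(t.size)

# Low-pass filter (green-curve style smoothing): simple first-order RC filter
fc = 1000.0                                    # cutoff frequency [Hz], illustrative
alpha = dt / (dt + 1.0 / (2 * np.pi * fc))
lp = np.empty_like(force)
lp[0] = force[0]
for i in range(1, force.size):
    lp[i] = lp[i - 1] + alpha * (force[i] - lp[i - 1])

# Rolling average (magenta-curve style smoothing) over a fixed window
win = 2001                                     # window length in samples, illustrative
ra = np.convolve(force, np.ones(win) / win, mode="same")

# Energy absorption: running sum of force times incremental distance,
# assuming a constant impactor speed purely for this illustration
v = 5.0                                        # impactor speed [m/s], assumed
x = v * t                                      # displacement [m]
e_abs = np.concatenate(([0.0], np.cumsum(0.5 * (force[1:] + force[:-1]) * np.diff(x))))
```

In practice you would replace the synthetic `force` array with the force column exported from the solver output, and use the actual displacement history instead of the constant-speed assumption.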
Despite the disappointing speed increase, the conclusion of this test is that single precision can be used (for me, in this particular model) to reduce computation time without a significant loss of accuracy. I can therefore use it to iterate faster and then run the final simulation in double precision.
Seeing all the great OpenRadioss finds, I thought it would be a good time to share a nice trick. If you keep a folder structure of useful things - materials, pre-made parts, and handy custom commands - these can be saved as "command only" liml files and kept in a library for easy access, like this:
@JohnM nice trick! Did you find a way to also save custom meshing parameters for Gmsh? There are two sets of parameters, one for meshing conformal parts and the other for meshing with quads, that would be great to save. Maybe they could at least be saved as a dummy custom CCX command deck and deleted before launching the solver, to use these parameters on your real STEP files.
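Until someone finds a built-in way, one workaround might be to keep the Gmsh options themselves in a small reusable .geo snippet. A sketch of what such a snippet could contain for quad meshing (the option names are from the Gmsh manual; the values are illustrative, and older Gmsh versions call the size option Mesh.CharacteristicLengthMax):

// Reusable Gmsh options for quad-dominant meshing (illustrative values)
Mesh.Algorithm = 8;       // Frontal-Delaunay for quads
Mesh.RecombineAll = 1;    // recombine triangles into quadrangles
Mesh.MeshSizeMax = 5;     // maximum element size (model units)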
I found out how to make the general contact type 24 detect edge contact by adding the line "0 1" (29 spaces between the numbers; I don't know why the forum truncates it to one space) on line 4 of the contact definition in the custom model definition. The sad part is that it is not very good. The model runs A LOT slower than if the edge contact is defined separately using a type 11. Also, the contact stiffness is linear, whereas type 11 is nonlinear, so it allows some penetration. The penetrations might be acceptable, but the increase in runtime is not. I investigated this because I wanted to save myself the work of defining the edge contact explicitly, but it seems that is still the best way to do it.
A curious difference between type 11 and type 24 is that with type 24, the penetration of the elements into one another seems to create a mechanical interlocking effect: the part that buckles forces the other part to deflect and buckle too. With type 11, the buckling part instead slides off the non-buckling part, which stays straight.
Here is the contact card that applies self-contact to the faces in the named selection no. 3 with edge contact:
#Contact interface as self-contact type 24
/INTER/TYPE24/1
1
3
0 1
# end of contact definition
https://insider.altairhyperworks.com/wp-content/uploads/2017/10/T-T-1252-Radioss-Advanced-Mass-Scaling.pdf
https://help.altair.com/hwsolvers/rad/topics/solvers/rad/auto_application_example_ams_r.htm
https://help.altair.com/hwsolvers/rad/topics/solvers/rad/rad_user_guide_intro_c.htm