When using the new Compression Only Support, I think I understand the basic workings of the code. However, I'm having a hard time rationalizing how to specify the Convergence Tolerance. Is this purely a numerical-analysis artifact, or is there some real-world interpretation? Purely guessing on my part, is it...
* the percentage of nodes in tension that shouldn't be.
* the percentage of nodes that have passed through to the inside of the hex element that shouldn't have,
or what?
Watching my analysis run, I see that the calculated factor "generally" gets smaller, but sometimes I see it jump back up... sometimes by an order of magnitude.
Searches on the Internet (so far) haven't yielded any better understanding.
Comments
The tolerance value is roughly the relative error in the displacements of all nodes. The convergence test with compression-only support uses a displacement convergence criterion: the solution is considered converged when

||delta displacement|| / ||total displacement|| <= tolerance

where ||..|| is the Euclidean norm,
delta displacement is the vector of the change in displacement calculated at the last iteration for all DOFs in the model, and
total displacement is the sum of all previous delta displacements, which is an approximation to the actual displacement.
So when the displacements no longer change much, it stops iterating. The smaller the tolerance, the more accurate the solution but also the longer the solving time. The calculated error value can jump around as contacts open and close, sometimes causing large changes in displacement but it should eventually tend toward zero for a correctly converging solution.
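To make that concrete, here's a minimal Python sketch of this kind of displacement convergence loop. The solve_increment() callback and the loop structure are illustrative stand-ins, not the actual solver interface:

```python
import numpy as np

def relative_error(delta_u, u_total):
    """Relative change in the displacement vector (Euclidean norm)."""
    return np.linalg.norm(delta_u) / np.linalg.norm(u_total)

def iterate(solve_increment, n_dofs, tolerance=1e-6, max_iterations=200):
    """Iterate until the displacements stop changing much."""
    u_total = np.zeros(n_dofs)
    for i in range(max_iterations):
        delta_u = solve_increment(u_total)  # re-solve with updated contact states
        u_total += delta_u
        err = relative_error(delta_u, u_total)
        print(f"iteration {i + 1}: relative error = {err:.3e}")
        if err <= tolerance:
            break  # converged: displacements no longer change much
    return u_total

# Contrived increment that decays geometrically toward a known answer:
exact = np.array([1.0, 2.0, 3.0])
iterate(lambda u: 0.5 * (exact - u), n_dofs=3)
```

The jumps you see in the error correspond to iterations where the increment is suddenly large because a contact changed state.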
Thank you for your quick reply. In the old days, we never went non-linear, so your explanation is definitely helping me get there. Am I correct in assuming that this summation over “all” is over all the nodes that make up the contact constraint, or is it over all the nodes in the entire model? The reason I ask… looking at the model at…
http://mecway.com/forum/discussion/comment/1#Comment_1
…”all” the displacements of the relatively few nodes that are actually “freed” because they were in tension are quite a few orders of magnitude smaller than “all” the displacements of the entire model. I would feel quite comfortable with my selection of 0.0001 if the summation is over just the contact nodes, but would feel compelled to make it smaller if it is relative to the entire model.
Am I thinking about this right?
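To illustrate why I'm asking, here's a toy comparison in Python (made-up numbers, nothing from my actual model) of the relative error measured over just a few contact DOFs versus over the whole displacement vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up example: 1,000 DOFs total, only 10 of them on the support.
u_total = rng.normal(size=1000)        # whole-model displacements
delta_u = np.zeros(1000)
contact = np.arange(10)                # indices of the contact DOFs
delta_u[contact] = 1e-4                # only the freed nodes changed

err_full = np.linalg.norm(delta_u) / np.linalg.norm(u_total)
err_contact = np.linalg.norm(delta_u[contact]) / np.linalg.norm(u_total[contact])

print(f"relative error over all DOFs:     {err_full:.2e}")
print(f"relative error over contact DOFs: {err_contact:.2e}")
```

The same change at the freed nodes reads as a much smaller error when diluted across the full-model norm, which is why the choice of summation set changes what tolerance feels "safe".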
Typically the error decreases gradually for several iterations then rapidly drops to effectively zero (10^-12 or so) as all contacts end up in their final correct state. If it's slowly making its way to 0.00001 without any sudden drop, that suggests it might not be finished yet. But if it's stuck there, then that might be the best it can do. I suspect this could be what's happening if a small change at the hole causes a large displacement at the tip.
Would you mind emailing me the model? I think it won't upload to the forum easily. Clear the solution and zip it to reduce file size.
No, there's currently no way to pause the solve, sorry.
Just for grins, and to see if it would start converging, I ran a series of tests just changing the Convergence Tolerance. As you mentioned, it did finally and suddenly drop. Here are some stats...
| Tolerance | Iterations | Time (h:mm:ss) | Final error |
| --- | --- | --- | --- |
| 0.1 | 5 | 0:10:50 | 0.08221528 |
| 0.01 | 26 | 0:43:13 | 0.00869024 |
| 0.001 | 65 | 1:26:44 | 0.00086361 |
| 0.0001 | 116 | 2:13:06 | 8.78E-05 |
| 0.00001 | n/a | | |
| 0.000001 | 141 | 2:20:19 | 9.32E-09 |
... and some pictures. In this case, it looks (qualitatively) like I need to go to the full tolerance.
Model is on the way.
You can also turn on the constraint symbols with the glasses button in the toolbar to see which nodes ended up in compression (red) and which were released (grey). That only works while the file is still open after solving, so it might be too late for these.
From the results above, I have made some changes to the structure and added more refinement to the model. It has…
• 231,600 Nodes
• 1,200,000 DOF
• 90,000 Quad8, Tri6, Hex20 and Wedge15 elements
• 1E-6 Convergence Tolerance
It is on iteration 125 and has been running for the last 32 hours! For the last 10 of those hours, it has been hovering just under 2E-4. If I could look at the intermediate results and see the bearing surface as in the pictures above, I could make a qualitative judgment and stop or continue the analysis. It is a rather depressing thought to stop the current analysis, throw away all the work up to now, reset the convergence tolerance to 2E-4, restart the analysis, and wait another 24 hours for the same results it has right now.
I'm still not happy with the performance of the iterative solution. Somehow, I think if you could just look at the intermediate results and choose which constraints to release, you'd need far fewer iterations. The solver might be "unzipping" them almost one by one in this case.
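To sketch what I mean (in Python, with hypothetical solve_with() and support_reactions() stand-ins; I have no idea how the actual solver is structured), releasing every violating support per iteration would look something like this:

```python
import numpy as np

def solve_compression_only(solve_with, support_reactions, n_supports):
    """Release every support found in tension, a whole batch per iteration,
    until the remaining active supports are all in compression."""
    active = np.ones(n_supports, dtype=bool)    # start with every support active
    while True:
        u = solve_with(active)                  # linear solve, current support set
        tension = support_reactions(u) > 0.0    # supports pulling instead of pushing
        violating = active & tension
        if not violating.any():
            return u, active                    # consistent state reached
        active &= ~violating                    # release the whole batch at once
```

A fuller version would also re-activate supports that come back into compression, since contacts can open and close. But if roughly one constraint gets released per iteration instead, the iteration count grows with the number of released nodes, which would match the "unzipping" I seem to be seeing.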