How many nodes can you work with?

Hi,

I don't have a high-powered computer and tend to get bogged down around 80k-100k nodes. I'm curious what node counts other Mecway users are limited to. I'm using Windows 7 64-bit with 4GB of memory, but my processor is very slow. I plan on upgrading my computer in the future and am just curious as to what I can expect. Are models of 200k, 300k, or more nodes something others work with on a regular basis?

Comments

  • Hi prop_design,

    Using Netgen I generated a monster mesh of 2.3 million nodes on my first attempt from a 3D CAD import (it was left overnight on a 16GB Z420 workstation). I was able to load it into Mecway, view it, and rotate around. Unfortunately this model had too many errors in the elements to solve. I had to make a coarser mesh and do a lot of fine tuning to get something sensible to work with.
  • Thanks J_Marc, I sometimes run into the issue of malformed elements as well. It took me a fair amount of trial and error to figure out Netgen, and even then I don't truly understand the meaning of many of the settings. Some of the things I have found useful in Netgen v6 Beta are the following:

    - On the Meshing Options General tab, select very fine mesh granularity and check second order elements. For a surface model, also select Quad Dominated.
    - On the Meshing Options Mesh Size tab, change min mesh size to 1 if 0 fails, change elements per edge to 5, and decrease max mesh size to get even more elements.

    I am not really sure what min mesh size values of 0 and 1 mean, and I'm also not really sure what max mesh size means, but a value of 0 for min mesh size can cause the program to crash. Guessing your way to smaller max mesh size values will increase the mesh density.
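    As a rough illustration, here is how the same max mesh size knob appears in Netgen's Python interface (a minimal sketch, assuming a Netgen build with Python bindings; the box geometry and sizes are made up for illustration):

    ```python
    # Minimal Netgen scripting sketch: mesh a simple box with a size cap,
    # then upgrade to second order elements, as in the GUI settings above.
    from netgen.csg import CSGeometry, OrthoBrick, Pnt

    geo = CSGeometry()
    geo.Add(OrthoBrick(Pnt(0, 0, 0), Pnt(10, 10, 10)))  # 10x10x10 box

    mesh = geo.GenerateMesh(maxh=1.0)  # maxh caps the element size (max mesh size)
    mesh.SecondOrder()                 # same effect as "second order elements"
    mesh.Save("block.vol")             # Netgen's native .vol mesh format
    ```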

    That's great you can solve such a large model. Recently I have been having issues with even small surface models. So I may just have computer issues, not sure.
  • Hi prop_design,
    There is a fair amount of lag with 2.3 million nodes and it is not workable in my opinion. 500,000 nodes on the 16GB Z420 is okay if you don't mind running the solver overnight. In my case this was an accident because I didn't set the correct values for meshing. I am happy to work at about 200,000 nodes because it allows me to try different values on the problem.
  • Thanks for the info, that is helpful. I don't think we are too far off on node count then. No, I don't want it to run overnight, lol. Around 10 minutes is about all I can tolerate.
  • I have worked with about 90,000 nodes max in Mecway 1.1. My previous laptop (8GB memory, no GPU) struggled a bit on navigating the model, but was OK on solving in a few minutes. I now have a laptop with 16GB, a good processor, and a GPU. Navigating is better, but it takes a LONG time trying to change element shapes. I am hoping to make the model larger, as I need better refinement; I'm not sure if mesh refinement or more nodes per element is the way to go. I have not persuaded Mecway to make use of the GPU yet, and I am not sure if it will use it, or if it would help. I gather later versions of Mecway are better with graphics, so I might consider upgrading. I will post again when I try with more nodes. Any advice on using the GPU is welcome (GeForce GT 750M).
  • Thanks Dave, it seems like 100,000 nodes is the comfortable limit from what people are saying and my own experience. I don't think Mecway uses the GPU, and I wouldn't hold your breath on that being implemented. Most developers do not implement GPU support because it is not universal enough to warrant the time, effort, cost, etc. The latest version of Mecway has made some graphics speed improvements. Victor can probably send you a trial to see if it helps you any, before you purchase.

    P.S. I should clarify that I mean GPGPU support, where the GPU is used to perform the FEA computations as opposed to standard graphics operations. Given that no one can ever agree on a standard, because each vendor wants to be the only one to offer such a feature, and that everyone has different GPUs, implementing GPGPU support is not very worthwhile. Nvidia CUDA was the first and most prolific, but I myself do not have an Nvidia card and probably never will, so even if it were implemented it wouldn't help me. OpenCL is available, and Intel and AMD support it; I'm not sure if Nvidia does. But unless everyone has the software and hardware, it is just a waste of the developer's time. On the other hand, almost everyone has an x86-capable processor.
  • As an update, using Mecway 4 on a Dell XPS 8300 with an Intel Core i5 and 16GB, I've been comfortably working today with a model containing about 250k nodes and relatively high connectivity. Model rotation in the graphics window is nearly real-time; some operations take a noticeable few seconds but it's not a big issue. Mine is a nonlinear (large-displacement) elastic problem; 5 steps with iteration to equilibrium at each took only about 38 min.

    Interestingly, I tried running it in CCX and benchmarked the results (very comparable); execution times weren't remarkably different.
  • Thanks, that's great to know.
  • rwhirley, did you try using more than one thread in CCX? It made a difference in the runtime when I was testing a 325k-node elastic material model in a linear static analysis.
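    In case it helps anyone trying this: a minimal sketch of launching a multithreaded CCX run from a script (my own example; it assumes a CalculiX binary named ccx on the PATH and a hypothetical input deck model.inp). OMP_NUM_THREADS is the environment variable CalculiX reads for its thread count:

    ```python
    # Run CalculiX with 4 threads by setting OMP_NUM_THREADS before launch.
    import os
    import subprocess

    env = os.environ.copy()
    env["OMP_NUM_THREADS"] = "4"  # thread count for CCX's multithreaded stages

    # CCX takes the deck name without the .inp extension.
    subprocess.run(["ccx", "model"], env=env, check=True)
    ```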
  • I should have mentioned it in this thread before: for large meshes, version 4's graphics are much faster than previously, so the earlier comments about 100,000 nodes or so may not be applicable anymore. I just tried 1,000,000 nodes and it's usable but jerky, and some basic operations take a few seconds. More complex operations like changing element shape and solving should still be the same speed.

    Mecway's matrix solver is multithreaded and optimized for Intel CPUs, so that stage of the solver, which dominates the time for large models, should be as fast as or faster than other software. The matrix assembly stage is still a bit slow.
  • How much memory did you have for 1,000,000 nodes?
  • You're right Victor, the earlier comments were for v3. Perhaps you could give an idea of the node limits versus memory for v4. I would be curious to know how many nodes can be solved with 4GB, 6GB, 8GB, 12GB, and 16GB of memory; those are the typical values you see in a laptop.
  • Memory shouldn't be an issue for the graphics. I had 16GB though and that's also what I've solved 1,000,000 node models with.

    That's a good idea about a table, though it does depend on the connectivity of the mesh: a cube uses more memory than a slender rod. There is also automatic disk swapping, so in theory any amount of RAM should be OK, but it can become much slower if it has to swap to disk. Also, I've sometimes seen it fail when doing that.
  • Yeah, good points. I was more interested in the solver than the graphics, but v3 did have the graphics issue. I am used to being able to turn off what you are calling disk swapping in ANSYS; I don't know if that is something that can be turned off in Mecway or not. To make the table you would probably have to have a desktop computer with four 4GB memory modules and run it at 4, 8, 12, and 16GB, so that everything is measured on the same computer. A bit of a pain, but doable if you have the right kind of computer. If you had a 2GB module you could also run the 6GB point, but that's not a big deal; the other four points should be enough to curve fit it. It should probably be a linear curve, but I'm not really sure, I kind of forgot.
  • Just to clarify: if you can solve 1,000,000 nodes with 16GB of memory, then I would expect you could solve 250,000 nodes with 4GB; that would be a linear curve. But in a previous post a user said they solved 250,000 nodes with 16GB, thus the need for a more accurate curve. And you are right that you want to stay in what ANSYS calls in-core rather than out-of-core. It will solve out-of-core, but it will take much longer. An SSD will probably make it better than an HDD, but in-core should still be the fastest and is what I'm more interested in knowing. I am assuming the graphics can handle everything.
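    Spelling out that linear assumption (my arithmetic only, not measured data):

    ```python
    # If memory use scaled linearly with node count, 1,000,000 nodes at 16GB
    # would imply 62,500 nodes per GB, hence 250,000 nodes at 4GB.
    nodes_at_16gb = 1_000_000
    nodes_per_gb = nodes_at_16gb / 16
    print(int(nodes_per_gb * 4))  # -> 250000
    ```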
  • And a solid cube with one fixed face would suffice as a model. You could do a modal analysis on it, say 12 modes, using the Mecway solver of course. A pre-stressed modal analysis with spin softening would be even better in my case.
  • I see what you mean. I'll try to put together something. You're right that it's not linear, with larger models taking disproportionately more memory and time.
  • Hi Victor,

    Sorry for so many posts, but I had a thought. I believe in ANSYS you can tell the solver how much memory to use (say 4, 6, 8, 12, or 16GB) as well as turn the in-core or out-of-core memory mode on. So if you had that ability, you wouldn't have to actually remove memory modules from the computer (although that is easy on a desktop). Perhaps you could add options like that in the settings of Mecway: if you have in-core on and 4GB selected and your model becomes too big to solve, the program would stop and give you a message to reduce your model size.
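    For what it's worth, here is a rough sketch of what such a pre-solve check might look like (purely hypothetical; the bytes-per-node figure is a made-up placeholder, not a Mecway number):

    ```python
    # Hypothetical in-core pre-check: refuse to start the solve if the
    # estimated matrix storage won't fit in currently available RAM.
    import psutil  # third-party: pip install psutil

    BYTES_PER_NODE = 30_000  # assumed rough in-core cost per node (placeholder)
    node_count = 250_000

    needed = node_count * BYTES_PER_NODE
    available = psutil.virtual_memory().available
    if needed > available:
        raise MemoryError(
            f"Model needs ~{needed / 2**30:.1f} GiB in-core but only "
            f"{available / 2**30:.1f} GiB is free; reduce the model size."
        )
    ```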
  • Here's the result of some tests.

    - Computer: 16 GB RAM, USB 3 external hard disk drive, Intel Core i7 CPU
    - All models are static 3D with hex8 elements. Hex20 elements use more memory for the same number of nodes and similar mesh topology.
    - Larger models failed because there wasn't enough disk space for Windows' virtual memory paging file.
    - Disk use in the table is for the out-of-core files used by the matrix solver which were put on the external USB hard drive.
    - RAM in the table is the total memory used as shown by Task Manager. This is not very accurate and includes memory used by other applications.

    Nodes     Elements      Time      Disk use   RAM
    550,000   128x32x128    95 min    29 GB      7 GB
    280,000   102x25x102    18 min    11 GB      4 GB
    140,000   80x20x80      2.7 min   0          7 GB
    530,000   512x1x512     3.5 min   0          9 GB
    260,000   362x1x362     1.7 min   0          5 GB
    130,000   256x1x256     0.8 min   0          3 GB

    You can see that changing the mesh topology can alter the memory use and time by an order of magnitude even for the same number of nodes. A typical mesh would be somewhere between the cuboid and the plate used in these tests.
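    A back-of-envelope way to see why the thick block is so much more expensive than the plate (my own rough illustration, not Mecway's actual algorithm): for a structured grid numbered plane by plane, the matrix half-bandwidth scales with the node count of the smallest cross-section, and a naive banded direct solver stores about equations x half-bandwidth entries. The absolute numbers below grossly overstate what a real sparse solver needs, but the ratio shows the topology effect:

    ```python
    # Naive banded-storage estimate for an (nx x ny x nz)-element structured grid.
    def banded_gib(nx, ny, nz, dof=3, bytes_per_entry=8):
        n = sorted([nx + 1, ny + 1, nz + 1])  # node counts per direction
        nodes = n[0] * n[1] * n[2]
        n_eq = dof * nodes                    # 3 DOF per node for solids
        halfband = dof * n[0] * n[1]          # smallest cross-section of nodes
        return n_eq * halfband * bytes_per_entry / 2**30

    print(banded_gib(128, 32, 128))  # thick block: ~157 GiB
    print(banded_gib(512, 1, 512))   # thin plate:  ~36 GiB
    ```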
  • Yeah, the number of nodes is multiplied by the degrees of freedom per node to get the total number of equations. The boundary conditions play into that as well, so node count is only a rough correlation to the actual memory limit, but it works out pretty well. I usually use solid elements; I remember doing a bunch of tests with Mecway in the past and I couldn't really use the plate elements for my models. With solid elements the equation count should be 3x the number of nodes, not counting any boundary conditions, so if you had 100,000 nodes and no boundary conditions I would expect 300,000 equations.
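    That arithmetic spelled out (simple counting only, nothing solver-specific):

    ```python
    # Equation count for a solid mesh: 3 translational DOF per node,
    # minus whatever DOFs the boundary conditions fix.
    nodes = 100_000
    dof_per_node = 3
    fixed_dofs = 0  # no boundary conditions applied yet
    equations = nodes * dof_per_node - fixed_dofs
    print(equations)  # -> 300000
    ```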

    Thanks for taking the time to run these tests, it is helpful to know what people can solve and with how much memory.
  • Looking at your results for solid elements, it looks like you are going out of core at around 200,000 nodes, or roughly 600,000 equations. You said you had 16GB of memory, so that is pretty close to what someone posted earlier, about 250,000 nodes with 16GB. I believe I was doing around 100,000 nodes with 4GB using v3 (which had the graphics slowdown too). So it looks like 4x the memory gets you about 2.5x the model size at this point. But if anyone else finds differently, please post your findings here. For test purposes I would suggest solving a cube with one fixed face and solid elements, just to keep model variations out of the mix. I have also found the solver type matters; linear static will be different than modal and so on. Usually a Block Lanczos modal analysis needs the most memory to hold the model in core and keep the solve time down, so a modal analysis with pre-stress and spin softening would be a nice test for memory limits.
  • Hi,

    I put together a table comparing model size versus run time and memory use for Mecway, CCX 2.8.2, and CCX 2.10 (kwip version) with parabolic elements. The first two models are probably more representative of an actual FEA model; the third one is just a square plate with an extra element layer or two between runs. Mecway results are for v4, but I reran a few with v5 and the results are within a few seconds. V5 also seems to help a bit with the memory issue I had; I was able to run a 377k-node tet model that crashed v4. I also had a bug with both versions of CCX where the program just crashes after the multithreaded part is done, without giving any error message, on large models (377k nodes for tets and 460k for bricks). Has anyone else had this issue before?

    As for the results, they show that CCX 2.10 is much faster than 2.8.2, which in turn was significantly slower than Mecway despite using 2 more threads. CCX is also more efficient with memory and keeps everything in core, unlike Mecway, which starts using the disk cache when more than 16GB of RAM is required.

    I hope this may be of some use to someone :)
  • That's a great chart, thanks. Some people questioned why I request this info. The reason is that I try to avoid going out of core in ANSYS when doing modal analysis; large models out of core were too much for the computer. So whenever I mesh, I try to keep the number of nodes below that threshold. I'm not sure if Mecway works the same way as far as out of core and in core. With Mecway I was just running into model size issues, period: it either solved or it didn't, or the visualization got too bogged down. So basically this info is to gauge what is likely to happen before you actually solve. Mecway v5 visualization is much better now.
  • That's a nice chart Simon. It really shows how much better kwip's CCX 2.10 is!

    It would be helpful to see your meshes because the topology has a huge effect on performance as shown in my table further up this thread. Two models took 4 minutes and 1.5 hours with the same number of nodes and the same element types. The only big difference was the shape of the model - single layer vs thick block.
  • The tet shaft is more in line with a real-world analysis in terms of shape and number of boundary conditions. I made the plate one because I had some problems with the tet mesher getting model sizes that were relatively close together. The half-cylinder is probably only valid if someone likes very, very fine meshes.
  • It looks like the original poster is no longer with us, but I have a bit more to add in case it's of interest.

    The steady-state thermal model that I have been working with lately has 69,261 elements (quadratic) and 284,774 nodes. The graphics are a bit sluggish, but perfectly manageable in Mecway 6 & 7. There is a bit of a pause (a few seconds) when adding/editing loads or hiding/showing parts of the mesh: the tree disappears then takes a few seconds to reappear. Solving takes about 10 minutes with 10 iterations.

    I thought I should try to run the model at higher resolution to see if the results are similar. I tried a few times with the refine 2x function, but the wait was too long each time and the operation never finished. I tried refining one component at a time, but Mecway adapted the mesh of the adjacent components to avoid cracks, so this was never going to work. I think this mesh adaptation must be an improvement over earlier versions, and a very handy one, though it did not help me on this occasion. I again tried doing the whole model at once, and amazingly it was done in well under an hour! I don't know what I did differently, possibly not selecting any elements as opposed to selecting all.

    So I now had a model with 565,238 quadratic elements and 2,194,687 nodes, about 8x more. Solving seemed to take about an hour per iteration, so in the end I left it running overnight. It solved fine (10 iterations) with about 0.03% difference in the peak temperature compared to the original model. Manipulation of the model was painful, taking between 5 and 10 seconds to respond to any manipulation or view change. A quick check of Task Manager indicated just over 4 GB of memory for Mecway (model open, with solution, not while solving).
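    The "about 8x" is what uniform refinement predicts: one refine 2x pass splits each 3D element into 2^3 = 8. A quick check against the numbers above:

    ```python
    # One uniform 2x refinement of a 3D mesh multiplies the element count
    # by roughly 2**3 = 8 (exactly 8 in the interior; boundaries vary slightly).
    elements_before = 69_261
    elements_after = 565_238
    print(elements_after / elements_before)  # -> ~8.16, close to the expected 8
    ```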

    If anyone needs more precision on the solve times, I can run either again and take proper notes. Any other info, just ask.

    Computer: Dell XPS 15 laptop, Intel Core i7-4712HQ CPU 2.30 GHz, 16 GB RAM, GeForce GT 750M graphics.
  • I got the error shown in the attached file. It looks like the matrix array is larger than supported, which is strange, because I'm reading in this thread that over 2 million nodes are also possible.
    I ran an analysis a few weeks ago with about 150,000 elements and 500,000 nodes, and it solved with no problems, quite fast, in about 2 minutes. That's why I assumed there would be no problem with numbers around twice as large.

    Does anyone know the reason for this error? I'm working in Mecway 13.1 and running a static 3D analysis with the internal solver. I'm using mostly quad8 and tri6 elements, with some link2 elements.
  • Sorry, I meant line2 elements. Fingers were too fast...
  • The maximum number of nodes depends strongly on the connectivity of the mesh. I just solved a flat quad8 & tri6 shell mesh with 973,000 nodes and it was OK. But if your model has a lot of T-connections or other ways that many elements share the same nodes, then it could get too big.
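    One rough way to quantify that connectivity is the average number of elements sharing each node (my own illustration; the tiny mesh below is made up):

    ```python
    # Average elements-per-node as a crude connectivity metric: higher values
    # mean more matrix entries per equation, hence a bigger factorization.
    from collections import Counter

    elements = [          # each element listed as a tuple of node ids
        (0, 1, 2, 3),
        (1, 2, 3, 4),
        (2, 3, 4, 5),
    ]
    count = Counter(node for elem in elements for node in elem)
    print(sum(count.values()) / len(count))  # -> 2.0 elements per node
    ```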