Ivar KJELBERG
COMSOL Multiphysics(r) fan, retired, former "Senior Expert" at CSEM SA (CH)
Posted: Apr 17, 2010, 2:05 p.m. EDT
Hi
No, you cannot increase anything; COMSOL will take what is available. To solve (very) large models you need a 64-bit PC and dozens of GB of RAM. I can get models with about 200 kDOF to run on my 2 GB laptop, and easily models of a few MDOF on my 32 GB UX PC.
That is why it is so important to use symmetry, and 2D when possible, as 3D is really RAM consuming ;)
If you are just at the limit, there is the "client-server" mode of COMSOL (see the installation manual); I managed to increase the size of my models by about 25% this way.
Have fun COMSOLing ;)
Ivar
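(For reference, the client-server split runs COMSOL as two separate processes. On a typical install it looks roughly like this — the exact launcher names and the connection dialog vary by version and platform, so treat this as a sketch:

comsol server    (starts the solver/server process; it prints the port it listens on, typically 2036)
comsol           (starts the GUI separately; connect it to the server from inside the program)

The gain is that the graphics front end and the solver no longer compete inside a single process's address space.)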
Posted: Apr 17, 2010, 4:08 p.m. EDT
Dear Ivar,
Thanks for your great support.
Thanks again.
Manjula
Posted: Apr 29, 2010, 3:13 p.m. EDT
Hi,
In my case, the DOF count is about 60 k and the number of elements about 13 k (with an extremely coarse mesh). My machine is 32-bit with 4 GB of RAM. Still it says "out of memory during LU decomposition".
So my question is: is there no way I can increase virtual memory?
Regards,
Susant
Ivar KJELBERG
COMSOL Multiphysics(r) fan, retired, former "Senior Expert" at CSEM SA (CH)
Posted: Apr 29, 2010, 7:44 p.m. EDT
Hi
That is already quite a lot; in structural I could run much larger models with my 2 GB laptop. Have you tried to start COMSOL with the option "-np2" to use both cores? You probably have two on your PC, because in 32-bit I'm not sure a single core can use more than 2 GB. Then there is the client-server mode, to reserve less for the graphics and to allow it to be swapped better.
(And I'm not sure importing the mesh will make it easier, as then COMSOL needs to analyse the geometry, and that is too RAM consuming.)
But "LU decomposition" failures could also be related to badly conditioned matrices, hence some BC aspects, I believe.
Good luck
Ivar
Posted: Apr 29, 2010, 8:41 p.m. EDT
Hi,
How can I start COMSOL with the -np2 option? When I click the COMSOL icon to start, it does not ask about it.
Regards,
Susant
Jim Freels
mechanical side of nuclear engineering, multiphysics analysis, COMSOL specialist
Posted: Apr 29, 2010, 10:03 p.m. EDT
Change the command setting in your icon: right-click the icon, then left-click Properties.
Ivar KJELBERG
COMSOL Multiphysics(r) fan, retired, former "Senior Expert" at CSEM SA (CH)
Posted: Apr 30, 2010, 1:55 a.m. EDT
Hi
Yes, on a PC, edit the shortcut to the executable and add (I'm guessing the exe name and path):
Target: "C:\...\comsol.exe -np2"
Better to do it on a copy of your standard shortcut, named e.g. "Comsol np2", because once started in this mode COMSOL does not leave much CPU power for your OS ;)
Have fun Comsoling
Ivar
Posted: Jun 14, 2010, 8:08 a.m. EDT
Hi All,
I was just trying to run COMSOL 3.5a on my PC with the -np2 switch when I got the error message:
"The name "C:\Program Files\COMSOL\...\comsol.exe -np2" specified in the target box is not valid. Make sure the path and file name are correct."
Any ideas what I might be doing wrong? I simply copied the shortcut and tried to add the -np2 to the target text.
Thanks in advance,
Arda
Jim Freels
mechanical side of nuclear engineering, multiphysics analysis, COMSOL specialist
Posted: Jun 14, 2010, 10:40 a.m. EDT
Perhaps it should be "-np 2" in the switch listing; i.e., add a space between "np" and "2".
Posted: Jun 14, 2010, 10:47 a.m. EDT
Thanks for the quick reply, James, but unfortunately that didn't solve it. I actually tried different combinations like "-np 2", "- n p 2", " - n p2", etc., but none of them worked.
Any other ideas?
Jim Freels
mechanical side of nuclear engineering, multiphysics analysis, COMSOL specialist
Posted: Jun 14, 2010, 11:31 a.m. EDT
What operating system are you using? How many processors/cores do you have? Are you sure that the multiprocessing capability is working for you outside of COMSOL (in Windows, use Task Manager to view)?
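(A likely cause of that "not valid" error: the -np2 switch has ended up inside the quotation marks along with the path. In a Windows shortcut, the quotes should close after the .exe, and the switch goes outside them. Keeping the elided path from the error message above, the Target line would look something like:

Target: "C:\Program Files\COMSOL\...\comsol.exe" -np 2

)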
Posted: Jun 24, 2010, 1:44 p.m. EDT
Hello sir,
I also stumbled on a memory allocation problem. I am doing 3D DC conduction simulations, and I use meshes with between 35 k and 100 k elements. I use the TAUCS solver since the matrices are symmetric, and no matter what, it always returns "out of memory during sparse matrix operations" when I go beyond 50 k elements. It seems to calculate the memory requirements before actually using the memory, and it never goes beyond 1.75 GB (total) when the solver works. I thought this was because of the Java heap space that the application allocates but that somehow does not appear in the Task Manager.
How can we really turn on virtual memory use? It says a program can normally use 2 GB of virtual memory, but the solver doesn't even seem to try. I saw the page in the help about going from 2 to 3 GB of virtual memory, but here my solver doesn't even use the 2! I heard about an "out of core" mode for PARDISO; can it be used for TAUCS, and how? With only 2 GB of RAM my system is very limited, but my processor is still quite fast, and the program always runs out of memory within 25 s. I tried the iterative solvers, but for some reason they never converge (the convergence parameter always goes down to 1e-3, sometimes 1e-4, but always explodes afterwards); I have only seen them converge on the models from the model library!
And what about the UMFPACK solver? I heard it preallocates its memory, and for its part it always runs out of memory too early (using 200 MB), or uses very little memory when it succeeds (less than 100 MB in one example). How could we force it to use all the memory available? I clearly see the preallocation in the Task Manager, and it is 100 MB! I tried to change the "memory allocation factor" in the solver parameters, but it didn't help. I also changed the "pivot threshold" value, which for a funny reason probably has the same value as the number of GB used (0.1), but of course it didn't change anything either. Could you help me?
Thank you in advance for your help!
Jean-Pierre
Jim Freels
mechanical side of nuclear engineering, multiphysics analysis, COMSOL specialist
Posted: Jun 24, 2010, 4:35 p.m. EDT
If you have a Windows or Linux operating system installed on a 32-bit processor, then COMSOL, or any other code for that matter, cannot utilize more than about 2 GB per process (~2^31 bytes). This is a limitation of the hardware. It turns out that Linux will actually get closer to this limit than Windows will, with proper tweaking. So you may have a 32-bit machine with more than 2 GB installed, but you will only get about 2 GB per process at a maximum.
On the other hand, if you have a 64-bit processor-based computer with an operating system that takes advantage of it (such as Linux, Windows XP 64, etc.), then COMSOL, or any other capable code, can utilize as much memory as you can give it. I routinely run COMSOL on a 64 GB amd64 machine running Linux, and COMSOL will use all of this memory, or more into the swap space, if the problem is large enough.
So, if you use COMSOL on any problems of substantial size, I highly recommend a 64-bit computer with as much memory as you can afford to install in it. Furthermore, I recommend Linux as an operating system for many more reasons beyond COMSOL.
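(To put a number on that ceiling, a trivial sketch in Python; the half-and-half user/kernel split, and the /3GB boot option that relaxes it, are Windows-specific details:

total_address_space = 2**32        # a 32-bit pointer can address 4 GiB in total
windows_user_default = 2**31       # Windows keeps half of it for the kernel by default
print(total_address_space / 2.0**30, windows_user_default / 2.0**30)
# -> 4.0 2.0  (GiB; the /3GB boot switch can push the per-process share to ~3 GiB)

)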
Posted: Jun 24, 2010, 6:21 p.m. EDT
Thank you, I didn't understand this at first; that's right. But I checked, and the furthest the COMSOL process ever went on my machine was 1.2 GB, never more. And the biggest problem is with UMFPACK, which barely preallocates 100 MB of space and interrupts if it reaches this limit! I guess even on a 32-bit system I can do better than that. About getting a new machine (I only have a laptop right now!), I had already thought of 64-bit systems, but not for that reason (it was only about having more than 4 GB of RAM, but what you said is another very good reason!), and I take note of your remark about Linux. For now, could you help me with UMFPACK and its 100 MB maximum?
Thank you very much for your help
Ivar KJELBERG
COMSOL Multiphysics(r) fan, retired, former "Senior Expert" at CSEM SA (CH)
Posted: Jun 25, 2010, 3:03 a.m. EDT
Hi
Check the forum; there are a few discussions already about this. You have the client-server mode, which allows you to push perhaps 30% more out of your RAM. But if you are on a laptop, even with 2 GB of RAM you cannot do much. I also started that way, but since I switched to a dual-boot Win/Linux 64-bit workstation with 48 GB of RAM and 12 CPU cores, I have forgotten about these issues.
Before that, I ran models up to 1 MDOF on my 2 GB laptop with 32-bit Windows, but I had to use COMSOL in client-server mode, I had to stop all unnecessary tasks and processes, and I had to check that my graphics card did not block too much of the RAM, as RAM is mostly shared on a laptop (CPU + graphics).
But with less RAM you become more clever about how to simplify your model and choose what is really required.
Good luck
Ivar
Posted: Jul 6, 2010, 5:37 a.m. EDT
Hello,
Thank you for all this. In fact I found out (I did not realize it at first) that my 3D EM problems involved symmetric positive definite matrices, so with TAUCS and multigrid I could do much more than with all the direct solvers.
But now I have to do some MHD, and even solving the equations separately, I need a time-dependent solver/preconditioner that, on one hand, does not care about the unsymmetry of the Navier-Stokes matrices and, on the other hand, can be iterative. In version 3.3 I only found GMRES + incomplete LU. EVERYTHING ELSE gets me in trouble: all direct solvers of course run out of memory, and as I understood it, in time-dependent mode (that's the point!) I can forget about multigrid. I think I see why: the mass coefficient for this damn pressure is always zero (I would like to say "I don't care about pressure, all I want is my velocity field!", but of course it needs the pressure to solve for it anyway...). TAUCS and CG are also impossible to use. This leaves what I said: GMRES + incomplete LU. But actually this combination does not work: already in 2D, with sometimes crooked volume forces, its convergence is incredibly slow, almost reaching the default limit of 10000 iterations; and with my 3D model, either it just blocks at a convergence parameter that is sometimes 50-something, sometimes 0.14, or, if I refine the mesh to cover the boundary layers at least a little, it immediately runs out of memory.
And in all of this, for my Navier-Stokes equation I consider only one domain of my geometry, which always has fewer than 10000 nodes, and the degrees of freedom are around 200 k, sometimes fewer! How can even an iterative solver get stuck or run out of memory with that? And as I said, the volume forces are fixed, because I compute the currents and magnetic field, then solve separately for the velocity field with the EM variables kept constant. This is why I need a time-dependent analysis: I am going to script the resolution to use two different solvers: one stationary, with multigrid, for the electric currents; then a time-dependent unsymmetric solver to solve for 0.1 s of flow; then compute the electric currents in response to the change of velocity, and thus induction, and restart the NS flow computation... But as I said, I can't even complete the first step, because I couldn't run a single 3D NS simulation with volume forces! And I remember that some time ago I saw a multigrid solver for Navier-Stokes, and it seemed to me that it could handle time dependence. But maybe all I saw in the visualization was the progression of the stationary convergence itself...
All that is left is to use stationary multigrid to solve the coupled equations (NS + EM, stationary). But I doubt my system has a stationary solution... which is why I wanted to see the film (the time evolution)!
And one last thing: I have only one laptop at my disposal, so I guess I can't hope to do much with the client/server scheme on a single computer... I already switch off everything before solving; when I close COMSOL the system only uses 350 MB for Windows and other small components... and my graphics card has 256 MB of its own memory, which should be sufficient! How can you stop the graphics card from using some of the RAM? And do you see the graphics card memory consumption in the Task Manager? Because it still looks like, when GMRES runs out of memory, I have 500 MB free in my RAM...
Jim Freels
mechanical side of nuclear engineering, multiphysics analysis, COMSOL specialist
Posted: Jul 6, 2010, 10:24 a.m. EDT
Welcome to the world of 3D NS, time-dependent, plus your other physics! There are people and supercomputers here at ORNL researching how to improve the solution of such problems so they can run on the world's largest supercomputers in less time.
I really think you are probably getting the most you can out of your laptop Windows system.
The method I use to solve 3D NS in COMSOL 3.5a is what COMSOL recommends for such problems:
1. Segregated solver (stationary or time-dependent, although stationary is reasonable since the time-dependent solver essentially repeats the stationary solver for each time step).
2. Split the NS portion into {u,v,w,p} and {logk, logd}.
3. Use the GMRES or FGMRES iterative solver.
4. Use GMG as the preconditioner, typically with at least 3 manually created fixed meshes. The coarsest mesh must fit into the memory of your machine with a direct solver (test this first to make sure). The coarsest mesh must do a reasonable job of capturing the physics (i.e., at least a point or two into the boundary-layer mesh). You want the finest mesh portion of the GMG to also fit into the memory of your machine using GMRES/GMG. If it starts to page into virtual memory, it will be so slow you can't wait for it, and it will wear out your hard drive prematurely (thrashing the drive continuously).
5. There are tricks to setting up your fixed meshes. I typically use linear elements for all. The method breaks down if you try to go from the fine to the coarsest mesh while increasing element order. So you can start, for example, with a quadratic mesh, then go down to linear to decrease the order, etc. To be safe, I just use all linear elements and refine my mesh accordingly.
6. Use SOR, 5 iterations, 0.8 damping for both the pre-smoothing and post-smoothing settings.
7. Use PARDISO for the direct solver of the coarsest mesh.
8. And finally, I also impose manual scaling in the advanced settings tab, using a nominal setting of the maximum expected value of each variable as the scaling factor. I found this to dramatically improve convergence over the other scaling options.
All this tweaking is worth it, because it should reduce the solution time to about 1-2 weeks on an 8-core, high-memory compute node with 3.5a for a typical problem of about 5 MDOF.
I anticipate that COMSOL v4.0a will improve on this situation. I look forward to seeing what size problem I can solve with the new parallel processing capability.
Hope this gives you a feel for what is required.
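(To make the GMG step above concrete, here is a toy two-grid V-cycle for a 1D Poisson problem, written from scratch in Python. It only illustrates the smooth / restrict / coarse-solve / prolong / smooth pattern, with SOR smoothing as in item 6; it is in no way COMSOL's implementation:

import numpy as np

def sor_smooth(u, f, h, iters=5, omega=0.8):
    # damped Gauss-Seidel (SOR) sweeps for -u'' = f with u = 0 at both ends
    for _ in range(iters):
        for i in range(1, len(u) - 1):
            gs = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
            u[i] = (1.0 - omega) * u[i] + omega * gs
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid(u, f, h):
    u = sor_smooth(u, f, h)                      # pre-smooth on the fine grid
    r = residual(u, f, h)
    r_c = r[::2].copy()                          # restrict residual to the 2h grid
    r_c[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]  # full weighting
    n_c = len(r_c) - 1
    A_c = (np.diag(2.0 * np.ones(n_c - 1))       # coarse operator, solved directly
           - np.diag(np.ones(n_c - 2), 1)
           - np.diag(np.ones(n_c - 2), -1)) / (2.0 * h) ** 2
    e_c = np.zeros(n_c + 1)
    e_c[1:-1] = np.linalg.solve(A_c, r_c[1:-1])
    e = np.zeros_like(u)                         # prolong the correction by linear interpolation
    e[::2] = e_c
    e[1::2] = 0.5 * (e_c[:-1] + e_c[1:])
    return sor_smooth(u + e, f, h)               # post-smooth

# demo: -u'' = pi^2*sin(pi*x) on [0,1]; the exact solution is sin(pi*x)
n, h = 64, 1.0 / 64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(20):
    u = two_grid(u, f, h)
print(abs(u - np.sin(np.pi * x)).max())          # settles at the O(h^2) discretization error

In COMSOL the coarse direct solve is the PARDISO step in item 7, the smoother is the SOR of item 6, and the whole cycle is applied as a preconditioner inside GMRES rather than as a stand-alone iteration.)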
Posted: Jul 19, 2010, 11:49 a.m. EDT
Waou! 2 weeks!! If only I had a way of keeping something running that long and taking advantage of it... but no: either it runs 20 min max and does not converge, or it runs out of memory. Always. You are right; I have admitted that I can't get anything out of my machine for NS WITH A BOUNDARY LAYER. Without it, it converges, and I can indeed see something! Of course it is so much easier... but you know, even with GMRES and no multigrid, the finest grid I can afford fits at most 8 points from one wall of my cylinder to the other (coaxial geometry), and the boundary layer may be only one tenth of that distance, probably much less. My first point beyond the boundary may be 3 times further out than the end of the boundary layer, maybe even more, although I managed to refine only around the points where the volume force is strongest. I even tried to reduce my geometry size by a factor of 10, but no way, it never converges. Maybe I will try your manual scaling, as along the z-coordinate I have almost no velocity (but that was the point: I wanted to see how it deviates from zero, as the volume forces are localized!), though I think it is pointless.
But is it wrong to make very flat tetrahedra near the boundary? I mean flat, so that you don't need 1 M elements just to cover the surface with triangles the size of half your boundary layer. I would like quite large triangles over the cylinder surface, but the tetrahedron's height, along the coordinate r, a lot smaller, to get closer to the wall than the end of the boundary layer. I can force that by adding an intermediate cylinder, very close to the wall and used only for meshing. But then it says "low quality elements", and the solver won't like it. Yet the physical phenomena depend only weakly on theta and z, and strongly on r, near the outer wall of a cylinder! That means the mesh should be allowed to be coarser in theta and z, without loss of accuracy or failure to keep up with the scale of the changes of the quantities across space!
In general I find that COMSOL does not facilitate cylindrical geometries where the physical quantities are NOT axisymmetric. Here my flow is in a perfect coaxial cylinder, but the forces applied to it are not axisymmetric. And I did not find how to make a quadrangle/hexahedral mesh based on the coordinates r, theta and z; this would not cause any problem, as my cylinder is coaxial and no domain reaches r = 0 or even approaches it. With this, I guess the computation would be a lot more stable; and if we could specify different steps in r than in theta and z, like we can with the advanced mesh parameters for x, y and z, there would always be a point on the boundary directly facing a point inside the fluid, close to the boundary. It would not be an interpolation based on three distant points, as it is with "flat" tetrahedra. I know my reasoning sounds a little like finite differences, but I guess this kind of argument is the reason why flat tetrahedra are said to be of "poor quality"...
And another thing: why doesn't COMSOL integrate, in its 2D axisymmetric NS mode, the possibility of a flow with velocity along v_theta only, i.e. only azimuthal velocity? In most cases this approximation is correct, and it simplifies the problem A LOT in terms of 3D -> 2D. I mean, even in 3D you can solve for u and v only, when you know your flow is very likely in x and y only (although different x-y planes rub against each other, each in a different way, which means it is not simple 2D), but you cannot solve only for u_theta, can you? This would help a lot, and would allow apparently difficult problems to be solved approximately even on small machines, by reducing part of the complexity: not 2D, but not full 3D NS either.
Thanks for your answer anyway; it made me understand what ground I am playing on with this kind of problem...
Posted: Oct 5, 2010, 1:29 a.m. EDT
Hello James, or whom it may concern,
I currently use COMSOL 4.0a for large magnetic field problems. Most of my problems are large thin ferrous structures in air, with about 28 M DOF, on a 64-bit, 64 GB RAM, 8-core computer. I use the default solver and it took 50 hours to solve. I tried solving another problem of about 40 M DOF, but got the out-of-memory message...
So how has your experience been with COMSOL 4.0a? Would you say the situation has improved with the parallel processing? Any suggestions?
And how do I take advantage of this parallel processing capability?
--
merci
Ivar KJELBERG
COMSOL Multiphysics(r) fan, retired, former "Senior Expert" at CSEM SA (CH)
Posted: Oct 5, 2010, 2:25 a.m. EDT
Hi
I did not really catch your point that you cannot make advanced meshes in 2D-axi cases ported to 3D? For me, with the "revolved mesh" feature you can make nice regular meshes. True, you must define a work plane and reconstruct the projection by hand (if you haven't done it in your CAD).
--
Good luck
Ivar
Jim Freels
mechanical side of nuclear engineering, multiphysics analysis, COMSOL specialist
Posted: Oct 5, 2010, 9:49 a.m. EDT
You are using the shared-memory form of parallel processing (PP) right now. With this type of PP you get the best performance, but you still need the same amount of memory to run a problem (actually, a little more memory than a single core running). If you obtain a cluster of computers (normally purchased as a cluster from a vendor), then you gain improved performance AND reduced memory requirements by using distributed parallel processing (DPP). So, in your case, you can add even more memory to your single node (multiple cores), you may also be able to add additional cores to your motherboard, or you may be interested in a cluster. I have a paper in the conference this week which touches on these topics and shows some results from a cluster on an NS problem.
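(For the distributed mode, version 4.x exposes a -nn (number of compute nodes) switch for cluster runs, along the lines of:

comsol batch -nn 4 -inputfile model.mph -outputfile out.mph

The switch name is from the version 4 cluster documentation; the surrounding MPI/scheduler setup is installation-specific, so treat the line as a sketch.)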
Posted: Oct 5, 2010, 9:59 a.m. EDT
James Freels,
How do I get a copy of your paper? Can you send it? I could not find it. I appreciate your response.
--
merci
Jim Freels
mechanical side of nuclear engineering, multiphysics analysis, COMSOL specialist
Posted: Oct 5, 2010, 10:24 a.m. EDT
The conference is this week, so it has not been published yet. You can view the abstract which does have the two key figures on parallel processing in it:
www.comsol.com/conference2010/usa/abstract/id/7970/freels_abstract.pdf
Look for the conference proceedings on a CD published later.
Posted: Nov 2, 2010, 5:04 a.m. EDT
Sorry for stepping into this topic, but I've got a question related to memory allocation.
I also do some simple 3D models, and on my laptop with 64-bit Ubuntu, 4 GB of RAM, and additional swap space, I can solve my model in a reasonable amount of time (~300 s). However, I need to perform calculations for a large number of frequencies (RF Module). I use COMSOL 3.5.
When I run a parametric sweep, I see that with each step COMSOL requires more memory; after 7-10 steps a calculation takes too long (up to 2000 s) and the usage of swap space increases.
When I close COMSOL and run it again, everything is back to normal: my model with ~1 M DOF fits into the RAM and I can solve it in 300 s again.
Is there anything I can do or check about that?
At first I considered running COMSOL in batch mode from an external shell script, solving each model separately (I believe it would close after solving and free up the memory, just like the by-hand solution). Unfortunately, I can't do batch *.mph processing in v3.5.
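(For anyone on version 4 or later, where batch runs of .mph files are supported, one workaround for this kind of creep is to force a fresh process per frequency from a small driver script, so that all solver memory is returned to the OS after each run. A sketch in Python; the -inputfile/-outputfile switches are the version 4 batch syntax, while model.mph and the frequency list are placeholders:

import subprocess

for freq in ["1.0e9", "1.1e9", "1.2e9"]:
    # each invocation is a separate OS process; its memory is freed when it exits
    subprocess.run(
        ["comsol", "batch", "-inputfile", "model.mph",
         "-outputfile", "out_%s.mph" % freq],
        check=True)

How the parameter value itself is passed into the model depends on the version; check the command reference of your installation.)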
Jim Freels
mechanical side of nuclear engineering, multiphysics analysis, COMSOL specialist
Posted: Nov 2, 2010, 9:59 a.m. EDT
I think the parametric sweep will save the solution for each specified parameter value. That way you can go back and plot the results as a function of the parameter changes. This is why your available memory shrinks as the sweep progresses. There is a check box in the advanced solver tab that allows you to "save the solution to file". This might help.
Another thing you can do is look into iterative methods, if you are using a direct solver. It sounds like you probably are, if you can solve a 3D problem in 300 s. Iterative methods will use less memory but take longer to solve. Also, v4.1 has some exciting new solvers (projection methods) for 3D Navier-Stokes that reduce the iterations to one variable at a time.
Otherwise, you may want to consider more memory. Since you have a 64-bit machine, 4 GB is actually quite low for 3D problems. I started with a 16 GB machine, and now we have progressed to 64 GB as a "standard" size for our 3D problems. Memory is pretty cheap now.
Good luck ...
Posted: Nov 4, 2010, 4:03 p.m. EDT
Unfortunately, checking "save solution to file" didn't change anything.
Frequency sweeps for 100 frequencies are out of my reach with this issue.
I have to sit and click "open/solve/close" repeatedly :(
Posted: Apr 16, 2011, 8:11 a.m. EDT
[QUOTE]
Unfortunately, checking "save solution to file" didn't change anything.
Frequency sweeps for 100 frequencies are out of my reach with this issue.
I have to sit and click "open/solve/close" repeatedly :(
[/QUOTE]
Hi,
I am coming across the same problem when doing a parametric sweep.
The memory usage increases as the parametric solver goes on.
Have you resolved your problem? And how did you fix it?
Any advice would be helpful.
Thanks
Best regards
Yulong Yang
Posted: Apr 18, 2011, 7:18 p.m. EDT
I just got done with a huge calculation involving a thousand parameter values. I do have 'Store solution out-of-core' checked. That is probably why I don't get a memory error message. If that doesn't work, I don't know if I can help, though I will keep my eyes open.
Posted: Apr 20, 2011, 3:17 a.m. EDT
Thanks for your reply.
I have got through the calculation with "store out of core", really thanks to your suggestion.
But how do you deal with so many solution files, especially when you want to extract values from all the solutions?
I have about 100 parametric solutions generated by the parametric sweep. Each solution contains some time steps belonging to a transient process. With so many sub-solutions, I don't know how to do the post-processing.
Posted: Aug 27, 2014, 1:42 p.m. EDT
Hello
How would one run a client-server setup on the same computer?
Where do I find the information on which host and port to use for this application?
Thanks very much
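(For a single machine, both halves run locally. Roughly — details are version-specific, see the Installation and Operations Guide of your version:

comsol server    (on first start it asks you to define a username/password, then prints the port it listens on, 2036 by default)

Then, in the COMSOL client, connect to host localhost with that port and login.)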