How does the RAM reserved for caching relate to the RAM needed for computing?
If I give too much RAM to caching, would it limit the performance of computing the sim?
If I am only interested in having particles saved to disk, should I minimize the cache to get better performance?
Once again, it was broken in the last build. Did you read my other answer? I posted an updated build just for you where the limit works better, and we are working on a more dynamic memory manager for the build after that.
Yes, I read it. But the question is slightly different.
I answered there, but I will do it again here:
*If you have a lot of RAM, I would suggest reserving as much as you can for the Memory Cache. In the build I posted, the writing buffer is stuck at 512MB (which is a bug), so you cannot change the speed of saving to PRTs, but you can speed up the simulation itself. Since the two processes (sim and saving) are asynchronous now (two independent threads), you can dump all your particles to memory and then continue using Max and your computer to do other things while the background thread writes out all PRTs.
*Hopefully in the next build, you will simply give Stoke the amount of memory you know you have available, and it will balance the sizes of the two buffers as needed to give you the fastest simulation AND the fastest saving. (Someday in the future we might even allow multi-threaded saving for SSDs and Fusion-io drives, since the majority of the saving time is single-threaded ZIP compression.)
So if you have 16GB of RAM and 4GB are already used by Max, Windows etc., give Stoke 10GB and see how it goes… Reducing the size of the memory cache is a bad idea, because then the simulation becomes coupled with the saving: there will not be enough buffer space to store the simulation results, and the sim thread will have to wait for the write thread to advance before it can pass particles from one buffer to the other.
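To make the coupling point concrete, here is a minimal Python sketch of the idea (my own illustration, not Stoke's actual code, and the timings are made up): a bounded queue stands in for the memory cache, one thread "simulates" frames quickly, and a second thread "writes" them slowly. With a cache that is too small, the sim thread stalls every time the cache fills up, so it can only run as fast as the writer; with a large cache, the sim finishes at full speed and the writer keeps draining in the background.

```python
import queue
import threading
import time

# Made-up numbers for illustration only; not Stoke's real throughput.
FRAMES = 20
SIM_TIME_PER_FRAME = 0.01    # computing a frame is fast
WRITE_TIME_PER_FRAME = 0.05  # ZIP-compressing and writing a PRT is slow

def run(cache_frames):
    """One run with a memory cache that can hold `cache_frames` finished frames."""
    cache = queue.Queue(maxsize=cache_frames)  # stands in for the memory cache
    start = time.perf_counter()
    sim_done = None

    def sim_thread():
        nonlocal sim_done
        for frame in range(FRAMES):
            time.sleep(SIM_TIME_PER_FRAME)  # "compute" the frame
            cache.put(frame)                # blocks whenever the cache is full
        sim_done = time.perf_counter()

    def write_thread():
        for _ in range(FRAMES):
            frame = cache.get()              # wait for the next finished frame
            time.sleep(WRITE_TIME_PER_FRAME) # "compress and write" the PRT

    writer = threading.Thread(target=write_thread)
    sim = threading.Thread(target=sim_thread)
    writer.start()
    sim.start()
    sim.join()
    print(f"cache of {cache_frames:2d} frames -> sim finished in {sim_done - start:.2f}s")
    writer.join()  # let the background writer drain before the next run

run(cache_frames=2)       # small cache: the sim is coupled to the (slow) writer
run(cache_frames=FRAMES)  # big cache: the sim finishes at full speed, writer drains later
```

Running it, the small-cache case takes roughly as long as the writing itself, while the large-cache case finishes the simulation in a fraction of that time, which is exactly why giving the memory cache as much RAM as you can spare helps.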
That’s clear now, thanks. Sorry if I missed something there.