When I render a PRT Hair frame with lots of particles, it takes at least as much time to get back to the viewport as it did to render.
I'm assuming this is the memory being freed afterwards?
Can we get an option for the particles to stay in memory, so they don't have to be retrieved again when just changing options etc.?
Nope, what you are seeing is something completely different, and we have some ideas about how to deal with it.
In short, the Max Hair&Fur SDK is so poorly written that it doesn’t even provide a method for getting the splines directly. What it does have (and it is even exposed to MAXScript) is an option to dump all splines to a file in a special format. So whenever we render or display the particles in the viewport, a temporary file is written to disk, then read back and turned into particles.
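To make that round trip concrete, here is a minimal C++ sketch of the dump-and-reparse data flow. None of these names (Point3, Spline, DumpSplinesToFile, ParseSplineFile) come from the actual Hair&Fur SDK or Krakatoa; they are hypothetical stand-ins, and the file format is invented purely for illustration:

```cpp
// Hypothetical sketch of the "splines -> temp file -> parse -> particles"
// round trip described above. Not the real SDK API or file format.
#include <cstdio>
#include <string>
#include <vector>

struct Point3 { float x, y, z; };
using Spline = std::vector<Point3>;   // one hair = a polyline of knots

// Write every spline to a simple text file, one knot per line,
// with a blank line separating hairs (stand-in for the dump format).
void DumpSplinesToFile(const std::vector<Spline>& hairs, const std::string& path) {
    FILE* f = std::fopen(path.c_str(), "w");
    if (!f) return;
    for (const Spline& hair : hairs) {
        for (const Point3& p : hair)
            std::fprintf(f, "%f %f %f\n", p.x, p.y, p.z);
        std::fprintf(f, "\n"); // hair separator
    }
    std::fclose(f);
}

// Read the file back and rebuild the splines - the step that turns
// the dumped data into viewport or render-time particles.
std::vector<Spline> ParseSplineFile(const std::string& path) {
    std::vector<Spline> hairs;
    FILE* f = std::fopen(path.c_str(), "r");
    if (!f) return hairs;
    Spline current;
    char line[256];
    while (std::fgets(line, sizeof(line), f)) {
        Point3 p;
        if (std::sscanf(line, "%f %f %f", &p.x, &p.y, &p.z) == 3) {
            current.push_back(p);
        } else if (!current.empty()) {      // blank line ends the current hair
            hairs.push_back(std::move(current));
            current.clear();
        }
    }
    if (!current.empty()) hairs.push_back(std::move(current));
    std::fclose(f);
    return hairs;
}
```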
When Max goes to render a frame, it has to discard the particle info used for the viewport and switch to “render mode”. So it dumps the hairs to disk, then reads them back and creates new render-time particles. When the rendering finishes, it switches back to viewport mode and currently simply repeats the same process: dumping the hairs to a file, reading them and creating viewport particles. The particle creation itself is actually pretty fast (and will be even faster in future builds); it is the writing of the file that is slow, and that is the part we have no control over.
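Reusing the stand-in helpers from the sketch above, the current switch sequence looks roughly like this (again, invented names, not the real call sequence). Note that the slow file dump happens on both sides of the render, and the post-render dump is pure waste whenever the hair has not changed:

```cpp
// Rough sketch of the current render <-> viewport switch.
std::vector<Spline> sceneHairs;               // whatever Hair&Fur currently holds
const std::string tempPath = "hairdump.tmp";  // hypothetical temp file

void OnSwitchToRenderMode() {
    // viewport particles are discarded here
    DumpSplinesToFile(sceneHairs, tempPath);        // slow: disk write
    auto renderSplines = ParseSplineFile(tempPath); // read back
    // ...build render-time particles from renderSplines (fast)...
}

void OnSwitchToViewportMode() {
    DumpSplinesToFile(sceneHairs, tempPath);          // slow again - redundant
    auto viewportSplines = ParseSplineFile(tempPath); // if the hair is unchanged
    // ...build viewport particles from viewportSplines (fast)...
}
```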
What we might do in the future is create multiple numbered files on disk, one per frame, and use them as a cache. If the hair does not change, we could simply keep reading the existing cache files from disk, and the more you move around the timeline, the more files you would have cached for fast access. That way, when a rendering finishes, the viewport would not have to wait for new splines to be dumped to disk, but could just load the last file and build new particles immediately.
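Such a cache could be as simple as keying the dump file by frame number and checking for it before asking Hair&Fur to dump again. Continuing with the same invented names (the naming scheme and LoadCachedFrame are illustration only, not the planned implementation):

```cpp
#include <cstdio>
#include <string>

// Build a numbered cache path for a frame, e.g. "hairCache_0042.tmp".
std::string CachePathForFrame(int frame) {
    char buf[64];
    std::snprintf(buf, sizeof(buf), "hairCache_%04d.tmp", frame);
    return std::string(buf);
}

bool FileExists(const std::string& path) {
    if (FILE* f = std::fopen(path.c_str(), "rb")) { std::fclose(f); return true; }
    return false;
}

// Only pay for the slow dump the first time a frame is visited; later
// visits (including the post-render viewport rebuild) just re-read the file.
std::vector<Spline> LoadCachedFrame(int frame, const std::vector<Spline>& hairs) {
    const std::string path = CachePathForFrame(frame);
    if (!FileExists(path))
        DumpSplinesToFile(hairs, path);  // slow path, once per frame
    return ParseSplineFile(path);        // cheap re-read thereafter
}
```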
The actual particle loading will be multi-threaded (right now it isn’t), so it will be nearly instantaneous.
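For a sense of what multi-threading the particle creation could look like, here is one way to parallelize it in C++ (an assumption-laden sketch using the same Point3/Spline stand-ins, not the actual Krakatoa code). The key idea is to precompute disjoint output ranges so the worker threads never contend:

```cpp
#include <algorithm>
#include <thread>
#include <vector>

struct Particle { Point3 pos; };

// Sketch of multi-threaded particle creation: each worker converts a
// strided subset of the splines into particles (one per knot here).
std::vector<Particle> BuildParticlesParallel(const std::vector<Spline>& hairs) {
    // Precompute where each hair's particles start in the output array,
    // so threads can write to disjoint ranges without locking.
    std::vector<size_t> offset(hairs.size() + 1, 0);
    for (size_t i = 0; i < hairs.size(); ++i)
        offset[i + 1] = offset[i] + hairs[i].size();

    std::vector<Particle> out(offset.back());
    const size_t nThreads = std::max<size_t>(1, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (size_t t = 0; t < nThreads; ++t) {
        workers.emplace_back([&, t] {
            for (size_t i = t; i < hairs.size(); i += nThreads)   // strided split
                for (size_t k = 0; k < hairs[i].size(); ++k)
                    out[offset[i] + k].pos = hairs[i][k];
        });
    }
    for (std::thread& w : workers) w.join();
    return out;
}
```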
We had similar issues with Frost returning from rendering and solved them by caching the viewport mesh across the render/viewport switch. We know the current behavior is suboptimal and will try to make it better.