
Nonlinear slowdown

Version 1.5.1.38002

I’ve got a real stinker of a render going on…

It’s 2 PRT Loaders, a total of 132M particles per frame, before any culling or whatnot.

Without lighting, the first PRT Loader renders in 3 minutes.
Without lighting, the second PRT Loader renders in 7 minutes.
Without lighting, together they render in 154 minutes.
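
For scale, if loading cost were additive, the combined render should take about the sum of the two individual times. A quick check of how far off that is:

```python
# Times reported above, in minutes.
t1, t2, together = 3.0, 7.0, 154.0
print(together / (t1 + t2))  # ~15.4x slower than the additive expectation
```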

Total memory consumption is never more than about 4GB, and I have 8GB installed, so that shouldn’t be the issue.

Any guesses why the “Retrieving Particles” step would take so long?

  • Chad

Ouch!

Have you tried looking at the Log Window to see where the time goes? Is it slow while loading, or slow while sorting, or slow while drawing?
There should be some profiling data about how long it took to load and perform all other stages. It would be useful to know what portion of the process might be causing this…

Does it show the same problem if you load 1% of the particles with First N (thus speeding up the testing while still causing a slow down when loading both)?
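
A minimal sketch of that scaling test, with render_frame() as a hypothetical stand-in for whatever script call sets the load percentage (e.g. via First N) on both PRT Loaders and renders one frame:

```python
import time

def render_frame(percent):
    # Hypothetical stand-in: set both PRT Loaders to load `percent`
    # of their particles and render one frame.
    raise NotImplementedError("wire this up to your renderer's script interface")

# Time a frame at several load percentages to see how the cost scales.
for percent in (1, 5, 10, 25, 50, 100):
    start = time.perf_counter()
    render_frame(percent)
    print(f"{percent:3d}% -> {time.perf_counter() - start:8.1f} s")
```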

It’s slow only during “Retrieving”. The rest is very fast.

I’ll look at logs and try the 1% idea now.

  • Chad

Loading is somewhat multi-threaded, so I suspect there is a threading issue.
We had something like that with animated noise maps in older builds, where we were getting thread locking in PFlow.

Are you applying any materials/maps to the PRT Loaders? If you are, try removing them and see if anything changes.
Also if you are culling inside the PRT Loaders, try not to and see if that changes anything.
The loading process includes shading, deforming, KCMs and culling, so try to exclude each one of these if you have them on to localize the problem…
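
A sketch of that elimination process as a loop; render_with() is a hypothetical stand-in for a render call with one loading stage switched off on both PRT Loaders:

```python
import time

def render_with(disabled_stage):
    # Hypothetical stand-in: render one frame with the named stage
    # (or none at all, for the baseline) turned off.
    raise NotImplementedError("wire this up to your scene")

# Baseline first, then turn off one stage per run; the stage whose
# removal makes the combined time near-additive is the likely culprit.
for stage in (None, "materials", "culling", "deformation", "KCM"):
    start = time.perf_counter()
    render_with(stage)
    print(f"disabled={stage!s:12} {time.perf_counter() - start:8.1f} s")
```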

Yes, I’m doing culling, deformation, KCMs, and materials. I’ll try reducing the variables on Monday.

LOL, non-linear, isn’t that beyond exponential? Kidding, that is a huge difference. Out of my own curiosity, does it by any chance have different channels saved in each data set?

Non-linear == Exponential. No error there.
I don’t think different channels would cause any problem; missing channels are always initialized to defaults when needed.
It really sounds like something is going wrong with the multi-threading.

Setting the % lower fixes the issue. So PRT Loader 1 takes 1 sec, PRT Loader 2 takes 1.5 sec, and together they take 3 sec. So it’s only affecting higher counts. I’ll do the other tests at 100%.

EDIT: Actually, 50% seems like a good test point: 1.5 min, 3 min, and 13.5 min. You can see it getting worse without having to wait terribly long. Going from 100% to 50% cuts the time in half for the separate loaders, but by more than 90% on the combined render, so it clearly gets worse as counts increase.
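
Taking those numbers at face value, you can estimate an effective scaling exponent; a quick sketch, assuming render time grows roughly as count^k, so k is the base-2 log of the time ratio across the 50% -> 100% doubling:

```python
import math

def exponent(t_half, t_full):
    # Effective k if time ~ count**k, measured across one doubling of count.
    return math.log(t_full / t_half, 2)

print(exponent(1.5, 3.0))    # loader 1 alone:  1.0 (linear)
print(exponent(3.0, 7.0))    # loader 2 alone: ~1.2 (near-linear)
print(exponent(13.5, 154.0)) # combined:       ~3.5 (badly superlinear)
```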

It currently seems to be the culling. With everything else on except culling, the combined time is nearly the sum of the component times.

Thanks, this is quite helpful.
Any chance to pre-cull each stream so you don’t have to do it on the fly?

Keep in mind that culling is moving to the stack in the next update (via geometry inputs and operators in MagmaFlow), so we might not have to deal with fixing the PRT Loader’s implementation.
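
A rough NumPy illustration of the pre-cull idea (generic arrays standing in for actual PRT file I/O, which is glossed over here): cull each stream once offline and save only the survivors, so the render-time load has no culling left to do.

```python
import numpy as np

# Pretend frame data; real code would read positions from the PRT file.
positions = np.random.rand(1_000_000, 3) * 100.0

# Cull once, offline: keep only particles inside an axis-aligned box.
box_min = np.array([10.0, 10.0, 10.0])
box_max = np.array([90.0, 90.0, 90.0])
keep = np.all((positions >= box_min) & (positions <= box_max), axis=1)

np.save("frame0001_culled.npy", positions[keep])  # point the loader at this
print(f"kept {keep.sum():,} of {len(positions):,} particles")
```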

Yes, a valid workaround (except that the shot was delivered 5 days ago). I was just trying to track down the issue for debugging, to save future headaches.
