Hello,
I’ve observed that path remapping on large scene files can take quite a long time. With a 3 GB .vrscene file I’ve seen times above 10 minutes, and the path-remapping time seems to scale non-linearly with file size: it doesn’t just get worse, it gets WAY worse as the file grows. Now I know that as a user I should keep the file size small by exporting Alembics or V-Ray proxies. But I’m assuming that you guys use your own code to crawl through these scene files (I’m sure it’s multi-threaded and highly optimized too). Meanwhile, V-Ray Standalone now has its own path-remapping features; I’m not sure when they were implemented, but I’m guessing with the release of “NEXT”.
I’ve tested and benchmarked a bit, and it looks like V-Ray Standalone now does the path remapping “on the fly”, rather than crawling through the scene first, swapping the paths around, and only then starting the render. So the way it’s currently done creates some redundancy.
Would it be possible for you guys to implement two ways of path mapping within the V-Ray Standalone plugin and the submitter: one that uses Deadline’s path remapping, and one that uses V-Ray’s command-line remapping? That way, the path-remapping rules I set up in Deadline would be passed to V-Ray directly instead of Deadline crawling through the .vrscene first.
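What I imagine the plugin doing is something like the sketch below: build the render command and, when the new mode is on, append the Deadline rules as renderer arguments instead of rewriting the scene. To be clear, `-remapPath` here is a placeholder; I haven’t verified the exact V-Ray Standalone option name or syntax, so that would need to come from the V-Ray docs.

```python
def build_vray_command(vray_exe, scene_file, rules, use_vray_remap):
    """Build a V-Ray Standalone command line.

    If use_vray_remap is True, hand the (old, new) remap rules straight
    to the renderer instead of rewriting the .vrscene first.
    NOTE: "-remapPath=old=new" is placeholder syntax I made up for this
    sketch -- the real flag name/format must be checked against the
    V-Ray Standalone documentation.
    """
    cmd = [vray_exe, f"-sceneFile={scene_file}"]
    if use_vray_remap:
        for old, new in rules:
            cmd.append(f"-remapPath={old}={new}")  # placeholder flag
    return cmd
```

The point is just that the rules the user already entered in Deadline get forwarded as-is, so no second pass over a 3 GB file is needed.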
Why both ways? Because Deadline’s path-remapping features work very robustly, and since they’re already there, why not keep them exposed? I’d suggest a new checkbox in the submitter that says something like “use V-Ray path mapping (faster)”. That way, if the “faster” mode fails for some buggy reason, we can still fall back to the “old” way.
What do you think?
-Robert