While sending my last job, I got a “bad” surprise: my fastest computer is the slowest at partitioning PFlow particles because it is only using one CPU. Is this correct, or am I doing something wrong when submitting the job via Deadline?
If Particle Flow is involved, only one thread will be used. When submitting Partitioning jobs, you have the ability to enable Concurrent Tasks, as long as you have enough memory.
This will cause multiple instances of 3ds Max to be launched on the same machine, each one using roughly one core. Since neither Max nor Krakatoa requires additional licenses to run multiple instances on the same machine, you can have up to 16 tasks (partitions) processed on a single render node (though I would suggest starting with 4 and adding more if it seems to help).
Was the actual partitioning time longer than on the other machines? It appears that the 24-core machine is clocked lower than the rest, so it is possible.
But the whole point of Partitioning is to distribute the load to multiple processes and calculate data in parallel, so using the Concurrent Tasks option would be a good idea.
You can enable Concurrent Tasks in the Job Properties of an already submitted job and play with the settings…
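For reference, if you end up scripting the submission instead of using the submitter dialog, the Concurrent Tasks setting lives in the job info file. The sketch below is only an illustration: the key names (ConcurrentTasks, LimitConcurrentTasksToNumberOfCpus) are quoted from memory of the Deadline job info documentation, and the 3dsmax plugin info file is left out because its keys depend on the submitter version, so verify against your Deadline install before using it.

```python
import os
import tempfile

def write_partition_job_info(scene_name, partitions=16, concurrent_tasks=4):
    """Sketch only: writes a Deadline job info file that requests
    Concurrent Tasks. Pair it with a 3dsmax plugin info file and submit
    both with: deadlinecommand <job_info_file> <plugin_info_file>."""
    lines = [
        "Plugin=3dsmax",
        "Name=%s - PRT partitioning" % scene_name,
        # One task per partition so they can be processed in parallel.
        "Frames=1-%d" % partitions,
        "ChunkSize=1",
        # How many tasks a single slave may run at once (hard cap is 16).
        "ConcurrentTasks=%d" % concurrent_tasks,
        # Roughly the "Limit Tasks To Slave's Task Limit" checkbox:
        # a slave will not dequeue more concurrent tasks than its own limit.
        "LimitConcurrentTasksToNumberOfCpus=True",
    ]
    fd, path = tempfile.mkstemp(suffix=".job", text=True)
    with os.fdopen(fd, "w") as f:
        f.write("\n".join(lines) + "\n")
    return path
```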
Indeed, the 24-core machine is clocked lower.
As for Concurrent Tasks, I’m going to give it a try! One quick question about this… does the “Limit Tasks To Slave’s Task Limit” checkbox auto-detect the memory limit per machine? Because I have 32 GB of RAM on one machine and 16 GB on the others…
The manual says: “Limit Tasks To Slave’s Task Limit: If checked, a slave will not dequeue more tasks than it is allowed to based on its settings.” Based on which settings? RAM? Is it automatic? If it is automatic, can we always request a large number of tasks and let each slave limit itself?
Each Slave has a property called “Concurrent Task Limit Override”. It defaults to 0, in which case the number of cores is used, unless that number is higher than 16, in which case the hard-coded maximum of 16 applies.
Switch to Super User mode, select a Slave, right-click and select “Modify Slave Properties” (or Ctrl+P) and you will find it.
There is nothing dealing with memory usage, because there is no way to know how much memory a task is actually going to use.
That being said, Krakatoa itself does not use any memory when saving PRTs (or Partitioning). The memory is used only by Max and PFlow.
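To put that into pseudo-Python, this is roughly how the slave-side limit works as described above, plus a hand-rolled memory sanity check you would have to do yourself, since Deadline does not limit by memory for you. The ~4 GB per 3ds Max + PFlow instance figure is just an assumed example, not a measured number.

```python
def effective_task_limit(num_cores, limit_override=0):
    """Concurrent Task Limit Override as described above:
    0 means 'use the number of cores', hard-capped at 16."""
    if limit_override > 0:
        return min(limit_override, 16)
    return min(num_cores, 16)


def safe_concurrent_tasks(ram_gb, est_gb_per_instance, num_cores, limit_override=0):
    """Deadline cannot know how much memory a task will use, so if memory
    is the bottleneck you have to cap concurrent tasks per machine yourself."""
    memory_cap = int(ram_gb // est_gb_per_instance)
    return max(1, min(effective_task_limit(num_cores, limit_override), memory_cap))


# Example with assumed numbers: a 24-core / 32 GB machine vs an 8-core / 16 GB
# machine, guessing ~4 GB per 3ds Max + PFlow instance.
print(safe_concurrent_tasks(32, 4, 24))  # -> 8
print(safe_concurrent_tasks(16, 4, 8))   # -> 4
```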