Krakatoa and Deadline

Today I tried to use Krakatoa with the free version of Deadline to save partitions on two machines at the same time.

Well, it's not working.



Is it possible to get a network license or a second one?



I made a small test, using Krakatoa for the first time, and the times are ugly for saving particles on a single machine.

PFlow is using only one core…



It would be helpful to know what exactly is not working. What did you do, what did you expect, what was the outcome?



Please post step-by-step reports when something is not working - we cannot read your mind ;o)



Did Krakatoa crash? Did Deadline fail to save partitions? Or did it fail to run the job at all?



Btw, there is a known problem when saving particles from PFlow containing FumeFX operators. We expect an upcoming FumeFX point update to fix this.





Thanks in advance,



Borislav “Bobo” Petrov

Technical Director 3D VFX

Frantic Films Winnipeg

Ok, I think I can see what you mean (I have a free installation of Deadline at home for testing).



Thanks for the report!



Borislav “Bobo” Petrov

Technical Director 3D VFX

Frantic Films Winnipeg

Everything is working fine on a single machine.

What I mean is: I've got a node-locked license and cannot “render” across the network.

I'm asking for a second license to run Krakatoa with Deadline.

Got it, thanks!



We will have to discuss this internally - our plan for the release version is to ship a significant number of network licenses for Deadline processing with each workstation license of Krakatoa. I have no idea what the policy is right now during the beta test, but I guess it should be possible to cut you another license for testing with Deadline…



Let’s talk tomorrow when everybody is in the office again.



Cheers,



Borislav “Bobo” Petrov

Technical Director 3D VFX

Frantic Films Winnipeg

Thanks, that would be great!



I will buy Deadline tomorrow for rendering Maya and RealFlow as well.

Shame on me, I had never heard about Deadline before, or maybe I heard about it but ignored this great tool…



regards,



Bandu

hey,



Is there any progress on getting a license of Krakatoa to render with Deadline over the network?

[edit] question already outdated :) thx [edit]


By the way…

I've got a 4-core CPU.
PFlow is using only one core to solve the particles.
Is it possible to render 4 jobs on one machine at the same time with Deadline?
Smedge, my old network render utility, lets users run as many jobs as there are CPUs on a single machine.

thx

regards,
Bandu

Yep, we talked about it yesterday.

Please send an email to David Marks with a request for a Network License.



Cheers,



Borislav “Bobo” Petrov

Technical Director 3D VFX

Frantic Films Winnipeg

I like the idea of multiple jobs running on one machine. If Deadline could let you run 4 partition-generating jobs on a 4-core machine, that would be great.



Even if you had less memory per task, 4 partitions of 500K particles would be much faster than 1 partition of 2M.


  • Chad

I added the necessary controls to the Deadline Submitter and to the KrakatoaGUI so you can specify this at submission time.



In the meantime, as a workaround, in Deadline 2.6 you can select your partitioning job > Properties > Advanced tab > Concurrent Tasks > 4. (Limit To Number Of CPUs is checked by default, so if you have machines with 2 CPUs, they will pick up 2 tasks at a time instead of 4.)



This feature has been there since Deadline 1.0, but for Max jobs we enabled it just recently (I believe it was in Deadline 2.5).




Cheers,



Borislav “Bobo” Petrov

Technical Director 3D VFX

Frantic Films Winnipeg

I've already tried this workaround, but it is not working in Deadline For Free.

I think there is not only a limitation to 2 slaves but also to two render jobs…



Bandu

Waiting for purchase instructions… :)

In the Tools menu of the Monitor, in SuperUser mode, go to Configure Plugins, select 3dsmax 8 or 9 and switch FailOnExistingMaxProcess to False.



This is currently a bug: when Concurrent Tasks is set higher than 1, the check should not be performed, or it should keep track of the number of tasks/Max copies running and only fail if more than the requested/allowed number are running.



As it is by default, the second Max copy would fail to launch if another Max copy is already running. We are using this to: A) automatically disable network rendering on workstations if a user launches a copy of Max to work on the machine and B) avoid getting multiple dead copies of Max sitting in memory with another task trying to launch.



I am pretty sure you can turn off FailOnExistingMaxProcess without any side effects on a small installation like yours.





Borislav “Bobo” Petrov

Technical Director 3D VFX

Frantic Films Winnipeg

Is this bug something that can be fixed? I don’t want to mess up our regular renders with a special setting just for partitioning.



Also, wouldn’t Krakatoa have to submit the job differently? I thought the concurrent tasks were just different tasks in the same job. You can’t assign 4 partition jobs to the same machine, can you? We’d need to have the tasks broken up so that partition 1, frame 1, was executed on the same machine as partition 1, frame 2, but have partition 2, frames 1 and 2 done on a different process.



Am I making sense? It does no good to have 1 partition of 100 frames executed on 4 processes if each process has to do the run-up for the 3/4 of the frames it is NOT saving to disk.


  • Chad

Thank you, Bobo,



I have been through every single checkbox and menu in Deadline to get it to work.

I am using Deadline 2.6, and FailOnExistingMaxProcess is False by default.

Setting it to True behaves as expected: no rendering if another Max is running.

I am sure it must be a limitation of the free version.

(Reset Scroll Bar in the Slave window menu doesn't work either; it should clean up the output window, I think. Anyway, that has nothing to do with my problem.)



Chad: I think rendering partitions with Krakatoa and sending jobs with 1 frame per task makes no sense.

I think sending jobs with the full animation range is the way to go, so PFlow always only has to update the next frame.

Otherwise, sending frame 50 to slave X means it has to update all the previous 49 frames to get the result.

Not if particles are cached, of course; then it makes no difference.



By the way… how does Krakatoa handle the number of particles for each partition?

For example, I have 500K particles, and Krakatoa saves at least 2 partitions.

Are there 250K particles in each partition, or 500K each?

Increment Position/Spawn/Speed Seeds only makes sense if my PFlow is carrying 500K particles and Krakatoa saves 2 partitions of 500K each, so the particles don't overlap.

If not, splitting 500K PFlow particles into 2 partitions wouldn't need this option.

In case it works as I think, a small PFlow simulation of 500K particles is not so bad for performance: saving 100 partitions with incremented seeds gives 50M particles, and that is nice :)



[ EDIT ] OK, I see… it works the way it should :) [ EDIT ]




Thanks again



Bandu

This is correct: saving 500K particles in 10 partitions gives you 10 sequences of 500K each; with different seeds, they blend nicely into a single 5M particle system when loaded in a PRT Loader.



We usually save 5M x 10 to get 50M, but 100 x 500K is also an option.



Of course Chad is right, using Concurrent Tasks for saving partitions doesn’t make that much sense. I am talking to the Deadline developer about what could be done in the future to parallelize the process…



Cheers,



Borislav “Bobo” Petrov

Technical Director 3D VFX

Frantic Films Winnipeg

Looks like we found a solution.



It involves creating a single job on Deadline with as many tasks as partitions. Each task is actually a MAXScript job, and the script in each task takes the current time, increments the seeds based on the current time, then calls render() for the desired frame range (which can be different from the task number of the job, obviously) and thus saves a complete sequence of PRT files within the single task. This also ensures that all frames are processed consecutively. The only drawback is that if it crashes, the whole task would fail and would have to start from the beginning, thus potentially wasting time.
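
For illustration, here is a minimal MAXScript sketch of what such a per-partition task script might do. This is not the script the submitter actually generates: incrementPartitionSeeds is a hypothetical stand-in for the Position/Spawn/Speed seed offsetting, and the PRT output path is assumed to be preconfigured in the Krakatoa settings.

```
-- Minimal sketch of a per-partition MAXScript task, NOT the actual
-- generated script. Assumes Krakatoa is the active renderer with its
-- particle-saving output already configured in the scene.
(
    -- Deadline assigns each task a single "frame"; the script reads it
    -- back from the current time and reuses it as the partition index.
    local partitionIndex = (currentTime as integer) / ticksPerFrame

    -- Hypothetical helper: bump any seed-carrying property found on
    -- scene objects so this partition's particles differ from the rest.
    fn incrementPartitionSeeds idx =
        for obj in objects where isProperty obj #Random_Seed do
            obj.Random_Seed += idx
    incrementPartitionSeeds partitionIndex

    -- Save the complete animation range within this single task, so
    -- PFlow only ever steps forward one frame at a time instead of
    -- re-simulating the history for every frame.
    render fromFrame:animationRange.start toFrame:animationRange.end vfb:false
)
```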



When set to Concurrent Tasks 4, it runs 4 copies of Max in slave mode on the same machine and calculates a full particle flow on each CPU, saving 4 streams of particles in parallel!



Thanks for pushing us hard enough :o)









Cheers,



Borislav “Bobo” Petrov

Technical Director 3D VFX

Frantic Films Winnipeg

4 partitions show up as 4 tasks? And all assigned to one machine?



Downside would be lack of feedback. Can you update the progress percent in the task pane as frames in the task are completed?



So is this waiting for a new Deadline or a new Krakatoa?


  • Chad

> 4 partitions show up as 4 tasks? And all assigned to one machine?



Each task can be assigned to the same or a different machine, depending on the Concurrent Tasks option.



If you send it with Concurrent Tasks 1, each machine will pick one partition and run with it.



If you set Concurrent Tasks to 4 with Limit To Number Of CPUs on, machines with 4 CPUs will pick up 4 partitions at once and run 4 Max copies in parallel, outputting 4 partitions at a time. If a machine happens to have 2 CPUs, it will pick up two partition tasks…









> Downside would be lack of feedback. Can you update the progress percent in the task pane as frames in the task are completed?



I can do feedback if I run the render() method in a loop. Right now I don’t, but I could do it. Actually I WILL HAVE TO do it, because if you sent a partition job that requested only frames 10-20,35,42 (using the custom Frames field in the Render Dialog), I should be processing only those, even if it makes no sense…
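
Schematically, that loop might look like the sketch below. The frame list is hardcoded for illustration, and the exact log line Deadline translates into a task percentage is plugin-specific, so the format call is only indicative:

```
-- Sketch: render the requested frames one at a time so progress can be
-- reported between frames. Frame list and progress line are examples.
(
    local frameList = #(10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 35, 42)
    for i = 1 to frameList.count do
    (
        render frame:frameList[i] vfb:false
        -- Print something the slave log can turn into a progress value.
        format "Rendered frame % (% of %)\n" frameList[i] i frameList.count
    )
)
```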



The real downside is that if a task fails, all frames of the partition will start saving from the beginning.





> So is this waiting for a new Deadline or a new Krakatoa?



New Krakatoa. Possibly Tuesday, if nothing else breaks. It might require an update of the Submit Max To Deadline script to v2.7, but that is easy to deploy.



Cheers,



Borislav “Bobo” Petrov

Technical Director 3D VFX

Frantic Films Winnipeg

Is it possible to get Deadline to understand the PRT output, so that each task, being a partition, could show up in the “Copy Render Path” part of the task pane right-click menu?


  • Chad

> Is it possible to get Deadline to understand the PRT output, so that each task, being a partition, could show up in the “Copy Render Path” part of the task pane right-click menu?
>
> • Chad



Done!

Will also appear in the job’s menu in Explore Output and Copy Output Path.

There is currently no viewer for PRT files, so double-clicking a task will pop up a message that the viewer cannot open it.



Cheers,

Borislav “Bobo” Petrov

Technical Director 3D VFX

Frantic Films Winnipeg