FumeFX-PFlow-Krakatoa-Deadline Problem

I’m trying to do this:

Simulating with FumeFX, getting the sim into PFlow with the FumeFX Birth and FumeFX Follow operators, then partitioning with Krakatoa through Deadline on the render farm.

The output files are 1 KB in size and, of course, have no particles inside.



Tried this locally and it works like a charm.



Also tried it without FumeFX Birth, using only the FumeFX Follow operator, and still got bad results.

My concern is that it’s not working in the slave mode that the render farm uses, or I am doing something else wrong.

You are not doing anything wrong. You won’t believe it, but we had the SAME problem last night - one of our artists sent a partition job to Deadline in the same configuration as you described and hit the same problem.

Works locally, refuses to work on Deadline. We tried both with and without the Backburner option checked in Fume and had no luck.



We have had this kind of problem with Fume for months. The previous 1.0 version of Fume actually overwrote its own data files when sent to Deadline; since we updated to the latest build a couple of weeks ago it no longer does that, but it still refuses to work on the network.



We are not exactly sure whether this is a pure FumeFX problem, a FumeFX+Deadline problem, or a FumeFX+Deadline+Krakatoa problem, although all signs point to the first.



I am about to discuss this issue with both the Deadline and Krakatoa developers and see if we can talk to Sitni Sati about it, too.



At this point, if you have FumeFX in your scene, please try to use Local Partitioning until we figure out a better workaround.



Note that since PFlow is single-threaded, you can launch as many copies of Max as you have cores/CPUs on the same workstation and run multiple local partitioning operations on sub-ranges of the whole range. For example, I have 4 CPUs here, so I can run 4 copies of Max, each doing a portion of 100 partitions, say 1-25, 26-50, 51-75, 76-100…
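
A minimal sketch of that kind of split, assuming a hypothetical helper script (partition_range.ms) and an example 3dsmax.exe path - neither is part of Krakatoa, they just illustrate dividing the range across local copies of Max:

-- Sketch: divide 100 partitions across 4 local copies of Max.
-- The 3dsmax.exe path and partition_range.ms are placeholders; in practice
-- each launched copy just runs Krakatoa's local partitioning over its sub-range.
totalPartitions = 100
maxCopies = 4
perCopy = totalPartitions / maxCopies   -- 25 partitions per copy
for i = 0 to (maxCopies - 1) do
(
	startPart = i * perCopy + 1
	endPart = (i + 1) * perCopy
	format "Copy %: partitions % to %\n" (i + 1) startPart endPart
	-- e.g. shellLaunch "C:\\3dsmax\\3dsmax.exe" "-U MAXScript partition_range.ms"
)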



If we find out any more details on the cause for this problem, you will be the first to know!

Thanks again for the prompt response! We will also try to find a workaround.

How about this:

Set up FumeFX -> PFlow, then create one partition by simulating it locally, then disable the FumeFX PFlow operators, load that partition with Krakatoa’s PFlow operators, and then “submit to Deadline” another 10-20 partitions.




Does not work.



The Krakatoa Operators cannot randomize Seeds - you need the original PFlow with its operators to be able to create usable partitions. Read some more on how Partitioning works here:

http://www.franticfilms.com/software/support/krakatoa/high_particle_counts_tutorial.php
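
To illustrate the point, here is a conceptual sketch only, NOT Krakatoa’s actual code: partitioning works by varying the random seeds of the live flow’s own operators for each partition, roughly as below. The Speed class and the Random_Seed property are assumptions that depend on which operators the flow actually contains.

-- Conceptual sketch only, NOT Krakatoa's partitioning code: each partition
-- offsets the random seeds of the flow's live operators so it produces
-- different particles. Which operator classes expose a Random_Seed property
-- depends on the flow, hence the isProperty guard.
partitionIndex = 3   -- hypothetical partition being generated
for op in getClassInstances Speed do              -- example: all Speed operators in the scene
	if isProperty op #Random_Seed do
		op.Random_Seed += partitionIndex * 1000   -- arbitrary per-partition offset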



This is something we and most probably Sitni Sati have to address because it has been a problem with FumeFX since day one. Running several local partitions is the way to go right now until this is fixed.

Note that you don’t even need Krakatoa licenses for that - you can run local partitions on any number of workstations in Evaluation mode. The only limitation of Evaluation mode is a watermark on the image and no network support. So if you have a couple of workstations sitting around, or multiple cores in a single machine, you can launch as many local partitions as you want.

Yeah, we are doing that - running multiple sims on the same machine in Evaluation mode.










Oh man. I thought I was going nuts. I didn’t think it was a Fume thing, I thought it was a Krak. 1.1 thing. I finally got Krakatoa partitioning on our own renderfarm software a few weeks ago with the help of the backburner script that Artur Leão wrote. I tried it again after 1.1 was installed, and it wasn’t working over the network.

I still have to double check, but this system IS using fume birth / follow, so that’s probably the reason it wasn’t working. …In the meantime I could set it up to run this as a maxscript job instead of a render job… that way it would be the same as doing it locally as far as krakatoa / fume know.




This might work as a script job ONLY if the job is running in Workstation mode as opposed to slave mode. The Krakatoa partitioning we implement for Deadline is a MAXScript job but since it runs in Slave mode, it shows the same problem. In fact, I have experienced problems with Fume partitioning even locally on my workstation sometimes. I did the last job that required Fume-driven particles on my own machine, running 4 copies of Max on 4 CPUs overnight… We really should discuss this with Kreso.

Yeah, Krakatoa would run in workstation mode if I were to implement it as a MAXScript job using our software. We have enough licenses now that it wouldn’t matter too much… I doubt we’d run across an issue. Even three computers are better than my one. Definitely talk to Kreso about it… but until then, I have to get my shot done! =) I’ll probably just partition locally for now, unless I can take the time to redo part of the script.



Right now my script increments a seed, then submits the render, increments, then submits, and so on… Deadline is a little different in that you can do the “one job only” thing. I could actually set up my script similarly, but this is way easier and works just fine! …Is there a way through MAXScript to make it think it’s in slave mode, though, just in case this proves to be useful? For our farm, a MAXScript job actually opens Max and runs the chosen script. Rendering (saving particles) is just rendering mode… so it knows automatically through the command line or whatever that it should be in render mode, not workstation mode.
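
(For illustration, a stripped-down MAXScript sketch of that loop - submitParticleSaveJob stands in for the in-house farm submission call and the seed handling is schematic, so treat both as assumptions:)

-- Stripped-down sketch of the increment-seed-then-submit loop described above.
-- submitParticleSaveJob is a hypothetical stand-in for the in-house submission
-- call, and how the seed gets applied depends on the actual flow.
baseSeed = 12345
for p = 1 to 20 do
(
	partitionSeed = baseSeed + p
	-- apply partitionSeed to the flow's operators here (e.g. their Random_Seed properties)
	-- submitParticleSaveJob partition:p seed:partitionSeed   -- hypothetical in-house call
	format "Partition % would use seed %\n" p partitionSeed
)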

…Is there a way through MAXScript to make it think it’s in slave mode, though, just in case this proves to be useful?



Nope, and the slave mode is what seems to break Fume, even when the Backburner button is checked in the FumeFX UI.



As mentioned already, you don’t even need a Krakatoa license to do the saving in workstation mode, so you can run as many partitions as you have Max workstation licenses to spare. Only final image rendering actually needs licenses.

Ooops – double post. Weird.

Well that’s freakin’ cool. Thanks for the tip! I’ll make sure to modify my script for that… If, however, I am partitioning and somebody else is rendering, will Krakatoa be smart enough to not pull a license while I’m partitioning?

You know what? I just re-read your post, and now I think I’m more confused. =) If I set it up as workstation mode, Krakatoa will still partition even if it’s in Evaluation mode. If I tell it to save particles over the network, however, it would still need the render license, wouldn’t it?

I am not sure what Krakatoa would show if you had a valid license server but all licenses were used up. I would expect a pop-up to tell you that. In Deadline or Backburner, you could register a pop-up handling routine to press the button for you. I assume your in-house solution will have something similar; otherwise you can use the same interfaces as Backburner (exposed to MAXScript as the DialogMonitorOps and UIAccessor interfaces) to push the right button if that happens.
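
For reference, a minimal DialogMonitorOps/UIAccessor sketch - the dialog title substring ("Krakatoa") and the button caption ("OK") are assumptions about what the pop-up would actually show:

-- Minimal pop-up watcher using the interfaces mentioned above. The title
-- substring and button caption are guesses; check the real dialog first.
fn closeKrakatoaPopup =
(
	local hwnd = UIAccessor.GetWindowHandle()
	local title = UIAccessor.GetWindowText hwnd
	if title != undefined and (matchPattern title pattern:"*Krakatoa*") do
		UIAccessor.PressButtonByName hwnd "OK"
	true   -- the notification callback is expected to return true
)
DialogMonitorOps.unRegisterNotification id:#krakatoaPopup   -- avoid duplicate registration
DialogMonitorOps.RegisterNotification closeKrakatoaPopup id:#krakatoaPopup
DialogMonitorOps.Enabled = true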



In general, when in Evaluation mode on a workstation, all features work; the only limitation is the watermark on the final image.

When rendering on the network in SLAVE mode, you NEED a license, otherwise nothing will happen.



Since saving particles in Workstation mode does not produce an image with a watermark, it is practically free. If you were trying to save particles in Slave mode without a license, I would expect it to fail, because network rendering is not supported in Evaluation mode at all.



So:



Rendering+Workstation+Licensed = true (WS license)

Rendering+Slave+Licensed = true (NW license)

Saving+Workstation+Licensed = true (WS license)

Saving+Slave+Licensed = true (NW license)



Rendering+Workstation+Evaluation = true+watermark

Rendering+Slave+Evaluation = false

Saving+Workstation+Evaluation = true

Saving+Slave+Evaluation = false




Note that Slave mode pulls from the network licenses, which are a separate pool. The professional package contains 2 workstation licenses and 10 network licenses.
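
(The same matrix written as a tiny lookup function, just restating the table above - no Krakatoa API is involved:)

-- Restates the mode/license matrix above as a lookup; no Krakatoa calls here.
fn krakatoaOutcome saving:false slaveMode:false licensed:true =
(
	if licensed then #works                 -- WS or NW license, depending on mode
	else if slaveMode then #fails           -- Evaluation mode has no network support
	else if saving then #works              -- local particle saving is watermark-free
	else #worksWithWatermark                -- local rendering works, but watermarks the image
)
-- e.g.: krakatoaOutcome saving:true slaveMode:true licensed:false --> #fails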

Thanks for the clarification Bobo, I think you’ve explained it more than well enough for me to figure it out now! =)



You’re the man!



–gonna go see later if TP works with Fume and stuff…