AWS Thinkbox Discussion Forums

Script Vector Channel?

Krakatoa beta 17, Max 9 x64, Deadline 2.7



ERR: Could not read/write file type: GetProperty()0000

ERR: max3d_particle_color: Invalid string “Script Vector Channel”



I’m saving out Position, Velocity, and Map Channel. Any guesses as to what might be going on? I’m going to try narrowing this down, but didn’t know if these messages helped.


  • Chad

Ah, it happens locally as well.


  • Chad

Perils of beta.



Changing from Krak to Scanline back to Krak fixed it. This was a file from beta 15 or so, so there may have been a naughty leftover bit. Shouldn’t affect release versions.


  • Chad




    Thanks, I had a sneaking suspicion this could happen some day because we have been changing so many names lately.



    Please try out the Beta 18 I just posted - we tried to fix a couple of issues since 17.

Ok.



ERR: Could not read/write file type: \anatomical\prj\RO3203\scn\05_0100_Kidney_Establish\PRT\Test_A06\MeltingFatTest\PerformanceTesting__part2of10_0000.prt



ERR: Cannot invert matrix [(0, 1.16323e-033, 0, 0),(1.88047e+024, 0, 0, 0),(0, 0, 0, 0),(0, 0, 2.10195e-044, 1)], it is only of rank 2



Haven’t seen those before, works fine locally.


  • Chad

Not sure about the first one.

The second one has happened before when there was no camera. It should not happen if you have a valid camera for your view.

It is logged as a known bug in the beta 18 submission as:


  • Rendering non-camera views on Deadline causes errors.

    If you DO have a camera, then it is a new problem.

    We have a developer working on the problem. Should be fixed soon. (I hope)
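For what it's worth, the logged matrix really is uninvertible: its third row is all zeros, so its determinant is exactly zero. A minimal sketch of guarding the inversion (using NumPy purely for illustration - this is not Krakatoa's actual code path):

```python
import numpy as np

# The matrix from the Deadline error log: a degenerate transform,
# presumably produced when no camera supplies a valid view matrix.
m = np.array([
    [0.0,        1.16323e-33, 0.0,         0.0],
    [1.88047e24, 0.0,         0.0,         0.0],
    [0.0,        0.0,         0.0,         0.0],
    [0.0,        0.0,         2.10195e-44, 1.0],
])

def safe_invert(mat):
    """Return the inverse, or None if the matrix is singular or the
    result is numerically useless."""
    try:
        inv = np.linalg.inv(mat)
    except np.linalg.LinAlgError:
        return None
    if not np.isfinite(inv).all():
        return None
    return inv

print(safe_invert(m))          # None: the all-zero row makes inversion impossible
print(safe_invert(np.eye(4)))  # the 4x4 identity inverts fine
```

A renderer would normally validate the camera transform like this before using it, rather than letting the inversion throw mid-frame.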
  • Oh, I was just saving PRT’s out, so I didn’t even think about cameras. I’ll check it.


    • Chad

    OK, added a camera and the matrix thingy goes away.



    The other issue might be with Box3. Deadline reports…



    Trapped SEH Exception in CurRendererRenderFrame(): Access Violation

    Process: C:\Program Files\Autodesk\3ds Max 9\3dsmax.exe

    Module: C:\Program Files\Autodesk\3ds Max 9\plugins\ParticleFlowTools\ParticleFlowSubOperators.dlo

    Date Modified: 04/09/2007

    Exception Code: C0000005

    Read Address: 00000027

    Instruction: 0F BE 0E 85 C9 0F 84 44 07 00 00 83 E9 01 0F 84

    Call Stack:

    0A590000 C:\Program Files\Autodesk\3ds Max 9\plugins\ParticleFlowTools\ParticleFlowSubOperators.dlo

    +0016AE50 Exception Offset



    So I’m going to see if there’s something funny in my Data Operator.


    • Chad

    Renders locally just fine.



    Crap.


    • Chad

    Ok, it’s something to do with the number of particles in Box3. Smaller amounts render fine on the network. I’ll pass this off to Oleg.


    • Chad


    In Beta 18, we decided that saving particles should be identical to rendering particles with regard to color calculations. Since some methods (like blended Z depth etc.) require a camera to calculate the color and density, we now handle saving just like rendering and some bugs related to cameras might pop up in the saving process until we fix them generally.



    Also, writing to the Vertex Color channel as mentioned in the submission is NOT the solution we promised you. The next build will have a dedicated PFlow operator that copies Scripted Vector Channels into Color and Scripted Float Channels into Density, so you will just have to drag one operator into the flow to enable the fast shading behavior.



    In beta 18, you can use the Data Operator to write colors to the Vertex Colors channel, and it will shade directly and much faster if you check the new “Ignore Materials” override. But this is just a temporary solution until the next build.


    I got that error if I DID have a camera, but it was not the active view. Perhaps there should be some sanity check for that too?


    • Chad




    Nope, the bug was fixed today; in the future you should be able to render from any view on Deadline. Some data was getting lost when passed to the Max Deadline plugin…

    Ok, how about the “ERR: Could not read/write file type: \…blahblah…__part1of10__part3of10_0009.prt”



    It’s occurring together with a “Timed out waiting for the next progress update - consider increasing the ProgressUpdateTimeout in the plugin configuration” message.


    • Chad





    Will have to talk to the guys to see what could be causing this - have not seen it yet. (doing some Krakatoa on Deadline tests myself right now).



    So the same file can be read locally, and the error only happens on Deadline?


    Eh… Here’s a question…



    What does Deadline need for a progress update? It’s not watching buckets go 'round; it’s waiting for the particles to evaluate. If Deadline can’t see that happening, will it assume that nothing is happening and that the process has hung?



    Right now my tasks are running close to 2 hours per frame.


    • Chad


    EDIT: Oh crap. Sure, it fails on the 5th frame, then moves on to frame 6. Now it has to calculate not 1 but 6 frames, at 1:40:00 per frame, which is going to be way over the update timeout. It was set at 8000, and I upped it to 16000, but even that won't be sufficient. If it's not getting updates while the particles evaluate frames, then we're definitely screwed.
    The problem is, I really need Deadline to force sequential rendering, which it isn't doing, and I need it to either get progress updates from PFlow or, barring that, have a progress update timeout for particle partitioning that's separate from the progress update timeout for regular rendering. If we removed the progress update timeout entirely, our Brazil renders would all hang machines indefinitely.

    I cleared the error logs and even re-queued the whole job and set the first few frames to be completed, but the slaves insist on rendering out of sequence.


    • Chad

    When Krakatoa is loading particles or doing other things, it updates the progress and can cancel. When particle flow is updating, Krakatoa doesn’t get any feedback, so there are no updates until particle flow returns. I asked Oleg if there was a way to get the progress from particle flow, but he didn’t know a way so it’s likely not possible.



    -Mark
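The interaction Mark describes can be sketched generically: a watchdog that only resets when the worker sends a heartbeat will always fire during a long blocking call that reports no progress. All names here are illustrative, not Deadline's actual API:

```python
import threading
import time

class Watchdog:
    """Fires `on_timeout` if no heartbeat arrives within `timeout` seconds."""

    def __init__(self, timeout, on_timeout):
        self.timeout = timeout
        self.on_timeout = on_timeout
        self._beat = threading.Event()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def heartbeat(self):
        # Called whenever the worker reports progress.
        self._beat.set()

    def stop(self):
        self._stop.set()
        self._beat.set()
        self._thread.join()

    def _run(self):
        while not self._stop.is_set():
            self._beat.clear()
            # If no heartbeat lands within the timeout window, give up.
            if not self._beat.wait(self.timeout) and not self._stop.is_set():
                self.on_timeout()
                return

timed_out = []
dog = Watchdog(timeout=0.2, on_timeout=lambda: timed_out.append(True))
dog.start()

# Phase 1: like Krakatoa loading particles -- frequent progress updates,
# so the watchdog keeps resetting.
for _ in range(5):
    time.sleep(0.05)
    dog.heartbeat()

# Phase 2: like a Particle Flow evaluation -- one long blocking call with
# no progress callbacks, so the watchdog fires even though work continues.
time.sleep(0.6)
dog.stop()
print(timed_out)
```

This is why a per-frame time that dwarfs the timeout guarantees a false "hung process" verdict whenever the particle system evaluates without callbacks.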




    You definitely have the ‘enforce sequential rendering’ flag enabled on this job? If so, that sounds like a bug in Deadline. I’m not aware of it happening in 2.7 before.



    -Mark

    Yeah, I enabled it at submission, and I verified it was checked in the job properties.



    It only has the problem if it encounters an error, but in this case the error was that it could not find the FlexLM server, so it would have been best for it to just re-try the same frame.


    • Chad
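The retry-in-place behavior Chad is asking for can be sketched like this (a hypothetical scheduler, not Deadline's actual one): a transient error such as a license-server hiccup is retried on the same frame instead of the queue skipping ahead:

```python
def run_sequential(frames, render, max_retries=3):
    """Render frames strictly in order; retry a failed frame in place
    instead of moving on to the next one."""
    completed = []
    for frame in frames:
        for _attempt in range(max_retries):
            if render(frame):
                completed.append(frame)
                break
        else:
            raise RuntimeError(f"frame {frame} failed {max_retries} times")
    return completed

# Simulated transient failure: frame 5 fails twice (e.g. the FlexLM
# server is briefly unreachable) before succeeding.
fails_left = {5: 2}

def render(frame):
    if fails_left.get(frame, 0) > 0:
        fails_left[frame] -= 1
        return False
    return True

print(run_sequential(range(1, 8), render))  # [1, 2, 3, 4, 5, 6, 7]
```

Under this policy a dependent simulation never has to re-evaluate earlier frames after a transient error, which is exactly what makes out-of-order retries so expensive for PFlow partitioning.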