Okay, no luck here, for a few reasons. It’s complaining about “/out/logo/merge1”, and there were a very large number of unknown attributes when the hip file loaded.
Maybe it’d be better if we set up a call since this is so hard to reproduce? It’s been fairly quiet over here while people are ramping back up after the break.
Is it because the paths are all missing? To make our files work with Deadline we have to hard-code all the paths, so they need to be re-pathed once you’ve opened the file. It doesn’t work if we leave them relative with the $HIP variable.
You need to globally set and use something like $JOB, i.e. something which is /not/ reset on every invocation of Houdini. In other words, set $JOB to the root of your project (which has hip/geo/tx/etc. subdirs under it), and make all file paths relative to $JOB.
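To illustrate the idea in plain Python (not the `hou` module), here is a minimal sketch of why $JOB-relative paths survive across machines: every box sets $JOB to its own copy of the project root, and the expanded path is correct everywhere. The project root and file names below are made up for the example.

```python
import os

# Assumed project root for this machine; on the farm each slave would
# set JOB to wherever its copy of the project lives.
os.environ["JOB"] = "/mnt/projects/logo_spot"

def expand(path: str) -> str:
    """Expand $JOB the way a file parameter would be expanded."""
    return path.replace("$JOB", os.environ["JOB"])

# A file parameter stored relative to $JOB resolves per-machine:
print(expand("$JOB/geo/particles.bgeo"))
# -> /mnt/projects/logo_spot/geo/particles.bgeo
```

Because $JOB is set per session (unlike $HIP, which is reset to wherever the hip file happens to live), the same hip file resolves its paths correctly on any machine that points $JOB at the project root.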
OK! So I’ve uploaded a new file. I’ve included all the dependent files this time (sorry, I left some out before) and removed most of the things you don’t need. I’ve also set it up with the $JOB variable (thanks Antoine, that seems to work). So you should be able to open the file and then set the project (File->Set Project) to the folder you’ve saved it into. All the paths should then work relative to that, including the outputs. The output node you need to select when submitting is “particles”.
It looks like it’s getting stuck on frames 32 and 33 (I’m using concurrent tasks). I’ll reboot in case something weird happened when my workstation went to sleep.
Any update? I’m kind of hoping it’s erroring for you now…! The job does seem to change up the frames it errors on but generally it’s been from frame 70 onward. We run slaves with concurrent tasks and without and both failed in the same way. Are you running it on a Mac as well? We’re on Windows 7 but I’m not sure if this would make a difference…
Welp, Windows got further, but it’s crashed some 4,300 times to get to frame 250ish. All of my errors are the memory allocation problem, so I’ll move it to the beefiest machine I have access to and see where it goes from there.
I get exit code 139 a lot, but not one instance of exit code 1 yet.
That’s interesting. I didn’t think there was any logic to the errors as far as memory goes, but I did notice some weirdness. When the tasks worked, they peaked at 600MB of memory. The failed tasks peaked anywhere from 1-35GB.
Sorry, didn’t get to this yesterday. Hit some fires this week, so thanks again for poking me.
At this point, I have no idea why Houdini is throwing these memory errors. I think given the post over at forums.thinkboxsoftware.com/vie … 11&t=15076, I might need to downgrade my machine to something much earlier and see if it makes a difference. The test should only be 30 minutes, I just need to get a block of time. My day’s compressed today, but Monday should be business as usual!
Have you tried turning off Deadline’s drive letter mapping? Maybe it’s seeing Z: or something like that in the binary portion of the data and (unwisely) modifying it.
So, an update from the other thread: it seems we might be reading the binary IFDs in as ASCII and writing them out as UTF-8 after an update someone made. That would explain the weird behaviour, since any byte bigger than 127 would take up two bytes and be completely unreadable to whatever was parsing it. It doesn’t really explain why it ever worked at all, though.
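A quick Python sketch of the suspected round trip, with a made-up byte payload: decode the binary stream as an 8-bit text encoding, then write it back out as UTF-8. Every byte at or above 0x80 turns into two bytes, which shifts all the offsets a binary parser depends on.

```python
# Made-up stand-in for a chunk of binary IFD data: three low bytes
# followed by three high (>= 0x80) bytes.
raw = bytes([0x49, 0x46, 0x44, 0xC3, 0x9F, 0xFF])

# latin-1 maps each byte 1:1 to a character, so the decode itself
# is lossless...
as_text = raw.decode("latin-1")

# ...but re-encoding as UTF-8 writes each char >= U+0080 as TWO bytes.
round_trip = as_text.encode("utf-8")

print(len(raw), len(round_trip))  # 6 vs 9: the three high bytes doubled
print(round_trip == raw)          # False: the stream is corrupted
```

So the data grows and shifts silently; nothing errors at write time, which fits the "it crashes somewhere downstream, on varying frames" symptom.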
Are you using path mapping, and if so, can you try turning it off? Might break in new ways then (hopefully with a path not found and not this stdin business).
To talk about the problem a little more: the plan is to change the core code to try to match the output encoding to the input encoding.