AWS Thinkbox Discussion Forums

V-Ray Standalone auxiliary files in the repository

Hi,
When using the V-Ray Standalone submitter I don’t see any option to copy the input .vrscene file to the repository, so when I delete the job in Deadline the input file does not get deleted.
I was thinking about building a custom solution for this problem, but I’m not sure which approach would be best.
If I use the command line I can send an auxiliary file, and that file will be copied to the repository, but the plugin info parameter InputFilename won’t have the right location, since the auxiliary path isn’t created until submission. So after submission I could make a tool that updates the job in MongoDB with the correct InputFilename.
Is this OK, or is there a better solution?

Hmm. Looking at the V-Ray Standalone plugin it doesn’t support that, yeah. You’ll see from the code here at “[repo]/plugins/Vray/Vray.py”:

        # Get the scene file to render.
        sceneFilename = self.GetPluginInfoEntry( "InputFilename" )
        sceneFilename = RepositoryUtils.CheckPathMapping( sceneFilename )
        sceneFilename = PathUtils.ToPlatformIndependentPath( sceneFilename )

You’d probably want to change that first line to something like we have in Nuke.py:

        sceneFilename = self.GetPluginInfoEntryWithDefault( "InputFilename", self.GetDataFilename() )

Then you’d just submit the job with an extra parameter to deadlinecommand that’s the full path to the scene file. Info here.
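For what it’s worth, a manual submission along those lines might look like this. This is a sketch, not tested against a live repository: the info file values and the `Threads` entry are placeholders, and it assumes `deadlinecommand` treats any file arguments after the two info files as auxiliary files.

```python
from pathlib import Path

def write_submission_files(directory):
    """Write a minimal Deadline job info / plugin info pair for V-Ray Standalone.

    The plugin info deliberately omits InputFilename so that the modified
    Vray.py falls back to GetDataFilename(), i.e. the auxiliary file.
    """
    job_info = Path(directory) / "vray_job_info.job"
    plugin_info = Path(directory) / "vray_plugin_info.job"
    job_info.write_text("Plugin=Vray\nName=vrscene test\nFrames=1-10\n")
    plugin_info.write_text("Threads=0\n")
    return job_info, plugin_info

# Submitting (paths are placeholders); the trailing argument after the two
# info files becomes Auxiliary File 1 and is copied into the repository:
#
#   deadlinecommand /tmp/vray_job_info.job /tmp/vray_plugin_info.job /path/to/scene.vrscene
```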

If that works, I might try and sneak it into a merge request.

That’s the [Auxiliary File 1] parameter, right?
If this works, does it mean that I don’t need to send the InputFilename parameter in the plug-in info file?

It works 🙂
One more question though.
Now I don’t send the InputFilename parameter in the plug-in info file, so that field stays empty when I open it in the Monitor.
Is there some code change that can show the auxiliary path there?

Actually, that’s by design. No job whose ‘job’ folder is empty will be able to show it.

I suppose a hacky workaround would be to include it in the job info file’s OutputFilename field and just pretend it’s immediately output when the job is submitted. Those fields aren’t read by the render plugins (for better or worse) so it should have no effect on the render process.
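As a sketch of that workaround, the job info file could carry the scene path in the first output slot (the path is a placeholder; `OutputFilename0` is the job info key for the first output file):

```
Plugin=Vray
Name=vrscene test
Frames=1-10
OutputFilename0=\\server\projects\shot010\scene.vrscene
```

The Monitor would then list the .vrscene under the job’s output, even though nothing is actually written there.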

Instead of opening a new thread: any developments on this? Because of…everything…I figure I can probably get RCS going and have people submit remotely, but the issue is file sync. A job might be submitted with a UNC path (or the same drive letter), but that’s no guarantee the vrscene file will be on the farm’s network. A solution would be like the above (submitting to the repo directly from the V-Ray Standalone submission script), or building in some sort of asset check.

What are you trying to do? Are you rendering on some web service? One thing we do for rendering on AWS is keep the vrscene on S3 and use a pre-render script to download the scene locally. Is this what you want to achieve?

Ideally, I’m trying to get some sort of RCS setup running so we don’t need to establish VPNs on everyone’s personal machines. The problem with that is the files are synced over (think Dropbox, but not), so when I submit a vrscene from my workstation the file is in place; however, at the physical location where the repo and Workers are, that file may need a few more minutes to sync over. If the job goes ahead before then, it fails out because it can’t find the vrscene file.

That’s the issue with the vrscene submitter right now: it submits the file from its path rather than directly to the repository (like SMTD does). And we get a bunch of failed jobs because I don’t have a way to incorporate an automatic asset check so that the job starts only once the vrscene file is in place. Hope that makes sense.

How does your sync work? When you submit a vrscene, does it get saved on the local machine, and then some sync tool (Dropbox-like) moves the vrscene somewhere else?

Can’t you “sync” the vrscene before you submit the job to Deadline?

Yeah, exactly. I think it’s a bit above the heads of some of the staff; we can’t ask them to monitor the sync and then only submit once it’s done. Just a bit too much.

Something you could do, is make use of Asset Dependencies so that the job won’t start until the required file(s) are in place.

Since you’re going for something that doesn’t rely on user interaction you could create an Event Plugin to auto-create the dependency based on the scene files associated with the job. You’d use SetJobRequiredAssets() to set the required asset paths. You’ll need to read from the plugin’s key/value pairs, so look at one of your existing jobs to see what that might look like.
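The event plugin approach could be sketched roughly like this. The Deadline-specific wiring is shown only in comments because the exact class names and the `SetJobRequiredAssets()` signature should be verified against the Deadline Scripting reference; the key name `InputFilename` follows the plugin code quoted above, and only the small pure helper below is actually runnable as-is.

```python
def scene_assets_from_plugin_info(get_value):
    """Pure helper: pull the scene path out of a job's plugin info.

    `get_value` is any callable mapping a plugin info key to its value
    (e.g. job.GetJobPluginInfoKeyValue inside an event plugin).
    Returns a list of required asset paths (empty if no scene is set).
    """
    scene = get_value("InputFilename")
    return [scene] if scene else []


# Inside the event plugin it might be wired up like this (untested outline):
#
# from Deadline.Events import DeadlineEventListener
# from Deadline.Scripting import RepositoryUtils
#
# class AutoAssetDependency(DeadlineEventListener):
#     def __init__(self):
#         self.OnJobSubmittedCallback += self.OnJobSubmitted
#
#     def OnJobSubmitted(self, job):
#         assets = scene_assets_from_plugin_info(job.GetJobPluginInfoKeyValue)
#         if assets:
#             # Method name per the post above; verify in the scripting docs.
#             RepositoryUtils.SetJobRequiredAssets(job, assets)
#             RepositoryUtils.SaveJob(job)
```

With that in place, a submitted job would sit pending until the vrscene path exists on the farm’s network, with no extra step for the artist.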
