
AWS Portal Asset Server "root X:\ does not appear to exist" error

Not sure what to do here. I have the following mapped network drives listed in the Asset Server list (U:/, V:/, W:/, M:/).

A bit more context and a bit of a bump:

-this occurs onsite, not at my remote location.
-all drives are mapped correctly on that machine, and the portal services were installed and are up to date using that user info.
-this error occurs once I click on the infrastructure and try to start the instances

[Attachment: asset server settings.JPG]

[Attachment: Screenshot 2018-06-20 09.13.21.png]

Any help would be great; I’ve been wrestling with this for a few days now and really want to get it sorted.

Ok, weird…so even with no root directories, it still doesn’t like something?

Hi delineator.

Are the drives mounted in the user account that the Asset Server is running in? Windows drive letters are specific to individual user accounts, so if the drives are not mounted in that account, the Asset Server won’t be able to see them.
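As a quick check (the drive letter and share path below are just placeholder examples), you can open a command prompt under the account the Asset Server runs as and compare its view of the drives with your interactive session:

    rem list the drives mapped for whichever account this prompt is running as
    net use

    rem map a missing asset drive for that account (persists across logons)
    net use W: \\fileserver\assets /persistent:yes

If that account’s list comes back empty while your own session shows U:, V:, W: and M:, the mismatch is the problem.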

Hey!

I’m a member of the Deadline development team. I’m sorry to see you are having issues getting AWS Portal to work. I have a couple quick questions for you and then hopefully we should be able to get this fixed quickly. Would you be able to tell me which version of the Deadline Client you have installed and which version of Deadline AWS Portal? Also, is the machine that you are getting these errors on the same machine that the Deadline AWS Portal Asset Server service is running on?

Cheers,
Justin

Yes, drives are mounted in that account. Just as before, nothing has changed, I just updated to .17

Portal is .17, Deadline Client is 16.6 - both the latest I believe. Yes, the machine I’m getting errors on has the Asset Server and link service running; it’s also running my RC and the Mongo and license server (he has many hats).

I can provide any other info necessary. I just did a clean install of the PortalLink and still had the same issue. I believe everything was working OK on PortalLink .16

Any thoughts on this? I always like to have AWS abilities in my back pocket as a fail safe, and I’m getting a bit nervous

thanks!

I think it’s a pretty good guess from Justin about the drives not being mapped. Oftentimes drives will be mapped for one user and not another, so it may very well be that’s the issue here… Can you give me a call and we can poke at it? I basically want to check whether the drives are mapped for the user both where the Monitor is running and where the service is running. I’m hoping that we can get Launcher running as a service on the same machine and send remote net use commands at it to sniff out a possible problem.
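One rough way to see what a service account actually has mapped (assuming the service runs as LocalSystem, which is common but not guaranteed here) is to open a SYSTEM command prompt with the Sysinternals PsExec tool and run net use from there:

    rem requires Sysinternals PsExec; opens an interactive shell as SYSTEM
    psexec -i -s cmd.exe

    rem inside that shell, list the drive mappings SYSTEM can see
    net use

If your own prompt and the SYSTEM one disagree, that’s the mismatch we’re after.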

Okay! Update for everyone here:

It was in fact that the service user didn’t have the drives mapped. We confirmed that by checking the AWS Portal Asset Server log at “C:\ProgramData\Thinkbox\AWSPortalAssetServer\logs\awsportalassetserver_controller.log”, where it lists not only the drives it currently knows about, but also mentions which drives it has tried mounting via “deadlinecommand mapdrives”.
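For anyone else digging through that log, something like the following pulls out the relevant lines (the search terms are just a guess at what to look for; adjust as needed):

    rem show any lines in the controller log that mention drives or mapdrives
    findstr /i "drive mapdrives" "C:\ProgramData\Thinkbox\AWSPortalAssetServer\logs\awsportalassetserver_controller.log"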

The fix in this case was to configure the drive mappings in the “Configure Repository Options” dialog (as a super-user) and then restart the AWS Portal Asset Server via Task Manager’s “Services” tab.
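If you’d rather do that restart from a command prompt than Task Manager, net stop / net start works too; I’m not sure the service name is identical on every install, so look it up first (the filter string and service name below are guesses):

    rem find the installed service name
    sc query state= all | findstr /i "AWSPortal"

    rem then restart it, substituting whatever name the query actually returns
    net stop "Deadline AWS Portal Asset Server"
    net start "Deadline AWS Portal Asset Server"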

We also chose to use the new “regions” options we introduced recently so that only a subset of machines would mount those drives.

To follow up on that, I am remoting into that same PC now, and the drives are not mapped in Explorer. If/when a job gets picked up, is that when they are mapped? Is it possible to keep them mapped in between such times, for the workstation user?

I had a small inkling of a memory that I couldn’t figure out mapped drives before, and maybe this is it.

edit: should have clarified - it looks like it’s mapping the path at render time, and then undoing it when the job is finished

Slaves should perform a mapping when they pick up a new task, but we shouldn’t be unmapping them after the render is done. At least I don’t think we have that in the core API and it doesn’t show that in the Slave logs. I wasn’t able to find any references to our “Unmap” utility function in Deadline anywhoo.

The AWS Portal Asset Server will also periodically do a mapping.

Yeah, I have to completely undo everything we did - it totally borked up our system. There is some issue with Launcher running as a service and Pulse: Pulse is running but not ACTUALLY running and won’t pass commands through (I suspect that is causing the WoL issues from my other thread). And the whole path mapping is totally messed up too - it will unmap drives, or give errors similar to this one:

edit: should clarify, the user login, NAS login, and mapped drive login info under repository options are all the same

edit2: a few observations - it does not like you having ANOTHER drive mapped to the same network server via Windows AND Deadline. My repository was sitting under a mapped R: drive - couldn’t have that auto-map via Deadline, because it will only map once connected to the repository, and it can’t connect to the repository because it can’t launch Launcher because it can’t connect to the repository, etc. etc. You get the idea.

So my solution was to have R: mapped in windows as normal, and then one of my asset drives (W, located on the same server) would be mapped via deadline repo options/mapped drives. Nope, didn’t like that.

So I set the repo path to the UNC location, but it still doesn’t like this. Presumably because the UNC still requires a login, and because it’s on the same server, Deadline won’t map W.

Create a custom region just for the path mapping on the local AWS Portal machine?

It’s true that SMB/Windows doesn’t allow multiple connections to the same server using different credentials from the same machine, so it’ll fight with the user’s mapped drives.
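For anyone who hits this later, the collision usually shows up as Windows error 1219 when a second connection to the same server is attempted with different credentials (server, share, and user names below are made up):

    net use R: \\fast\repo /user:fast\render_user
    net use W: \\fast\assets /user:fast\other_user

    rem the second command typically fails with:
    rem System error 1219 has occurred.
    rem Multiple connections to a server or shared resource by the same user,
    rem using more than one user name, are not allowed.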

Small update (that’s really more of a note to remind myself where we are at):

We narrowed the issue down to SMB connections being sucky. Basically, my repo is on a server named “fast”. I also map a few drives on that server. So I can’t use deadline’s mapped drive features because a connection is automatically made to that server (via unc path) when launcher is loaded, which means that subsequent drives can’t be mapped via deadline.

In the end, I THINK I should switch the server my repo is loaded onto, so it’s completely separate and isolated from my mapped drives. I’ll have to see what I can string together.

After reading some suggestions re: SMB and Windows, I tried to connect to the repo via IP address, and mapped drives via hostname (there was some thought that this would be OK, as Windows wouldn’t tie them together as the same server, so it would be allowed and not viewed as a single SMB server).
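For reference, the attempted setup looked roughly like this (the IP address is a placeholder for the address of the “fast” server):

    rem repository connection made against the server's IP address
    net use R: \\192.168.1.50\repo

    rem asset drive mapped against the hostname, hoping Windows treats it as a separate server
    net use W: \\fast\assets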

Anywho, that doesn’t seem to be working - the slaves do not have any drives mapped, and are constantly dequeuing tasks - it just flips between “waiting” and “dequeuing tasks” every half second, but nothing comes of it.

Again, this is more just a note for myself to chart my progress.

slight update: while only a single vray job was rendering, paths were not mapped. When it was time for the assembly job, paths WERE mapped, and now they are rendering the next vray job. I wonder if paths are only mapped at the beginning of the job, but not tasks? Or maybe they are not being mapped when starting a vray job?

Methinks drives are mapped before we load the application. I can’t remember seeing “mapping drive X:” when in batch mode. For paths, the Slave updates those at the regular 7-minute interval, so I tend to stop and start them if I’ve made changes (because I’m an impatient so-and-so).

Hmmm, I think something is off here too. I’ve waited 7/14/21 minutes, and nothing. There are jobs rendering or waiting in the queue (vray standalone), net use is empty, but I have one slave who will not map the drives. However, when a job comes up for tile assembly, then it seems to map drives, and then jump onto other projects.

Is there a way to check or verify this somehow? Logs?

updated: yeah, this is a confirmed issue on our slaves. Not sure why, but I restarted the slave that was exhibiting this behavior - no map. Let it sit for a while (maybe an hour?) while other jobs were rendering, and no map. When an assembly job came up, it mapped, started work, and then moved on to the vray standalone jobs (which it could not pick up before). So I suspect something is awry with the vray job submission or something that is causing it to not map drives.

Are you using asset dependencies on the jobs? I’m pretty sure we threw that Vrscene in there, which would block the Slave from picking it up. There’s a bit of a catch-22, I suppose, in that the drive mapping only occurs at plugin load time, and the scheduler thread is deciding whether or not that Slave should work on something when it doesn’t yet know which plugin to load.

Now, from a performance perspective we likely don’t want to map the drive on every idle job scan as that happens every two minutes by default, and that may be hard on farms with thousands of machines… I’ll open an issue about it and see where that ends up.

Bing! You’re right, I could totally see how that would be the case. It’s not mapping the drive because it can’t find the asset to start the job to map the drive, and round and round we go.

For right now, it’s seemingly OK, because the power management/WoL issues mean that our slaves are up and running 24/7 (hot!), so once an assembly job goes through, they map, and then are able to render the next jobs.
