Hi Laszlo,
I assume you already have the “Configure Repository Options…” > “Job Settings” > “Auxiliary Files” section populated with your “jobs” paths on your Isilon?
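(If it helps, here's a quick Python sketch you could run from a render node to check that each configured aux path is actually reachable. The paths are hypothetical; substitute the real Isilon/Avere mounts at your site.)

    import os

    # Hypothetical aux paths as they might appear under
    # "Configure Repository Options..." > "Job Settings" > "Auxiliary Files".
    AUX_PATHS = [
        r"\\isilon\jobs",     # Windows clients (CIFS)
        "/mnt/isilon/jobs",   # Linux clients (NFS)
    ]

    for path in AUX_PATHS:
        print(path, "->", "OK" if os.path.isdir(path) else "NOT REACHABLE")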
I believe the configuration you want can be achieved on your Avere cluster.
You can configure your Avere global namespace to present itself either as a new namespace or as an override of your existing pipeline namespace, and it can present itself over CIFS as well as NFS (I assume all your Windows clients are connecting via CIFS, rather than running Windows 7 Enterprise and its built-in NFS client). Either way, Avere can serve your clients over NFS or CIFS; you will just get maybe 5-10% better performance with NFS compared to CIFS, assuming you're not on Windows 8.1 with its newer built-in SMB version.
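(From the client side, the choice just comes down to how you attach the namespace. A sketch using the standard OS mount commands; the vserver address and share name are hypothetical.)

    import platform
    import subprocess

    # "avere-vs1" and "jobs" are hypothetical; substitute your own names.
    if platform.system() == "Windows":
        # Windows clients: map the Avere namespace over CIFS/SMB.
        subprocess.run(["net", "use", "Z:", r"\\avere-vs1\jobs"], check=True)
    else:
        # Linux clients: mount the same namespace over NFS.
        subprocess.run(
            ["mount", "-t", "nfs", "avere-vs1:/jobs", "/mnt/jobs"],
            check=True,
        )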
Anyway, you can configure your Avere cluster to allow “write-around” when client machines are submitting (writing) jobs, BUT “read from” the Avere cluster when slaves pull the aux files (scene data, 3ds Max files, etc.) from the aux file path configured in the repository options, which actually points at your Isilon cluster. Avere handles all the trickery at the CIFS/NFS and global-namespace level.
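(To make the resulting data flow concrete, here's a toy Python sketch of what the two paths look like from a client's point of view. The mount points and helper names are hypothetical, not Deadline or Avere API calls.)

    import os
    import shutil

    ISILON_JOBS = "/mnt/isilon/jobs"  # direct path: submits write around the cache
    AVERE_JOBS = "/mnt/avere/jobs"    # cached path: slaves read through the Avere

    def submit_aux_file(local_file, job_id):
        """Write-around: submitting clients copy aux files straight to Isilon."""
        dest = os.path.join(ISILON_JOBS, job_id)
        os.makedirs(dest, exist_ok=True)
        shutil.copy(local_file, dest)

    def pull_aux_file(job_id, filename, dest):
        """Read path: slaves pull the same files via the Avere namespace,
        so repeated reads are served from the cache, not the Isilon."""
        shutil.copy(os.path.join(AVERE_JOBS, job_id, filename), dest)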
You could also configure the Avere cluster to make the Isilon aux file path a “hot folder” in Avere speak, which means it would recursively keep the contents of that path in its highest storage/memory tier (DRAM). However, with the number of jobs you guys have in your queue, which I can only guess occasionally includes some very large aux files, your DRAM cache will fill up very quickly and the Avere will start populating the lower storage tiers. In effect, that could noticeably degrade the Avere's performance for the rest of the facility, since everyone else is reading and writing files through the Avere cluster too.
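(To get a feel for how quickly that happens, here's a back-of-the-envelope sketch in Python. Every figure is a hypothetical placeholder; plug in your own queue depth, average aux payload, and node DRAM.)

    jobs_in_queue = 5_000        # active jobs whose aux files stay "hot"
    avg_aux_mb_per_job = 200     # average aux payload (scene + assets) per job
    dram_gb_per_node = 144       # DRAM tier per Avere node (hypothetical)
    nodes = 3

    hot_set_gb = jobs_in_queue * avg_aux_mb_per_job / 1024
    dram_gb = dram_gb_per_node * nodes

    print(f"Hot set: ~{hot_set_gb:,.0f} GB vs DRAM tier: {dram_gb} GB")
    if hot_set_gb > dram_gb:
        print("DRAM overflows; the excess spills into the lower tiers.")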
So, a couple of options for this last issue. Either configure a ‘cap’ on the total disk capacity the Avere will hold for the aux “jobs” directory, OR, since Deadline aux data can be considered ‘transient’ data that is only needed/useful at the beginning of a job (when the largest number of slaves are trying to PULL it), configure the Avere cluster with a FIFO/retention policy.
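(Conceptually, both options reduce to a capacity cap plus first-in-first-out eviction over the cached job data. A toy Python sketch of the idea follows; this is purely illustrative, not how Avere implements it internally.)

    from collections import OrderedDict

    class FifoRetention:
        """Toy FIFO/retention policy: once the capacity cap is hit,
        the oldest job's aux data is evicted first."""

        def __init__(self, capacity_gb):
            self.capacity_gb = capacity_gb
            self.jobs = OrderedDict()  # job_id -> size_gb, oldest first
            self.used_gb = 0.0

        def add_job(self, job_id, size_gb):
            self.jobs[job_id] = size_gb
            self.used_gb += size_gb
            while self.used_gb > self.capacity_gb and self.jobs:
                oldest, freed = self.jobs.popitem(last=False)
                self.used_gb -= freed
                print(f"Evicted {oldest} ({freed} GB) to stay under the cap")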
The logging, table views, and graphing/analysis tools on the Avere are absolutely brilliant and will give you good feedback on how the configuration is working for you, including stats on current “hot folders” and individual “hot machines”, so you will be able to see how well it is working and whether further tweaking is required.
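(If you ever want to sanity-check those dashboards against raw data, the idea boils down to tallying reads per folder and per client. Here's a sketch over a made-up access-log format; the real Avere stats come from its own tools, so treat this purely as an illustration of the concept.)

    from collections import Counter
    from pathlib import PurePosixPath

    # Hypothetical log lines: "<timestamp> <client> <op> <path>"
    log_lines = [
        "2014-05-01T10:00:01 node-17 READ /jobs/5503/scene.max",
        "2014-05-01T10:00:02 node-22 READ /jobs/5503/tex/wall.png",
        "2014-05-01T10:00:02 node-17 READ /jobs/5499/scene.max",
    ]

    hot_folders, hot_machines = Counter(), Counter()
    for line in log_lines:
        _, client, op, path = line.split()
        if op == "READ":
            hot_folders[str(PurePosixPath(path).parent)] += 1
            hot_machines[client] += 1

    print("Hot folders:", hot_folders.most_common(3))
    print("Hot machines:", hot_machines.most_common(3))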
There may well be other newer/advanced options in the Avere configuration (I'm no expert), so I would recommend speaking with your local Avere expert.
Mike