We often get this error when trying to render C4D Redshift jobs on the farm.
Error: FailRenderException : RenderTask: Unexpected exception (RenderDocument failed with return code 1 meaning: Not enough memory.)
It says “Not enough memory”, but later in the log it mentions a license issue:
2021-06-04 17:51:25: 0: STDOUT: Redshift Error:
License mismatch. Please contact support@redshift3d.com and include this log file as well as your floating license file
2021-06-04 17:51:25: 0: STDOUT: Redshift Error:
Rendering aborted due to license failure
The weird thing is that it’ll error like this a handful of times on a node, but then render successfully on that same node within the same job. Is this really a memory issue, or are the Redshift licenses crapping out? I seem to recall a different error when we run out of Redshift licenses.
It is hard to tell which came first: the not-enough-memory error or the license error.
You say the license error appears later in the log, so chances are the memory issue is the actual cause of the failure, especially since it happens only sometimes rather than consistently.
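If you want to confirm that ordering from the log itself, here is a minimal sketch that scans a saved task report for the first occurrence of each error string. The strings are the ones quoted above; the report path is whatever you export from the Monitor, so treat it as a placeholder.

```python
# Minimal sketch: scan a saved Deadline task report for the first occurrence of
# the memory error vs. the license error, to see which one shows up first.
# The report path is hypothetical; save the report text from the Monitor and
# point the script at it.
import re
import sys

PATTERNS = {
    "memory": re.compile(r"Not enough memory|out of memory", re.IGNORECASE),
    "license": re.compile(r"License mismatch|license failure|All licenses in use", re.IGNORECASE),
}

def first_hits(report_path):
    hits = {}
    with open(report_path, "r", errors="ignore") as f:
        for lineno, line in enumerate(f, 1):
            for name, pattern in PATTERNS.items():
                if name not in hits and pattern.search(line):
                    hits[name] = (lineno, line.strip())
    return hits

if __name__ == "__main__":
    # Prints the matches in the order they appear in the log.
    for name, (lineno, text) in sorted(first_hits(sys.argv[1]).items(), key=lambda kv: kv[1][0]):
        print(f"line {lineno:6d}  [{name}]  {text}")
```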
That being said, it would be nice to know more about both your licensing setup and your GPU setup.
Are you using Redshift floating licenses or Redshift UBL?
How many licenses do you have?
Have you configured a Redshift limit to avoid Worker failures due to insufficient licenses?
What graphics cards/GPUs do you have on these machines, and how much VRAM do they have?
Do you have any stats on how much memory is required by this particular scene? Redshift can do out-of-core rendering to some extent, so how likely is it that it is actually running out of memory?
Have you tried running a super simple Redshift scene that is guaranteed to not use up all your memory to see if all machines succeed? If the problem is licensing, you would expect it to reproduce even with a sphere on a plane and a single light setup…
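If it helps, here is a rough sketch of how you could batch-submit that test to every machine as a single-frame job pinned to one Worker each. Treat everything in it as an assumption: the deadlinecommand path, the scene path, the Worker names, and the JobInfo/PluginInfo key names should all be compared against the files your normal Cinema4D submitter writes out.

```python
# Sketch only: submit one single-frame test job per Worker, whitelisted to that
# machine, so every box gets exercised.  Paths, Worker names, and the
# JobInfo/PluginInfo keys below are assumptions -- check them against the files
# your regular Cinema4D submission produces before relying on them.
import subprocess
import tempfile

DEADLINE_COMMAND = r"C:\Program Files\Thinkbox\Deadline10\bin\deadlinecommand.exe"  # adjust
TEST_SCENE = r"\\server\projects\rs_test\sphere_plane_light.c4d"                    # adjust
WORKERS = ["node01", "node02", "node03"]                                            # adjust

def submit_test_job(worker):
    job_info = "\n".join([
        "Plugin=Cinema4D",
        f"Name=RS license/memory test - {worker}",
        "Frames=0",
        f"Whitelist={worker}",   # pin the job to this one Worker
    ])
    plugin_info = "\n".join([
        f"SceneFile={TEST_SCENE}",
        "Version=24",            # your C4D version
    ])
    with tempfile.NamedTemporaryFile("w", suffix=".job", delete=False) as ji, \
         tempfile.NamedTemporaryFile("w", suffix=".job", delete=False) as pi:
        ji.write(job_info)
        pi.write(plugin_info)
        ji_path, pi_path = ji.name, pi.name
    subprocess.run([DEADLINE_COMMAND, ji_path, pi_path], check=True)

for worker in WORKERS:
    submit_test_job(worker)
```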
I had a similar problem this week: the scene renders fine on the node inside C4D but crashes through Deadline.
The solution for us was taking out the two old 980 Tis and leaving only the two RTX 2080 Tis.
No idea why it only happened through Deadline while rendering fine in C4D.
Are you using Redshift floating licenses or Redshift UBL?
Floating
How many licenses do you have?
Around 110 across 4 sites
Have you configured a Redshift limit to avoid Worker failures due to insufficient licenses?
No limit, because our licenses are spread across 4 sites with 4 separate Deadline farms plus workstations. When we do run out of licenses, we get a different error that specifically states a license failure, usually something like “Redshift Error: License error: (RLM) All licenses in use (-22)”, and we’ll also have issues with the IPR on the workstations. In this particular case that has not come up, and according to our license logs we are not maxed out.
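A quick tally across a folder of saved job reports can also confirm whether the RLM exhaustion error ever appears versus the memory and license-mismatch ones. This is only a sketch; the report folder and file layout are assumptions.

```python
# Sketch: tally which error signature shows up in a folder of saved Deadline
# job/task reports, to check whether the RLM "(-22)" exhaustion error ever
# appears versus the memory / license-mismatch errors.  The folder of exported
# report .txt files is an assumption -- save them out of the Monitor first.
from collections import Counter
from pathlib import Path

SIGNATURES = {
    "not enough memory": "memory",
    "license mismatch": "license mismatch",
    "all licenses in use (-22)": "RLM licenses exhausted",
}

counts = Counter()
for report in Path(r"\\server\deadline_reports").glob("*.txt"):   # adjust path
    text = report.read_text(errors="ignore").lower()
    for needle, label in SIGNATURES.items():
        if needle in text:
            counts[label] += 1

for label, n in counts.most_common():
    print(f"{label}: {n} report(s)")
```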
What graphics cards/GPUs do you have on these machines, and how much VRAM do they have?
Most of the boxes have dual 1080s, but not all of them. The errors also seem inconsistent across machine configurations.
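For reference, a quick way to confirm what each box actually has is to query nvidia-smi for per-GPU VRAM; a small wrapper like this prints name, total and used memory per GPU (nvidia-smi just needs to be on the PATH).

```python
# Quick check of per-GPU VRAM on a node using nvidia-smi's CSV query output.
# The query fields used here (name, memory.total, memory.used) are standard
# nvidia-smi --query-gpu fields.
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total,memory.used", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for idx, line in enumerate(result.stdout.strip().splitlines()):
    name, total, used = [field.strip() for field in line.split(",")]
    print(f"GPU {idx}: {name} | total {total} | used {used}")
```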
Do you have any stats on how much memory is required by this particular scene? Redshift can do out-of-core rendering to some extent, so how likely is it that it is actually running out of memory?
Don’t know, but I can look into it. I’ve seen this often on our farm across many different jobs.
Have you tried running a super simple Redshift scene that is guaranteed to not use up all your memory to see if all machines succeed? If the problem is licensing, you would expect it to reproduce even with a sphere on a plane and a single light setup…
I will try running a test when I get a chance. I’m starting to think it is a memory issue. Most of our farm is made up of workstations. Sometimes users will slave their machines while working and other times they will just log out to add their machine to the farm. But this particular example is from machines that have no users logged in, so it’s not like they’re competing for resources.
I will gather some more info on this when I get a chance. Thanks.
OK, I finally got around to testing a simple scene, and it runs through with no problem. I also noticed in the logs that if I sort by Peak RAM, that shows me all the tasks that errored, so it’s definitely looking like a memory issue.
Sorted by Peak RAM. This makes sense with the 1080s and their 8 GB of VRAM.
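For anyone wanting to do the same check outside the Monitor, here is a rough sketch that pulls Peak RAM values out of a folder of saved task reports and flags which of those also hit the memory error. The field name and report layout vary, so treat the regex and paths as assumptions.

```python
# Sketch: extract a "Peak RAM" value from each saved task report, flag whether
# that report also contains the memory error, and list the highest peaks first.
# The exact field wording and report layout are assumptions -- adjust the regex
# to match what your reports actually contain.
import re
from pathlib import Path

PEAK_RE = re.compile(r"Peak RAM[^:]*:\s*([\d.]+)\s*([GM]B)", re.IGNORECASE)

def to_gb(value, unit):
    return float(value) if unit.upper() == "GB" else float(value) / 1024.0

rows = []
for report in Path(r"\\server\deadline_reports").glob("*.txt"):   # adjust path
    text = report.read_text(errors="ignore")
    match = PEAK_RE.search(text)
    peak_gb = to_gb(*match.groups()) if match else 0.0
    errored = "not enough memory" in text.lower()
    rows.append((peak_gb, errored, report.name))

# If it really is memory, the errored reports should cluster at the top.
for peak_gb, errored, name in sorted(rows, reverse=True):
    print(f"{peak_gb:8.2f} GB  {'ERROR' if errored else 'ok':5}  {name}")
```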
In most rendering cases Deadline is just reporting the bad news; the actual errors come from the rendering application. Deadline also has no knowledge of the VRAM situation, so it is very hard to know why a task would fail on a machine and then magically succeed on the next try on the same machine. I am not sure even MAXON or the Redshift team could answer that…
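Since Deadline does not track VRAM, one workaround is a small sidecar script on each node that polls nvidia-smi and appends per-GPU memory usage to a local log, so a failed task’s timestamp can be lined up against GPU memory pressure afterwards. The interval and log path below are arbitrary choices.

```python
# Sidecar sketch: poll nvidia-smi every few seconds and append per-GPU memory
# usage to a local CSV log, so failed task timestamps can be compared against
# VRAM pressure afterwards.  Interval and log path are arbitrary choices.
import subprocess
import time
from datetime import datetime

LOG_PATH = r"C:\temp\vram_log.csv"   # adjust per node
INTERVAL_SECONDS = 5

with open(LOG_PATH, "a") as log:
    while True:
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=index,memory.used,memory.total",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True,
        )
        stamp = datetime.now().isoformat(timespec="seconds")
        for line in result.stdout.strip().splitlines():
            log.write(f"{stamp},{line.replace(' ', '')}\n")
        log.flush()
        time.sleep(INTERVAL_SECONDS)
```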