
HUGE pulse log files

We just discovered that Pulse had generated 10GB of log files last night. Unfortunately, IT deleted all of them before I could steal one to dissect, but they said they were plain text.

Hopefully it’ll create a few new ones in the next hour or two so that I can pass along an extract.

I found out what it’s doing:

C:\Users\<username>\AppData\Local\Temp

is where it’s creating these log files every few minutes. But instead of appending to the existing log or starting a fresh one, it dumps the last log into the new log along with the current output, so the size doubles each time: 6KB, 12KB, 23KB, 47KB. Like the emperor and the rice on the chessboard, it hits the max file size pretty quickly.
Deadline Pulse 5.1 [v5.1.0.45029 R] - Exception - 2011-07-28 09-39-16.zip (9.27 KB)
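
To put that doubling in perspective, here’s a rough back-of-the-envelope sketch; the 5-minute interval is an assumption, not a measured figure from the logs:

    # Back-of-the-envelope: a log that doubles on every write.
    # Assumptions (not measured): starts at 6 KB, a new log every 5 minutes.
    size_kb = 6.0
    minutes = 0
    while size_kb < 10 * 1024 * 1024:  # 10 GB expressed in KB
        size_kb *= 2
        minutes += 5
    print("%d doublings, about %.1f hours, to pass 10 GB" % (minutes / 5, minutes / 60.0))
    # -> 21 doublings, roughly 1.8 hours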

How do I disable that Pulse logging? I don’t see an option anywhere.

Have you tried deleting this file:

\\sfs-file\repository5\limitGroups\000_060_000_5d2804c2.limitGroup

That’s the one causing the error, so deleting it should “fix” the issue for now.

We should also do 2 things:

  1. This error probably shouldn’t result in exception logs.
  2. Exception logs should only contain one exception. :slight_smile:

I deleted that file and it did stop the error messages… for now. But I would rather not have a ticking time bomb that can take out our email just waiting for the next random error to explode. :wink:

Definitely! We’ve already addressed the two issues I referred to in our internal version, and in the meantime, you should be able to prevent this from happening in the future by disabling Remote Error Reporting in the repository options.

Just a heads up: we’re turning off Pulse, since disabling remote error reporting seemed to just move the problem to the \ProgramData\Thinkbox\Deadline\logs\ folder. So whatever was dumping to Temp is now dumping to the logs folder instead.
deadlinepulse(Sfs-sbs)-2011-07-28-0001.zip (18.9 KB)

Fair enough. The issue should be resolved in beta 1, which we really hope to get out next week.

Any chance you still have one of those problematic limit group files in the repository (ie: \\sfs-file\repository5\limitGroups\000_060_000_5d2804c2.limitGroup)? It’s strange that you’re getting corrupt limit group files as often as you are, and I just want to make sure we aren’t writing data to it that results in it being corrupted.

Yeah, I can pull it out of shadowcopy…

Looks like the folder didn’t exist, which is probably the problem.
000_060_000_5d2804c2.limitGroup.zip (556 Bytes)

Thanks! The missing corresponding folder actually isn’t a problem, as it will get recreated if needed. However, the 000_060_000_5d2804c2.limitGroup file’s contents are completely scrambled, which explains the error.
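
If you want to verify that none of the other limit group files are scrambled, a quick scan along these lines should flag anything that no longer parses. This is just a sketch, not part of Deadline: the repository path is taken from this thread, and it assumes the limit group files are plain XML, as described below.

    # Hypothetical diagnostic: flag .limitGroup files that no longer parse as XML.
    import glob
    import os
    import xml.etree.ElementTree as ET

    limit_group_dir = r"\\sfs-file\repository5\limitGroups"  # path from this thread
    for path in glob.glob(os.path.join(limit_group_dir, "*.limitGroup")):
        try:
            ET.parse(path)
        except ET.ParseError:
            print("Scrambled: %s" % path)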

When Deadline writes an XML file, it first writes a temp file beside it (ie: 000_060_000_5d2804c2.limitGroup.Scanbrain_504) and then if that succeeds, it copies it over the original. Based on the files, Scanbrain was the last machine to update this limit file. I’m curious now if this machine is responsible for corrupting the other files as well, or if it’s a random problem…
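
For anyone curious, the general pattern described above (write a temp file beside the target, then replace the original only if the write succeeds) looks roughly like this. The function and argument names are illustrative only, not Deadline’s actual code:

    # Illustrative sketch of the write-temp-then-replace pattern described above.
    # Names and paths are made up for the example.
    import os
    import shutil

    def safe_write(target_path, data, machine_name):
        # e.g. 000_060_000_5d2804c2.limitGroup.Scanbrain_504
        temp_path = "%s.%s" % (target_path, machine_name)
        with open(temp_path, "wb") as f:
            f.write(data)  # if this fails, the original file is untouched
        shutil.copyfile(temp_path, target_path)  # replace the original only after a successful write
        os.remove(temp_path)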

You know, I didn’t have them check the timecodes on the “new” log files. Maybe they were the same old log files but just a second copy. I don’t see any other limit groups listed in any of the log files anywhere.

Ok I remembered I could open a shadow-copy of the old logs and they were very insightful. :wink:

I guess it wasn’t creating new logs. What happened was that it ran out of space yesterday; I deleted the \Temp\logs, and then Symantec downloaded a GB of definition updates, which pushed us back over the edge for space. We only discovered the \ProgramData\Thinkbox\Deadline\logs duplicates today. They were created yesterday at the same time and aren’t still growing.

The only corrupted limit group was the one I attached. So it was just a case of one error causing a mammoth log file, not a whole bunch of corrupted limit groups.

Thanks for confirming this! So with beta 1, this type of problem should be no more. :slight_smile:
