
housecleaning not actually housecleaning

Seems like most of the housecleaning process is doing stuff like this:

2014-08-14 14:56:47: Job Cleanup Scan - Archived completed job "[SEA] SQU_039_1450_v0006_gts_CullRopes_images_render3d_Light-Fog-MG_0 " because Auto Job Cleanup is enabled and this job has been complete for more than 10 days.
2014-08-14 14:56:56: Job Cleanup Scan - Archived completed job "[SEA] SQU_039_1450_v0228_gts_CullRopes_images_render3d_Light-AllWET_0 " because Auto Job Cleanup is enabled and this job has been complete for more than 10 days.
2014-08-14 14:56:59: Job Cleanup Scan - Archived completed job "[YETI] MR_249_1000_v0020_lle_LessSpreading_images_render3d_FL-Oil_0 " because Auto Job Cleanup is enabled and this job has been complete for more than 10 days.
2014-08-14 14:57:01: Job Cleanup Scan - Archived completed job "[YETI] MR_249_1000_v0020_lle_LessSpreading_images_render3d_FL-FireFast_0 " because Auto Job Cleanup is enabled and this job has been complete for more than 10 days.

instead of catching hung frames, stalled machines, and so on.

Could the archiving (and other time-consuming, but not “required to have a healthy farm”) operations be moved to a separate process?
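
For illustration only, here is a minimal Python sketch of that idea. It is not Deadline's actual code; every function in it (`archive_job`, `find_expired_jobs`, `run_health_checks`) is a hypothetical stand-in. The point is just that the slow archive work gets queued to a worker process, so the health checks run on every pass no matter how big the archive backlog is.

```python
# Hypothetical sketch, not Deadline's actual code: queue the slow archive
# work to a worker process so the housecleaning loop stays responsive.
import multiprocessing
import time

def archive_job(job_id):
    """Stand-in for the slow part: moving a job's data to the archive."""
    time.sleep(1)                     # simulate slow disk/database work
    print(f"archived {job_id}")

def find_expired_jobs():
    """Stand-in for the scan that finds jobs complete for > 10 days."""
    return ["job-001", "job-002"]

def run_health_checks():
    """Stand-in for the crucial checks: hung frames, stalled machines."""
    print("checked for hung frames and stalled machines")

def archive_worker(queue):
    """Drain queued archive requests in the background, one at a time."""
    while True:
        job_id = queue.get()
        if job_id is None:            # sentinel: shut the worker down
            break
        archive_job(job_id)

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    worker = multiprocessing.Process(target=archive_worker, args=(queue,))
    worker.start()
    for _ in range(3):                # a few housecleaning passes
        run_health_checks()           # never waits on the archive backlog
        for job_id in find_expired_jobs():
            queue.put(job_id)         # hand off the slow work, keep moving
        time.sleep(2)
    queue.put(None)
    worker.join()
```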

Hey Laszlo,

You can configure House Cleaning on the House Cleaning tab under “Configure Repository Options…”, where you can set maximums for archived jobs, auxiliary folders, etc. If you don't want these operations to monopolize the house cleaning process, you can lower all of those maximums.

Hope that helps! 🙂

Ryan G.

It doesn't really, because then these jobs never get cleaned out. We sometimes delete 10,000 jobs at a time. I would rather that just happened in another process in the background, so the crucial housecleaning could actually happen in a timely manner. We already have these maximums set fairly low, by the way.
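
To make the trade-off concrete, here is a hypothetical sketch of a per-pass cap in the spirit of those maximums; the cap value and job names are made up:

```python
# Hypothetical sketch of a per-pass cap: each housecleaning pass archives
# at most `cap` jobs and defers the rest to the next pass.
def housecleaning_pass(expired_jobs, cap=100):
    """Archive up to `cap` jobs per pass; return the deferred backlog."""
    for job_id in expired_jobs[:cap]:
        pass                     # stand-in for the slow archive work
    return expired_jobs[cap:]

backlog = [f"job-{i:05d}" for i in range(10_000)]  # e.g. a 10,000-job purge
passes = 0
while backlog:
    backlog = housecleaning_pass(backlog)
    passes += 1
print(f"cleared the backlog in {passes} passes")   # 100 passes at cap=100
```

At a cap of 100, a 10,000-job purge needs 100 housecleaning passes to clear, so lowering the maximums only stretches the backlog out over more passes; it doesn't stop cleanup from competing with the health checks.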

I know we’ve already split up the pending job scan operations from housecleaning, but perhaps we should also have a third thread that is strictly for purging/cleaning things up.
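
A rough sketch of that split, again purely illustrative (the thread bodies are stand-ins, not Deadline internals): pending job scanning, health checks, and purging each run on their own thread, so the slow purge work can take as long as it needs without delaying the other two.

```python
# Hypothetical sketch of the proposed split, not Deadline internals:
# pending job scans, health checks, and purging each get their own thread.
import threading
import time

def pending_job_scan(stop):
    while not stop.is_set():
        print("scanned pending jobs")                      # stand-in
        stop.wait(1)

def health_checks(stop):
    while not stop.is_set():
        print("checked hung frames and stalled machines")  # stand-in
        stop.wait(1)

def purge_loop(stop):
    while not stop.is_set():
        print("archived/purged one expired job")           # stand-in, slow
        stop.wait(3)

stop = threading.Event()
threads = [threading.Thread(target=fn, args=(stop,))
           for fn in (pending_job_scan, health_checks, purge_loop)]
for t in threads:
    t.start()
time.sleep(5)          # let the threads run briefly for the demo
stop.set()
for t in threads:
    t.join()
```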

Thanks!
Ryan
