There is a way to query the short log from the API using:
Slaves.GetSlaveReports()
and a way to get the contents of ALL slave reports:
Slaves.GetSlaveReportsContents()
But I could not find a way to simply figure out where the log files live for a particular slave. Could you guys point me in the right direction?
In particular, I am looking for these entries, organized by slave.
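For reference, this is roughly how I am calling those two methods through the standalone Python API. Just a minimal sketch; the host, port, and slave name are placeholders for whatever your web service uses.

from Deadline.DeadlineConnect import DeadlineCon

# connect to the Deadline web service (host and port are placeholders here)
con = DeadlineCon('webservicesnode', 8080)

slave_name = 'someslave'
short_log = con.Slaves.GetSlaveReports(slave_name)             # the short log entries
all_contents = con.Slaves.GetSlaveReportsContents(slave_name)  # contents of ALL slave reports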
As long as the job is still on the farm, we found that going through the REST API gave us the quickest way to get at this data:
import json
import urllib2

# parse_url() does the following: fetch a url and decode the json response
def parse_url(url):
    response = urllib2.urlopen(url)
    return json.load(response)

# set these for your farm / job
webservicesnode = "<web service host>"
jobid = "<job id>"
taskid = "<task id>"
include_events = False
debug = False

log_result = 0
# scripts that run on a job or task event show up in the report list too,
# so you will want something like this to skip past them
if not include_events:
    # find the first index that is not an event log
    get_url = "http://{0}:8080/api/taskreports?Data=all&JobID={1}&TaskID={2}".format(
        webservicesnode, jobid, taskid)
    result = parse_url(get_url)
    if result:
        idx = 0
        # go through the reports from newest to oldest
        for log_d in result:
            if debug:
                print 'result line::', log_d
            if 'Event Log' not in log_d.get('Title', ''):
                # only break out once we are no longer on an event line
                break
            idx += 1
        log_result = idx

get_url = "http://{0}:8080/api/taskreports?Data=alllogcontents&JobID={1}&TaskID={2}".format(
    webservicesnode, jobid, taskid)
result = parse_url(get_url)
# this is a list; the first entry is the most recent report
if result:
    last_log = result[log_result]
    # do stuff with the log here
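For example, assuming each entry returned by Data=alllogcontents is the raw report text, scanning the newest non-event log for errors is just:

# a minimal sketch: last_log is assumed to be the plain text of the
# newest non-event report fetched above
for line in last_log.splitlines():
    if 'ERROR' in line:
        print line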
If anyone has a faster way to do this, I'm all ears.
slave_name = "<some slave name>"
reports = RepositoryUtils.GetSlaveReports(slave_name).GetSlaveReports()  # yeah, the repetition is a thing
for report in reports:
    log_path = RepositoryUtils.GetSlaveReportLogFileName(report)
    print("Your report is here: {0}".format(log_path))