Bug 1128008

Summary: track log file size in Beaker
Product: [Retired] Beaker
Component: scheduler
Version: 0.17
Status: CLOSED WONTFIX
Reporter: Dan Callaghan <dcallagh>
Assignee: beaker-dev-list
QA Contact: tools-bugs <tools-bugs>
CC: azelinka, cbouchar
Keywords: FutureFeature
Severity: unspecified
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Doc Type: Enhancement
Type: Bug
Last Closed: 2020-11-12 20:34:55 UTC

Description Dan Callaghan 2014-08-08 06:01:38 UTC
Beaker does not currently record the size of job logs. This would be useful for finding logs that consume a large amount of disk space, and for implementing a log storage quota system in the future.

We don't want to upload the log file size on every chunk, since that would mean an extra LC->scheduler call for every chunk. Instead, beaker-transfer could record the log file size when it moves the files to the archive server. The downside of that approach is that we won't have any log file size information while the recipe is running (only after it has finished), and Beaker installations which aren't using beaker-transfer will never have log file size information at all.
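
A minimal sketch of what that could look like in beaker-transfer, assuming a hypothetical recipes.update_log_size scheduler call (Beaker has no such API today):

    import os
    import shutil

    def transfer_recipe_logs(log_dir, dest_dir, proxy, recipe_id):
        # Move each finished recipe log to the archive server, capturing its
        # size just before the move so it can be recorded in Beaker.
        # proxy.recipes.update_log_size() is a hypothetical scheduler call.
        for name in os.listdir(log_dir):
            src = os.path.join(log_dir, name)
            size = os.path.getsize(src)
            shutil.move(src, os.path.join(dest_dir, name))
            proxy.recipes.update_log_size(recipe_id, name, size)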

Comment 3 Nick Coghlan 2014-08-08 06:08:22 UTC
Could beaker-watchdog track it?

Comment 4 Dan Callaghan 2014-08-08 06:14:17 UTC
The problem is that the LC daemons are (quite intentionally) stateless, so the only place to track it is in the Beaker database itself. Updating that requires a call to the scheduler, which we don't want to do for every log chunk.

Comment 5 Nick Coghlan 2014-08-11 08:06:47 UTC
Could we have a scheme where we rate-limited the size updates based on the difference between the current size and the last reported size? It would mean some runtime state in the watchdog daemon (to remember the last reported size), but it could safely reset to zero if the daemon was restarted.
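
A minimal sketch of that delta-based rate limiting, with the last reported sizes held only in memory; report_log_size is a hypothetical scheduler call, not an existing Beaker API:

    # State lives only in memory, so a watchdog restart resets it to zero;
    # the worst case is one extra (harmless) report per log.
    REPORT_THRESHOLD = 1024 * 1024  # only report growth of 1 MB or more

    last_reported = {}  # (recipe_id, log_path) -> size last sent to the scheduler

    def maybe_report_size(proxy, recipe_id, log_path, current_size):
        key = (recipe_id, log_path)
        if current_size - last_reported.get(key, 0) >= REPORT_THRESHOLD:
            proxy.report_log_size(recipe_id, log_path, current_size)
            last_reported[key] = current_size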

Comment 6 Dan Callaghan 2014-08-11 08:40:50 UTC
(In reply to Nick Coghlan from comment #5)
> Could we have a scheme where we rate-limited the size updates based on the
> difference between the current size and the last reported size? It would
> mean some runtime state in the watchdog daemon (to remember the last
> reported size), but it could safely reset to zero if the daemon was
> restarted.

I'm not sure that would be worth the effort. It would mean that the recorded size in Beaker would not be reliable. We don't know when a log is "complete" (there's no such notion), so beaker-watchdog would have to just decide, after some timeout, that a log has stopped growing and its size should be updated -- but if beaker-watchdog is stopped before that happens, and no more chunks are uploaded afterwards, then the size will never be updated. So beaker-transfer would still need to update the size when it archives the logs anyway, at which point I don't think we've gained much by making beaker-watchdog do it as well.