| Summary: | /distribution/reservesys should wait if Updating status | | |
|---|---|---|---|
| Product: | [Retired] Beaker | Reporter: | Marian Ganisin <mganisin> |
| Component: | tests | Assignee: | beaker-dev-list |
| Status: | CLOSED WONTFIX | QA Contact: | tools-bugs <tools-bugs> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 23 | CC: | mjia |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-10-21 14:12:42 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
Marian Ganisin
2016-08-23 12:11:27 UTC
Comment 1 (Dan):
I guess you would have been hitting this problem last week, when the data migration was slowing down the scheduler so that update_dirty_jobs was taking several minutes to run, right? Since Tuesday last week the scheduler has been back to normal, processing status updates in ~20 seconds, so you should only hit this extremely rarely now.

Comment 2 (Roman Joost):
Dear Marian,

thanks for your report. Based on Dan's reply I'm thinking of closing this bug, since it is due to the load of the data migration. I know false alarms can be very frustrating. Would this be acceptable?

Comment 3 (Marian Ganisin):
(In reply to Roman Joost from comment #2)
> Dear Marian,
>
> thanks for your report. Based on Dan's reply I'm thinking of closing this
> bug, since it is due to the load of the data migration. I know false alarms
> can be very frustrating. Would this be acceptable?

An alternative approach is to implement a kind of loop which waits until a "known" state is available, to avoid faulty behavior under any condition. Do as you wish.

Comment 4:
So the problem is that "Updating..." is not a status; it is a hack in the web UI to avoid showing the current status from the database when we know it is wrong because the job is "dirty". ("Dirty" means that a status update is pending in beakerd.)
However in the recipe XML (which is what /distribution/reservesys is looking at, to determine if the previous task passed or not) we don't expose the "dirty" flag on the job, nor the "Updating..." status. Instead it just appears with the old values status="Running" result="New" until beakerd updates them.
We could probably make it loop until the result is something other than New. In theory an alternative harness could produce tasks with a New result, but I don't think any intentionally do that.
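To illustrate the kind of loop suggested here, the following Python snippet is a sketch (not the actual /distribution/reservesys implementation) that polls the Beaker results XML via `bkr job-results` until the previous task's result is something other than New; the job ID, task name, and timing values are hypothetical.

```python
#!/usr/bin/env python
# Sketch only: poll the results XML until the previous task's result is no
# longer "New", i.e. until beakerd has processed the pending ("dirty")
# status update. J:12345 and /distribution/install are placeholder values.
import subprocess
import time
import xml.etree.ElementTree as ET

JOB = 'J:12345'                      # hypothetical job ID
PREV_TASK = '/distribution/install'  # hypothetical previous task name
TIMEOUT = 600                        # give up after 10 minutes
POLL = 30                            # re-check every 30 seconds

def previous_task_result():
    xml = subprocess.check_output(['bkr', 'job-results', JOB])
    root = ET.fromstring(xml)
    for task in root.iter('task'):
        if task.get('name') == PREV_TASK:
            return task.get('result')
    return None

deadline = time.time() + TIMEOUT
result = previous_task_result()
while result == 'New' and time.time() < deadline:
    time.sleep(POLL)
    result = previous_task_result()

print('Previous task result: %s' % result)
```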
Comment 5 (Roman Joost):
Dear Marian, we had another look at this. Dan pointed me to a discussion about the reservesys element, which currently lacks RESERVE_IF_FAIL functionality. We think the better way out of this would be to equip Beaker to handle reservation in case of failure with <reservesys /> instead of adding more functionality around this task. Until we have a backlog item for this, I'll keep this report open.

Comment 6 (Roman Joost):
Dear Marian, we'd like to proceed with implementing the RFE from Bug 1100593 (Conditional reservation support for harness independent reservation) in favour of this bug. I've bumped its priority, and I think time spent on that support would benefit everyone more than adding more hacks to /distribution/reservesys. Personally I'd like to close this bug as WONTFIX with a reference to Bug 1100593, but I'm also happy to keep it open and close it once Bug 1100593 is resolved, if you feel it should be kept. Let me know what you think. Cheers!
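As a rough illustration of that direction, the sketch below (assuming a local `job.xml` job definition and a Beaker version that supports the recipe-level `<reservesys/>` element) swaps an appended /distribution/reservesys task for `<reservesys/>`. Plain `<reservesys/>` reserves unconditionally; reserving only on failure (the RESERVE_IF_FAIL equivalent) is exactly what Bug 1100593 asks for.

```python
# Sketch only: rewrite a hypothetical job.xml so that reservation is handled
# by the recipe-level <reservesys/> element rather than an appended
# /distribution/reservesys task.
import xml.etree.ElementTree as ET

tree = ET.parse('job.xml')
for recipe in tree.iter('recipe'):
    # Drop any appended /distribution/reservesys task from the recipe...
    for task in recipe.findall('task'):
        if task.get('name') == '/distribution/reservesys':
            recipe.remove(task)
    # ...and let Beaker reserve the system at the end of the recipe instead.
    ET.SubElement(recipe, 'reservesys')
tree.write('job-with-reservesys.xml')
```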