Bug 1699471
Summary: | test to ensure we never have more than 20 revisions | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | Luis Sanchez <sanchezl>
Component: | kube-apiserver | Assignee: | Stefan Schimanski <sttts>
Status: | CLOSED ERRATA | QA Contact: | Xingxing Xia <xxia>
Severity: | medium | Docs Contact: |
Priority: | high | |
Version: | 4.1.0 | CC: | aos-bugs, deads, gblomqui, jokerman, mfojtik, mmccomas
Target Milestone: | --- | |
Target Release: | 4.2.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2019-10-16 06:28:05 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Luis Sanchez
2019-04-12 19:43:34 UTC
We need this to prevent future hotloops like the one we saw in the kube-scheduler-operator.

Do we want to set a limit on the prune controller too, so it keeps the maximum at <= 20 revisions, along with the test? Right now the Succeeded/Failed revision limits can be set higher than that without any problem, and I believe 0 or -1 actually means unlimited revisions. Is there more information on what this is for?

@Mike, my mention of pruning in the original description was incorrect/irrelevant. This is more about a fresh cluster that already has over 20 revisions right after install, which strongly suggests there is some hotlooping going on.

Ah, that is interesting, considering that by default we set the limit at 5/5 succeeded/failed revisions. It seems to me that older revisions occasionally get stuck in the "InProgress" state (e.g., I have a cluster right now with 9 revisions, but somehow #4 never moved past InProgress). We don't count InProgress revisions toward any limit, so there may also be some work here to take those into consideration or to refine our logic for determining success/failure.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922
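As a rough illustration of the kind of check the summary describes ("test to ensure we never have more than 20 revisions"), one could count the revision ConfigMaps the static pod operator leaves behind for each installed revision. This is a minimal sketch, not the actual test that was merged: the `openshift-kube-apiserver` namespace, the `kube-apiserver-pod-` name prefix, the kubeconfig handling, and the threshold of 20 are assumptions taken from this discussion.

```go
// Hypothetical sketch of a revision-count guard. The namespace, ConfigMap name
// prefix, and limit of 20 are assumptions from this bug's discussion, not the
// actual e2e test.
package e2e

import (
	"context"
	"strings"
	"testing"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func TestRevisionCountStaysBounded(t *testing.T) {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		t.Fatalf("loading kubeconfig: %v", err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		t.Fatalf("building client: %v", err)
	}

	// Each installed revision is assumed to leave a "kube-apiserver-pod-<N>"
	// ConfigMap behind; counting them approximates the number of revisions
	// produced since install.
	cms, err := client.CoreV1().ConfigMaps("openshift-kube-apiserver").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		t.Fatalf("listing configmaps: %v", err)
	}
	revisions := 0
	for _, cm := range cms.Items {
		if strings.HasPrefix(cm.Name, "kube-apiserver-pod-") {
			revisions++
		}
	}
	if revisions > 20 {
		t.Fatalf("found %d revisions, expected at most 20 (possible hotloop)", revisions)
	}
}
```

Note that, per the comments above, such a count can exceed the default 5/5 succeeded/failed pruning limits when revisions get stuck InProgress, since InProgress revisions are not counted toward any limit, and a limit of 0 or -1 is treated as unlimited.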