Apparently, openshift-apiserver-sa has a dependency on SAR (SubjectAccessReview) as part of its health check, which causes openshift-apiserver to be restarted during a kube-apiserver rollout in SNO.

How reproducible:
Always. Using cluster-bot:
1. Launch a nightly aws,single-node cluster.
2. Update the audit log verbosity to AllRequestBodies.
3. Wait for the API rollout:
   oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}{end}'
4. Reboot the node to clean up the caches (oc debug node/ip-10-0-136-254.ec2.internal).
5. Wait for the node to come back.
6. Grep the audit log:
   oc adm node-logs ip-10-0-128-254.ec2.internal --path=kube-apiserver/audit.log | grep -i health | grep -i subjectaccessreviews | grep -v Unhealth > rbac.log
   cat rbac.log | jq . -C | less -r | grep 'username' | sort | uniq

Actual results:
~/work/installer [master]> cat rbac.log | jq . -C | less -r | grep 'username' | sort | uniq
  "username": "system:serviceaccount:openshift-apiserver:openshift-apiserver-sa",

Expected results:
The openshift-apiserver-sa service account should not appear in health-related SubjectAccessReview audit entries.

Additional info:
Affects SNO stability during API rollouts (certificate rotation).
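The grep/jq pipeline in step 6 can be approximated in a self-contained script. This is a rough sketch, not the exact pipeline from the report: it assumes the audit log is one JSON event per line in the audit.k8s.io Event format, and it mimics the case-insensitive grep filters by matching against the serialized event.

```python
import json

def sar_health_usernames(audit_lines):
    """Collect unique usernames from audit events that mention both
    'subjectaccessreviews' and 'health' (mirroring the
    grep -i health | grep -i subjectaccessreviews | grep -v Unhealth
    pipeline from the reproduction steps)."""
    usernames = set()
    for line in audit_lines:
        line = line.strip()
        if not line:
            continue
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip any non-JSON lines in the node log output
        raw = json.dumps(event).lower()
        if "subjectaccessreviews" not in raw or "health" not in raw:
            continue
        if "unhealth" in raw:
            continue  # mirrors grep -v Unhealth
        username = event.get("user", {}).get("username")
        if username:
            usernames.add(username)
    return sorted(usernames)
```

It would be fed the output of `oc adm node-logs <node> --path=kube-apiserver/audit.log`; per the report, the result should not include the openshift-apiserver-sa service account.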
This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity/priority. If you have further information on the current state of the bug, please update it, otherwise this bug can be closed in about 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant. Additionally, you can add LifecycleFrozen into Keywords if you think this bug should never be marked as stale. Please consult with bug assignee before you do that.
The LifecycleStale keyword was removed because the bug got commented on recently. The bug assignee was notified.
The health check for openshift-apiserver should not involve a SAR to the kube-apiserver. Can you share the full audit entry in question? Please remember that the audit level should be set to AllRequestBodies, as you did originally.
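To answer the request above, the full matching audit entries (rather than just the usernames) could be pulled out with a sketch like the following. It assumes the audit.k8s.io Event format with a `requestURI` and `user.username` field, and uses the service-account name from the report; it is illustrative, not part of the original reproduction steps.

```python
import json

TARGET_USER = "system:serviceaccount:openshift-apiserver:openshift-apiserver-sa"

def full_sar_entries(audit_lines, username=TARGET_USER):
    """Return complete, pretty-printed audit events in which the given
    user hit the subjectaccessreviews endpoint, suitable for attaching
    to the bug."""
    entries = []
    for line in audit_lines:
        try:
            event = json.loads(line)
        except (json.JSONDecodeError, TypeError):
            continue  # skip non-JSON lines
        if event.get("user", {}).get("username") != username:
            continue
        if "subjectaccessreviews" not in event.get("requestURI", ""):
            continue
        entries.append(json.dumps(event, indent=2, sort_keys=True))
    return entries
```

With AllRequestBodies enabled, each returned event should also carry the SubjectAccessReview body under `requestObject`, which is the part most useful for identifying which health check issues the SAR.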
Dear reporter, as part of the migration of all OpenShift bugs to Red Hat Jira, we are evaluating all bugs, which may result in stale issues, or those without high or urgent priority, being closed. If you believe this bug still requires engineering resolution, we kindly ask you to follow the link below [1] and continue working with us in Jira by recreating the issue and providing the necessary information. Please also include a link to the original Bugzilla in the description. To create an issue, follow this link: [1] https://issues.redhat.com/secure/CreateIssueDetails!init.jspa?pid=12332330&issuetype=1&priority=10300&components=12367637