+++ This bug was initially created as a clone of Bug #1745102 +++

We've found a layering violation between the openshift and kube control planes. SCC requires annotations on namespaces to set default UIDs to create pods, and clusterresourcequota (CRQ) requires reconciliation to free quota to create pods. The controllers which do these things live in the openshift-controller-manager even though they have no logical openshift dependency. In 4.1, we partially fixed this by creating these resources as CRDs so they were always available, but we missed the controllers that are responsible for keeping these resources functional inside the cluster.

We need to pull the "openshift.io/namespace-security-allocation" and "openshift.io/cluster-quota-reconciliation" controllers into a spot above the openshift-apiserver so that our platform can continue to create pods even if part of the openshift control plane is down.

Best known option: a new image used in a new container in the existing kube-controller-manager static pod. This gives us resiliency during disaster recovery that a normal pod would not provide, does not require a new operator or a change to topology, and does not complicate a rebase.

We have migrated the security and quota controllers to openshift/cluster-policy-controller, which runs in the openshift-kube-controller-manager static pod. This 4.2 bug tracks the leader-election lock and RBAC needed for upgrades from 4.2 to 4.3 as a result of the quota and security controller migration.
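For context, a rough way to confirm the placement described above is to check that a cluster-policy-controller container runs inside the kube-controller-manager static pod and maintains its own leader-election lock. This is only a sketch; the container name and the form of the leader-election lock are assumptions, not confirmed by this bug.

# Sketch only: container and lock names are assumptions.
# List the containers of the kube-controller-manager static pods; expect a
# cluster-policy-controller container alongside kube-controller-manager.
oc get pods -n openshift-kube-controller-manager \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'

# Inspect leader-election locks in that namespace (leader election may be
# recorded in a ConfigMap in this release); look for an entry used by
# cluster-policy-controller.
oc get configmaps -n openshift-kube-controller-manager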
Ying Zhou, we need your help to verify this bug, please check, thanks.
moving back to 'POST' so GH bot picks it up.
Confirmed with the latest payload, 4.2.0-0.nightly-2019-12-02-165545; the issue has been fixed.

Steps:
1. Log in as a normal user, create a project and apps.
2. Scale the CVO to 0, then scale the OCMO and OCM to 0 (a rough sketch of these commands follows the output below).
3. As the normal user, delete the running pods; the pods are recreated successfully:

[root@dhcp-140-138 yamlfile]# oc get po
NAME              READY   STATUS      RESTARTS   AGE
dctest-1-deploy   0/1     Completed   0          73s
dctest-1-r8hcw    2/2     Running     0          63s
[root@dhcp-140-138 yamlfile]# oc delete po dctest-1-r8hcw
pod "dctest-1-r8hcw" deleted
[root@dhcp-140-138 yamlfile]# oc get po
NAME              READY   STATUS              RESTARTS   AGE
dctest-1-7rncn    0/2     ContainerCreating   0          40s
dctest-1-deploy   0/1     Completed           0          25m
[root@dhcp-140-138 yamlfile]# oc get po
NAME              READY   STATUS      RESTARTS   AGE
dctest-1-7rncn    2/2     Running     0          2m49s
dctest-1-deploy   0/1     Completed   0          27m
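For reference, step 2 above might look roughly like the following; the deployment names and namespaces are assumptions for illustration and may differ by release.

# Sketch only: resource names and namespaces are assumptions.
# Scale down the cluster-version-operator so it does not restore the other operators.
oc scale deployment/cluster-version-operator -n openshift-cluster-version --replicas=0
# Scale down the openshift-controller-manager-operator (OCMO).
oc scale deployment/openshift-controller-manager-operator \
  -n openshift-controller-manager-operator --replicas=0
# Scale down the openshift-controller-manager (OCM) itself; depending on the release
# it may be a Deployment or DaemonSet named "controller-manager" in the
# openshift-controller-manager namespace.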
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:4093