+++ This bug was initially created as a clone of Bug #2065547 +++

This is a request to gather particular error message logs from kube-controller-manager containers. See for more info:
https://issues.redhat.com/browse/CCXDEV-7472
https://issues.redhat.com/browse/WRKLDS-358
Verified on 4.10.0-0.nightly-2022-03-23-025121.

Steps to reproduce (a consolidated shell sketch follows the list):

1. Create a new namespace called dc-test.
2. Get the UID from the YAML definition of the newly created namespace.
3. Create a new DeploymentConfig; you have to pass your UID in the ownerReferences field:

   apiVersion: apps.openshift.io/v1
   kind: DeploymentConfig
   metadata:
     name: example-aaa
     namespace: dc-test
     ownerReferences:
     - apiVersion: v1
       kind: Namespace
       name: dc-test
       uid: <PUT YOUR UUID>
   spec:
     selector:
       app: httpd
     replicas: 3
     template:
       metadata:
         labels:
           app: httpd
       spec:
         containers:
         - name: httpd
           image: >-
             image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest
           ports:
           - containerPort: 8080

4. Stop the OpenShift API server:

   oc patch openshiftapiservers.operator.openshift.io cluster --type merge --patch '{"spec": {"managementState": "Removed"}}'

5. Delete the dc-test namespace.
6. Bring the API server back so that you can read the logs (not sure if it's required):

   oc patch openshiftapiservers.operator.openshift.io cluster --type merge --patch '{"spec": {"managementState": "Managed"}}'

7. Restart the Insights Operator (IO).
8. Download the archive and navigate to config/pod/openshift-kube-controller-manager/logs/{pod-name}/errors.log. Check that the logs contain "syncing garbage collector with updated resources from discovery (attempt 1):".
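For convenience, a minimal shell sketch of the flow above, assuming a cluster-admin kubeconfig and the step 3 manifest saved as dc.yaml. The insights-operator pod label, the in-pod archive path /var/lib/insights-operator, and the archive file naming are assumptions, not taken from this report:

   # Steps 1-2: create the namespace and capture its UID for the ownerReferences field
   oc create namespace dc-test
   NS_UID=$(oc get namespace dc-test -o jsonpath='{.metadata.uid}')

   # Step 3: substitute the UID placeholder into the manifest and create the DeploymentConfig
   sed "s/<PUT YOUR UUID>/${NS_UID}/" dc.yaml | oc apply -f -

   # Step 4: stop the OpenShift API server
   oc patch openshiftapiservers.operator.openshift.io cluster --type merge \
     --patch '{"spec": {"managementState": "Removed"}}'

   # Step 5: delete the namespace while the OpenShift API server is down
   oc delete namespace dc-test --wait=false

   # Step 6: bring the OpenShift API server back
   oc patch openshiftapiservers.operator.openshift.io cluster --type merge \
     --patch '{"spec": {"managementState": "Managed"}}'

   # Step 7: restart the Insights Operator (label selector is an assumption)
   oc -n openshift-insights delete pod -l app=insights-operator

   # Step 8: copy the gathered archive out of the operator pod and check for the message
   # (archive location and naming are assumptions)
   IO_POD=$(oc -n openshift-insights get pods -o jsonpath='{.items[0].metadata.name}')
   oc -n openshift-insights cp "${IO_POD}:/var/lib/insights-operator" ./archives
   tar -xzf ./archives/insights-*.tar.gz -C ./archives
   grep -r "syncing garbage collector with updated resources from discovery (attempt 1):" \
     ./archives/config/pod/openshift-kube-controller-manager/logs/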
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.10.6 bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:1026