In the must-gather dump we should include the namespace of the service that each webhook points to. Check every MutatingWebhookConfiguration and ValidatingWebhookConfiguration, look for webhooks.clientConfig.service, and gather the data for that namespace.
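The lookup described above can be sketched as follows. This is a minimal illustration, assuming the webhook configurations have been exported as JSON (e.g. via 'oc get validatingwebhookconfiguration -o json'); the helper name and the sample data are hypothetical, though the nested field shape mirrors the Kubernetes API.

```python
def webhook_service_namespaces(config_list):
    """Collect the namespaces referenced via webhooks[].clientConfig.service."""
    namespaces = set()
    for config in config_list.get("items", []):
        for webhook in config.get("webhooks", []):
            # URL-based webhooks have clientConfig.url and no service reference
            service = webhook.get("clientConfig", {}).get("service")
            if service and "namespace" in service:
                namespaces.add(service["namespace"])
    return namespaces

# Hypothetical sample mimicking 'oc get ... -o json' output:
sample = {
    "items": [
        {"webhooks": [
            {"clientConfig": {"service": {"namespace": "openshift-monitoring",
                                          "name": "prometheus-operator"}}},
        ]},
        {"webhooks": [
            {"clientConfig": {"url": "https://example.invalid"}},  # no service
        ]},
    ]
}
print(sorted(webhook_service_namespaces(sample)))  # → ['openshift-monitoring']
```

A must-gather/inspect implementation would then add each collected namespace to the set of namespaces whose resources get dumped.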
This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity/priority. If you have further information on the current state of the bug, please update it, otherwise this bug can be closed in about 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant. Additionally, you can add LifecycleFrozen into Whiteboard if you think this bug should never be marked as stale. Please consult with bug assignee before you do that.
Verified with the bits below, and I see that the fix works fine.

[knarra@knarra tmp]$ ./oc version
Client Version: 4.12.0-0.ci.test-2022-10-07-105950-ci-ln-kshnd6b-latest
Kustomize Version: v4.5.4
Server Version: 4.12.0-0.nightly-2022-10-05-053337
Kubernetes Version: v1.25.0+3ef6ef3

Below are the steps followed to verify the fix:
===============================================
1. Build a payload from the PR using the command 'build openshift/oc#1258'.
2. Once the build is available, extract oc from the payload using the command 'oc adm release extract --command=oc registry.build05.ci.openshift.org/ci-ln-kshnd6b/release:latest --to=tmp/'.
3. Run 'oc get ValidatingWebhookConfiguration' and look through the YAML of every object displayed to see which namespaces their webhook services are in.
4. Repeat step 3 for 'oc get MutatingWebhookConfiguration'.
5. Run 'oc adm inspect clusteroperator/kube-apiserver'.
6. Verify that the inspect output contains all the namespaces seen in steps 3 & 4.

Examples:
=========
[knarra@knarra tmp]$ oc get ValidatingWebhookConfiguration
NAME                                          WEBHOOKS   AGE
alertmanagerconfigs.openshift.io              1          43m
autoscaling.openshift.io                      2          50m
controlplanemachineset.machine.openshift.io   1          50m
machine-api                                   2          50m
multus.openshift.io                           1          52m
performance-addon-operator                    1          53m
prometheusrules.openshift.io                  1          43m
snapshot.storage.k8s.io                       1          51m

[knarra@knarra tmp]$ oc get MutatingWebhookConfiguration
NAME          WEBHOOKS   AGE
machine-api   2          4h18m

Below are the namespaces where the services of these webhooks are present:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ValidatingWebhookConfiguration:
1. openshift-monitoring
2. openshift-machine-api
3. openshift-machine-api
4. openshift-multus
5. openshift-cluster-node-tuning-operator
6. openshift-monitoring
7. openshift-cluster-storage-operator

MutatingWebhookConfiguration:
1. openshift-machine-api

On running 'oc adm inspect' I see that all the above namespaces are present in the inspect output:
==================================================================================================
[knarra@knarra namespaces]$ ls -l
total 36
drwxrwxr-x. 15 knarra knarra 4096 Oct  7 17:22 openshift-cluster-node-tuning-operator
drwxrwxr-x. 15 knarra knarra 4096 Oct  7 17:22 openshift-cluster-storage-operator
drwxrwxr-x. 14 knarra knarra 4096 Oct  7 17:18 openshift-config
drwxrwxr-x. 14 knarra knarra 4096 Oct  7 17:18 openshift-config-managed
drwxrwxr-x. 15 knarra knarra 4096 Oct  7 17:19 openshift-kube-apiserver
drwxrwxr-x. 15 knarra knarra 4096 Oct  7 17:19 openshift-kube-apiserver-operator
drwxrwxr-x. 15 knarra knarra 4096 Oct  7 17:20 openshift-machine-api
drwxrwxr-x. 15 knarra knarra 4096 Oct  7 17:21 openshift-monitoring
drwxrwxr-x. 15 knarra knarra 4096 Oct  7 17:22 openshift-multus

Below are the steps performed to reproduce the issue before the fix:
====================================================================
1. Use the latest 4.12 client and server bits.
2. Run 'oc get ValidatingWebhookConfiguration' and look through the YAML of every object displayed to see which namespaces their webhook services are in.
3. Repeat step 2 for 'oc get MutatingWebhookConfiguration'.
4. Run 'oc adm inspect clusteroperator/kube-apiserver'.
5. Verify that the inspect output does not contain all the namespaces seen in steps 2 & 3.

oc adm inspect output:
======================
[knarra@knarra ~]$ cd inspect.local.5229439640076389689/namespaces/
[knarra@knarra namespaces]$ ls -l
total 16
drwxrwxr-x. 14 knarra knarra 4096 Oct  7 17:06 openshift-config
drwxrwxr-x. 14 knarra knarra 4096 Oct  7 17:07 openshift-config-managed
drwxrwxr-x. 15 knarra knarra 4096 Oct  7 17:08 openshift-kube-apiserver
drwxrwxr-x. 15 knarra knarra 4096 Oct  7 17:07 openshift-kube-apiserver-operator

Based on the above, setting the Verified flag to Tested.
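The final verification step amounts to a set comparison: every namespace referenced by a webhook service must appear among the namespace directories that 'oc adm inspect' dumped. A minimal sketch of that check, using the namespaces from the output above as stand-in data (on a real dump, the second set would come from listing the inspect directory, e.g. with os.listdir):

```python
# Namespaces referenced by the webhook services (from steps 3 & 4 above)
webhook_namespaces = {
    "openshift-monitoring",
    "openshift-machine-api",
    "openshift-multus",
    "openshift-cluster-node-tuning-operator",
    "openshift-cluster-storage-operator",
}
# Namespace directories written by 'oc adm inspect' (from the ls output above)
inspected_namespaces = {
    "openshift-cluster-node-tuning-operator",
    "openshift-cluster-storage-operator",
    "openshift-config",
    "openshift-config-managed",
    "openshift-kube-apiserver",
    "openshift-kube-apiserver-operator",
    "openshift-machine-api",
    "openshift-monitoring",
    "openshift-multus",
}
missing = webhook_namespaces - inspected_namespaces
print("fix verified" if not missing else f"missing from inspect: {sorted(missing)}")
# → fix verified
```

Running the same check against the pre-fix inspect output (which only contained the four openshift-config/kube-apiserver namespaces) would report the webhook namespaces as missing.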
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.12.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:7399