Description of problem:

When trying to create a KubeDescheduler instance, the operand pod does not get created and the following panic is observed in the operator log:

E0407 09:07:47.029627       1 runtime.go:78] Observed a panic: &fs.PathError{Op:"open", Path:"assets/kube-descheduler/operandserviceaccount.yaml", Err:(*errors.errorString)(0xc00007c040)} (open assets/kube-descheduler/operandserviceaccount.yaml: file does not exist)
goroutine 345 [running]:

Version-Release number of selected component (if applicable):

[knarra@knarra openshift-client-linux-4.11.0-0.nightly-2022-04-06-213816]$ ./oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-04-06-213816   True        False         137m    Cluster version is 4.11.0-0.nightly-2022-04-06-213816

[knarra@knarra openshift-client-linux-4.11.0-0.nightly-2022-04-06-213816]$ ./oc get csv -n openshift-kube-descheduler-operator
NAME                                                 DISPLAY                     VERSION               REPLACES   PHASE
clusterkubedescheduleroperator.4.11.0-202204062132   Kube Descheduler Operator   4.11.0-202204062132              Succeeded

How reproducible:
Always

Steps to Reproduce:
1. Install the latest 4.11 cluster
2. Install the descheduler operator
3. Create a KubeDescheduler instance

Actual results:
The KubeDescheduler instance appears in the UI, but it is not visible from the CLI, and on further inspection the operator logs show a panic.
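For reference, step 3 can be performed with a custom resource along these lines. This is a minimal sketch: the spec values shown are illustrative defaults, not the exact CR used in this report.

```yaml
apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  # Illustrative values; any valid profile/interval triggers the same reconcile path.
  deschedulingIntervalSeconds: 3600
  profiles:
  - AffinityAndTaints
```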
[knarra@knarra openshift-client-linux-4.11.0-0.nightly-2022-04-06-213816]$ ./oc get pods -n openshift-kube-descheduler-operator
NAME                                   READY   STATUS    RESTARTS   AGE
descheduler-operator-ffcf496bc-g4rss   1/1     Running   0          11m

E0407 09:07:47.029627       1 runtime.go:78] Observed a panic: &fs.PathError{Op:"open", Path:"assets/kube-descheduler/operandserviceaccount.yaml", Err:(*errors.errorString)(0xc00007c040)} (open assets/kube-descheduler/operandserviceaccount.yaml: file does not exist)
goroutine 345 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x20a47c0, 0xc0008b7410})
	k8s.io/apimachinery.0/pkg/util/runtime/runtime.go:74 +0x7d
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x26bba90})
	k8s.io/apimachinery.0/pkg/util/runtime/runtime.go:48 +0x75
panic({0x20a47c0, 0xc0008b7410})
	runtime/panic.go:1038 +0x215
github.com/openshift/cluster-kube-descheduler-operator/bindata.MustAsset(...)
	github.com/openshift/cluster-kube-descheduler-operator/bindata/assets.go:20
github.com/openshift/cluster-kube-descheduler-operator/pkg/operator.(*TargetConfigReconciler).manageServiceAccount(0xc000729c98, 0xc000c32b40)
	github.com/openshift/cluster-kube-descheduler-operator/pkg/operator/target_config_reconciler.go:255 +0x1c7
github.com/openshift/cluster-kube-descheduler-operator/pkg/operator.TargetConfigReconciler.sync({{0x2717850, 0xc000614280}, {0xc00005c006, 0x75}, {0x26ed990, 0xc00093c6a0}, 0xc0009277d0, {0x2781fc8, 0xc00073ca80}, {0x26cc180, ...}, ...})
	github.com/openshift/cluster-kube-descheduler-operator/pkg/operator/target_config_reconciler.go:155 +0x30f
github.com/openshift/cluster-kube-descheduler-operator/pkg/operator.(*TargetConfigReconciler).processNextWorkItem(0xc000a781e0)
	github.com/openshift/cluster-kube-descheduler-operator/pkg/operator/target_config_reconciler.go:534 +0x138
github.com/openshift/cluster-kube-descheduler-operator/pkg/operator.(*TargetConfigReconciler).runWorker(...)
	github.com/openshift/cluster-kube-descheduler-operator/pkg/operator/target_config_reconciler.go:523
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000d71f00)
	k8s.io/apimachinery.0/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00052bae8, {0x26cbcc0, 0xc0009fc1b0}, 0x1, 0xc0009a38c0)
	k8s.io/apimachinery.0/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x19260, 0x3b9aca00, 0x0, 0xe8, 0x445705)
	k8s.io/apimachinery.0/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x9c2720, 0xc0004be010, 0xc0009497b8)
	k8s.io/apimachinery.0/pkg/util/wait/wait.go:90 +0x25
created by github.com/openshift/cluster-kube-descheduler-operator/pkg/operator.(*TargetConfigReconciler).Run
	github.com/openshift/cluster-kube-descheduler-operator/pkg/operator/target_config_reconciler.go:517 +0x225
I0407 09:07:47.029686       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-descheduler-operator", Name:"descheduler-operator", UID:"e1dc3943-acec-485c-ab91-c88cfec0991b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ServiceUpdated' Updated Service/metrics -n openshift-kube-descheduler-operator because it changed
I0407 09:07:47.029703       1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-descheduler-operator", Name:"descheduler-operator", UID:"e1dc3943-acec-485c-ab91-c88cfec0991b", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'Openshift-Cluster-Kube-Descheduler-OperatorPanic' Panic observed: open assets/kube-descheduler/operandserviceaccount.yaml: file does not exist
E0407 09:07:48.588071       1 runtime.go:78] Observed a panic: &fs.PathError{Op:"open", Path:"assets/kube-descheduler/operandserviceaccount.yaml", Err:(*errors.errorString)(0xc00007c040)} (open assets/kube-descheduler/operandserviceaccount.yaml: file does not exist)

Expected results:
No panic should be observed, and the KubeDescheduler instance should be created successfully.

Additional info:
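The stack trace points at the operator's bindata `MustAsset` helper, which panics when asked for an asset path that was never compiled into the binary (for example, a manifest renamed on disk without regenerating the bindata). A minimal sketch of that failure mode follows; the asset table and its contents here are hypothetical stand-ins for the generated code:

```go
package main

import "fmt"

// _bindata stands in for the go-bindata-generated lookup table; the real
// generated code embeds the operator's manifest files keyed by path.
// This single entry and its contents are hypothetical.
var _bindata = map[string][]byte{
	"assets/kube-descheduler/serviceaccount.yaml": []byte("kind: ServiceAccount"),
}

// MustAsset mirrors the go-bindata helper seen in the stack trace: it
// returns the embedded file, or panics when the path is not in the table.
func MustAsset(name string) []byte {
	data, ok := _bindata[name]
	if !ok {
		panic(fmt.Errorf("open %s: file does not exist", name))
	}
	return data
}

func main() {
	// A path present in the table resolves normally.
	fmt.Println(string(MustAsset("assets/kube-descheduler/serviceaccount.yaml")))

	// A path missing from the table panics at runtime, which is what the
	// reconciler hit for operandserviceaccount.yaml above.
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	MustAsset("assets/kube-descheduler/operandserviceaccount.yaml")
}
```

Because the panic only fires on first lookup at reconcile time, a mismatch like this passes compilation and only surfaces once a KubeDescheduler instance triggers the sync loop.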
Verified the bug in the build below; the kube descheduler operator works fine.

[knarra@knarra ~]$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-04-16-163450   True        False         175m    Cluster version is 4.11.0-0.nightly-2022-04-16-163450

[knarra@knarra ~]$ oc get csv -n openshift-kube-descheduler-operator
NAME                                                 DISPLAY                     VERSION               REPLACES   PHASE
clusterkubedescheduleroperator.4.11.0-202204081947   Kube Descheduler Operator   4.11.0-202204081947              Succeeded

[knarra@knarra ~]$ oc get pods -n openshift-kube-descheduler-operator
NAME                                    READY   STATUS    RESTARTS   AGE
descheduler-6d8c6f589d-hd7qj            1/1     Running   0          2m14s
descheduler-operator-69456cfb89-jkmrn   1/1     Running   0          2m47s

Based on the above, moving the bug to the verified state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:5069