Description of problem:
cluster pod log reports error "failed to validate server configuration" err="unsupported log format:"

Version-Release number of selected component (if applicable):
[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-11-30-172451]$ ./oc get csv
NAME                                                   DISPLAY                     VERSION                 REPLACES   PHASE
clusterkubedescheduleroperator.4.7.0-202011261728.p0   Kube Descheduler Operator   4.7.0-202011261728.p0              Succeeded

How reproducible:
Always

Steps to Reproduce:
1. Install latest 4.7 cluster
2. Install kubedescheduler operator
3. Create kubedescheduler object
4. Run oc logs -f <cluster_pod>

Actual results:
cluster pod logs report the error below:
[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-11-30-172451]$ ./oc logs -f cluster-7fdcffcd8b-nscjf
E1201 13:45:16.817571       1 server.go:50] "failed to validate server configuration" err="unsupported log format: "

Expected results:
cluster pod log should not report any error as above.

Additional info:
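For context, the empty value after the colon in `err="unsupported log format: "` suggests the logging-format option was never set and the empty string fell through validation. The following is a minimal hypothetical Go sketch of that failure mode, not the actual descheduler code: `validateLogFormat` and its supported set are assumptions for illustration only.

```go
package main

import "fmt"

// validateLogFormat is a hypothetical sketch of the kind of check behind the
// reported message. It accepts only the formats in `supported`; an unset flag
// arrives as the empty string and is rejected, which yields an error ending
// in "unsupported log format: " with nothing after the colon.
func validateLogFormat(format string) error {
	supported := map[string]bool{"text": true, "json": true}
	if !supported[format] {
		return fmt.Errorf("unsupported log format: %s", format)
	}
	return nil
}

func main() {
	// Empty value (flag never set) fails validation, matching the bug.
	if err := validateLogFormat(""); err != nil {
		fmt.Println(err)
	}
	// An explicit supported value passes.
	if err := validateLogFormat("text"); err == nil {
		fmt.Println("format accepted")
	}
}
```

Under this assumption, the fix would be either to default the empty value to a supported format or to stop treating an unset flag as an error.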
Did not find the changes in the latest csv clusterkubedescheduleroperator.4.7.0-202012031911.p0; will wait for the next operator respin before moving the bug to ASSIGNED.
Moving the bug back to ASSIGNED as I still see the same error in the latest builds.

[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-12-04-013308]$ ./oc get csv
NAME                                                   DISPLAY                     VERSION                 REPLACES   PHASE
clusterkubedescheduleroperator.4.7.0-202012050255.p0   Kube Descheduler Operator   4.7.0-202012050255.p0              Succeeded

[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-12-04-013308]$ ./oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2020-12-04-013308   True        False         13h     Cluster version is 4.7.0-0.nightly-2020-12-04-013308

Images used:
registry.redhat.io/openshift4/ose-descheduler@sha256:1da501059d77a6fa72e6d10b0b1a7a0cc50f2abdffa07daef742b77c889964ea
registry.redhat.io/openshift4/ose-cluster-kube-descheduler-operator@sha256:3585a22428dd6fb2cd3b363667b134e1374dd250a6bc381ff665003e9a303381

[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-12-04-013308]$ ./oc logs -f cluster-847dc7fdb6-7pmxz -n openshift-kube-descheduler-operator
E1207 16:32:48.493237       1 server.go:50] "failed to validate server configuration" err="unsupported log format: "
I1207 16:32:48.699381       1 node.go:46] "Node lister returned empty list, now fetch directly"
Verified the bug in the payload below; I see no log-format error in the cluster pod logs.

[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-12-09-112139]$ ./oc get csv -n openshift-kube-descheduler-operator
NAME                                                   DISPLAY                     VERSION                 REPLACES   PHASE
clusterkubedescheduleroperator.4.7.0-202012082225.p0   Kube Descheduler Operator   4.7.0-202012082225.p0              Succeeded

[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-12-09-112139]$ ./oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2020-12-09-112139   True        False         8h      Cluster version is 4.7.0-0.nightly-2020-12-09-112139

[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-12-09-112139]$ ./oc logs -f cluster-cc57c67cb-rz59s -n openshift-kube-descheduler-operator
I1210 14:00:00.344308       1 node.go:46] "Node lister returned empty list, now fetch directly"
I1210 14:00:00.529255       1 topologyspreadconstraint.go:109] "Processing namespaces for topology spread constraints"
I1210 14:00:00.559853       1 duplicates.go:83] "Processing node" node="ip-10-0-155-208.us-east-2.compute.internal"
I1210 14:00:00.645307       1 duplicates.go:83] "Processing node" node="ip-10-0-158-78.us-east-2.compute.internal"
I1210 14:00:00.663907       1 duplicates.go:83] "Processing node" node="ip-10-0-174-144.us-east-2.compute.internal"
I1210 14:00:00.683086       1 duplicates.go:83] "Processing node" node="ip-10-0-183-124.us-east-2.compute.internal"
I1210 14:00:00.764424       1 duplicates.go:83] "Processing node" node="ip-10-0-207-144.us-east-2.compute.internal"
I1210 14:00:00.988455       1 duplicates.go:83] "Processing node" node="ip-10-0-222-186.us-east-2.compute.internal"
I1210 15:00:01.163581       1 node.go:46] "Node lister returned empty list, now fetch directly"
I1210 15:00:01.188512       1 topologyspreadconstraint.go:109] "Processing namespaces for topology spread constraints"
I1210 15:00:01.216486       1 duplicates.go:83] "Processing node" node="ip-10-0-155-208.us-east-2.compute.internal"
I1210 15:00:01.236649       1 duplicates.go:83] "Processing node" node="ip-10-0-158-78.us-east-2.compute.internal"
I1210 15:00:01.255570       1 duplicates.go:83] "Processing node" node="ip-10-0-174-144.us-east-2.compute.internal"
I1210 15:00:01.383674       1 duplicates.go:83] "Processing node" node="ip-10-0-183-124.us-east-2.compute.internal"
I1210 15:00:01.583098       1 duplicates.go:83] "Processing node" node="ip-10-0-207-144.us-east-2.compute.internal"
I1210 15:00:01.782810       1 duplicates.go:83] "Processing node" node="ip-10-0-222-186.us-east-2.compute.internal"

Based on the above, moving the bug to VERIFIED state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:5633