Description of problem:
None of the Cinder CSI driver pods define resource requests for CPU/memory, so they all run with the BestEffort QoS class. This also causes an e2e test failure:

"Managed cluster should ensure control plane pods do not run in best-effort QoS [Suite:openshift/conformance/parallel]"
https://prow.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-openstack-4.7/1333229426166468608

Version-Release number of selected component (if applicable):
4.7.0-0.nightly-2020-11-29-133728

Steps to Reproduce:
1. Install an OpenStack (OSP) cluster; the Cinder CSI driver gets installed.
2. Check the CSI driver pods:
   oc -n openshift-cluster-csi-drivers get pod -o yaml

Actual results:
The Cinder CSI driver pods run with the BestEffort QoS class:

oc get pod openstack-cinder-csi-driver-controller-68c89bf4c5-lmw74 openstack-cinder-csi-driver-node-42svt openstack-cinder-csi-driver-operator-77c787bb77-2qxg9 -o yaml | grep qosClass
  qosClass: BestEffort
  qosClass: BestEffort
  qosClass: BestEffort

Expected results:
Resource requests for CPU/memory should be defined for all Cinder CSI driver pods (controller, node, and operator), roughly along the lines of the sketch below.
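As a rough sketch of what the fix would need to add, each container in the controller, node, and operator pods should carry CPU/memory requests like the fragment below. The container name and the 10m/50Mi values are illustrative placeholders, not the values from the actual patch; setting requests (without limits) on the containers is enough to move a pod from BestEffort to the Burstable QoS class.

  # Illustrative fragment of a container spec in the driver Deployment/DaemonSet.
  # The request values are placeholders, not the ones chosen in the real fix.
  spec:
    containers:
    - name: csi-driver          # hypothetical container name
      resources:
        requests:
          cpu: 10m
          memory: 50Mi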
I think this is what has been perma-failing a CI job for the past week or two and causing a negative trend in our CI release metrics:
https://testgrid.k8s.io/redhat-openshift-ocp-release-4.7-informing#release-openshift-ocp-installer-e2e-openstack-4.7

@Wei, could you confirm that please?

Here is the must-gather:
https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-openstack-4.7/1334497702402068480/artifacts/e2e-openstack/must-gather.tar

and a grep on the CSI drivers namespace:

❯ pwd
/tmp/quay-io-openshift-release-dev-ocp-v4-0-art-dev-sha256-fbf66d3fd84d22c10949e9b6e4e8255399a24daa88f43626938c520cc799148d/namespaces/openshift-cluster-csi-drivers
❯ rg -i qos * | rg BestEffort
pods/openstack-cinder-csi-driver-node-gqmzg/openstack-cinder-csi-driver-node-gqmzg.yaml: qosClass: BestEffort
pods/openstack-cinder-csi-driver-controller-5bfc8fdf46-k9h6f/openstack-cinder-csi-driver-controller-5bfc8fdf46-k9h6f.yaml: qosClass: BestEffort
pods/openstack-cinder-csi-driver-operator-7f968db7f-hkvmj/openstack-cinder-csi-driver-operator-7f968db7f-hkvmj.yaml: qosClass: BestEffort
pods/openstack-cinder-csi-driver-node-nsffx/openstack-cinder-csi-driver-node-nsffx.yaml: qosClass: BestEffort
pods/openstack-cinder-csi-driver-node-hn5pp/openstack-cinder-csi-driver-node-hn5pp.yaml: qosClass: BestEffort
core/pods.yaml: qosClass: BestEffort
core/pods.yaml: qosClass: BestEffort
core/pods.yaml: qosClass: BestEffort
core/pods.yaml: qosClass: BestEffort
core/pods.yaml: qosClass: BestEffort
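For a live cluster, the same check can be done directly against the API instead of a must-gather; this is just a sketch using oc and jq to list any pods in the namespace whose QoS class is BestEffort:

oc -n openshift-cluster-csi-drivers get pod -o json \
  | jq -r '.items[] | select(.status.qosClass == "BestEffort") | .metadata.name'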
@luhrsen, yes, I think we are talking about the same thing here.
Verified as passing on 4.7.0-0.nightly-2020-12-09-112139:

oc -n openshift-cluster-csi-drivers get pod -o json | jq .items[].status.qosClass
"Burstable"
"Burstable"
"Burstable"
"Burstable"
"Burstable"
"Burstable"
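To additionally confirm that the requests themselves are populated (not just the resulting QoS class), something along these lines should also work; the jq path here is a sketch:

oc -n openshift-cluster-csi-drivers get pod -o json \
  | jq '.items[].spec.containers[].resources.requests'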
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633