Bug 1902601 - Cinder csi driver pods run as BestEffort qosClass
Summary: Cinder csi driver pods run as BestEffort qosClass
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 4.7.0
Assignee: Martin André
QA Contact: Wei Duan
Depends On:
Reported: 2020-11-30 07:10 UTC by Wei Duan
Modified: 2021-02-24 15:36 UTC
CC: 4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Last Closed: 2021-02-24 15:36:33 UTC
Target Upstream Version:

Attachments (Terms of Use)

System ID Private Priority Status Summary Last Updated
Github openshift cluster-storage-operator pull 108 0 None closed Bug 1902601: Fix resources in cinder csi deployment template 2021-01-13 14:47:56 UTC
Red Hat Product Errata RHSA-2020:5633 0 None None None 2021-02-24 15:36:56 UTC

Description Wei Duan 2020-11-30 07:10:52 UTC
Description of problem:
None of the Cinder CSI driver pods define resource requests for cpu/memory, so they run with the BestEffort qosClass.
This also causes an e2e case to fail: Managed cluster should ensure control plane pods do not run in best-effort QoS [Suite:openshift/conformance/parallel], see https://prow.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-openstack-4.7/1333229426166468608
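For context, the kubelet derives a pod's qosClass from the resource settings of its containers: pods with no requests or limits at all become BestEffort. The helper below is a simplified sketch of that classification rule (not the actual kubelet code; container specs are modeled as plain dicts for illustration):

```python
def pod_qos_class(containers):
    """Approximate the Kubernetes QoS classification rule:
    - BestEffort: no container sets any cpu/memory request or limit
    - Guaranteed: every container sets limits equal to requests for both
      cpu and memory
    - Burstable:  everything in between
    Each container is a dict like {"requests": {...}, "limits": {...}}.
    """
    resources = ("cpu", "memory")
    any_set = False       # did any container set any request/limit?
    guaranteed = True     # do all containers have requests == limits?
    for c in containers:
        req = c.get("requests", {})
        lim = c.get("limits", {})
        for r in resources:
            if r in req or r in lim:
                any_set = True
            if req.get(r) is None or req.get(r) != lim.get(r):
                guaranteed = False
    if not any_set:
        return "BestEffort"
    return "Guaranteed" if guaranteed else "Burstable"
```

Adding only requests (no limits), as the fix for this bug does, moves the pods from BestEffort to Burstable.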

Version-Release number of selected component (if applicable):

Steps to Reproduce:
1. Install a cluster on OpenStack (OSP) so that the cinder CSI driver is installed.

2. Check CSI driver pods:
   oc -n openshift-cluster-csi-drivers get pod -o yaml

Actual results:
Cinder CSI driver node pods run as BestEffort qosClass
oc get pod openstack-cinder-csi-driver-controller-68c89bf4c5-lmw74 openstack-cinder-csi-driver-node-42svt openstack-cinder-csi-driver-operator-77c787bb77-2qxg9 -o yaml | grep qosClass
    qosClass: BestEffort
    qosClass: BestEffort
    qosClass: BestEffort

Expected results:
We need to define the resource requests for cpu/memory for Cinder CSI driver related pods.
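A minimal sketch of the kind of change expected in the deployment templates; the resource values here are purely illustrative, not the ones the actual fix (openshift/cluster-storage-operator PR 108) uses:

```yaml
# Illustrative only: adding requests (without limits) to each container in
# the cinder CSI driver/operator manifests makes the pods Burstable
# instead of BestEffort.
containers:
- name: csi-driver
  resources:
    requests:
      cpu: 10m
      memory: 50Mi
```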

Comment 1 jamo luhrsen 2020-12-03 21:48:37 UTC
I think this is what has been perma-failing a CI job for the past week or two and has caused a negative trend in
our CI release metrics:


@Wei, could you confirm that please?

here is the must-gather:


and a grep on csi drivers namespace:

❯ pwd
❯ rg -i qos * | rg BestEffort
pods/openstack-cinder-csi-driver-node-gqmzg/openstack-cinder-csi-driver-node-gqmzg.yaml:  qosClass: BestEffort
pods/openstack-cinder-csi-driver-controller-5bfc8fdf46-k9h6f/openstack-cinder-csi-driver-controller-5bfc8fdf46-k9h6f.yaml:  qosClass: BestEffort
pods/openstack-cinder-csi-driver-operator-7f968db7f-hkvmj/openstack-cinder-csi-driver-operator-7f968db7f-hkvmj.yaml:  qosClass: BestEffort
pods/openstack-cinder-csi-driver-node-nsffx/openstack-cinder-csi-driver-node-nsffx.yaml:  qosClass: BestEffort
pods/openstack-cinder-csi-driver-node-hn5pp/openstack-cinder-csi-driver-node-hn5pp.yaml:  qosClass: BestEffort
core/pods.yaml:    qosClass: BestEffort
core/pods.yaml:    qosClass: BestEffort
core/pods.yaml:    qosClass: BestEffort
core/pods.yaml:    qosClass: BestEffort
core/pods.yaml:    qosClass: BestEffort

Comment 2 Wei Duan 2020-12-04 01:52:52 UTC
@luhrsen, yes, I think we are talking about the same thing here.

Comment 4 Wei Duan 2020-12-10 14:02:45 UTC
Verified pass on 4.7.0-0.nightly-2020-12-09-112139

oc -n openshift-cluster-csi-drivers get pod -o json | jq .items[].status.qosClass

Comment 8 errata-xmlrpc 2021-02-24 15:36:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

