Bug 2250883 - [Backport to 4.13] CSI pods and customer workloads both have 'priority=0' and race for resources
Summary: [Backport to 4.13] CSI pods and customer workloads both have 'priority=0' and race for resources
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.13
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.13.7
Assignee: Nitin Goyal
QA Contact: Aviad Polak
URL:
Whiteboard:
Depends On: 2232464 2250884
Blocks: 2244409
 
Reported: 2023-11-21 16:44 UTC by Eran Tamir
Modified: 2024-01-30 08:52 UTC
CC List: 11 users

Fixed In Version: 4.13.7-1
Doc Type: Bug Fix
Doc Text:
Clone Of: 2232464
Environment:
Last Closed: 2024-01-29 08:22:12 UTC
Embargoed:




Links
Github red-hat-storage/ocs-operator pull 2148 (Merged): set priority class for csi pods (last updated 2023-11-22 05:23:22 UTC)
Github red-hat-storage/ocs-operator pull 2332 (open): Bug 2250883: [release-4.13] set priority class for csi pods (last updated 2023-12-19 12:31:06 UTC)
Red Hat Product Errata RHBA-2024:0540 (last updated 2024-01-29 08:22:20 UTC)

Description Eran Tamir 2023-11-21 16:44:13 UTC
+++ This bug was initially created as a clone of Bug #2232464 +++

Description of problem (please be as detailed as possible and provide log
snippets):
- CSI pods have priority 0 instead of using OpenShift priority classes (https://docs.openshift.com/container-platform/4.13/nodes/pods/nodes-pods-priority.html). Customer workloads also have priority 0 by default, which leads to a race for scheduling and resources.
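For illustration, a minimal sketch of what using one of the built-in OpenShift priority classes looks like in a pod spec (the pod name, namespace, and image below are placeholders, not taken from this bug); without a priorityClassName, the pod gets priority 0 just like ordinary customer workloads:

  apiVersion: v1
  kind: Pod
  metadata:
    name: example-csi-plugin          # hypothetical name, for illustration only
    namespace: openshift-storage
  spec:
    # Omitting this field leaves the pod at the default priority of 0,
    # so it competes on equal terms with customer workloads.
    priorityClassName: system-node-critical
    containers:
    - name: plugin
      image: example.com/csi-plugin:latest   # placeholder image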

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
- The customer has upcoming activities during which this issue may occur again.


Is there any workaround available to the best of your knowledge?
1. The most practical workaround is to move a pod that has at least 350 MB of requests to a different node.

2. If the first workaround does not help, contact us (Red Hat) with the node name so that the whole node can be recycled with help from SRE.


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)? 3


Is this issue reproducible? Yes
This was triggered via a DR testing scenario, so it can likely be reproduced by quickly spinning up full workloads on an empty cluster.

Can this issue be reproduced from the UI?
- Not sure


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy CSI pods and customer workloads at the same time.
2. Repeat until the race is hit and CSI pods cannot be scheduled.


Actual results:
CSI pods cannot be scheduled when customer workloads are scheduled first.

Expected results:
CSI pods are scheduled before customer workloads

Additional info:
Per SRE: from the OCP point of view, the AWS EBS CSI driver on ROSA should have enough priority; linking the 4.11 YAMLs (see the sketch after the links):

https://github.com/openshift/aws-ebs-csi-driver-operator/blob/release-4.11/assets/node.yaml#L24

https://github.com/openshift/aws-ebs-csi-driver-operator/blob/release-4.11/assets/controller.yaml#L25
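For context, those assets set priorityClassName on the driver pod templates, roughly along these lines (paraphrased from the linked files, so treat the exact values as an approximation rather than a quote):

  # node.yaml (DaemonSet pod template): per-node plugin pods
  priorityClassName: system-node-critical

  # controller.yaml (Deployment pod template): controller/provisioner pods
  priorityClassName: system-cluster-critical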

--- Additional comment from RHEL Program Management on 2023-08-17 03:14:10 UTC ---

This bug, having no release flag set previously, now has the release flag 'odf-4.14.0' set to '?', and so is proposed to be fixed in the ODF 4.14.0 release. Note that the 3 Acks (pm_ack, devel_ack, qa_ack), if any were previously set while the release flag was missing, have now been reset, since the Acks are to be set against a release flag.

--- Additional comment from Venky Shankar on 2023-08-17 04:24:50 UTC ---

This seems CSI related - moving component.

--- Additional comment from Madhu Rajanna on 2023-08-17 04:37:37 UTC ---

CSI pods don't have priority 0 upstream; rather, they have a priority class (https://github.com/rook/rook/blob/master/deploy/examples/operator.yaml#L120-L124), and this can also be set by ocs-operator.
Moving it to ocs-operator, as it creates the Rook ConfigMap.
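As an illustration of that mechanism, a minimal sketch of the corresponding settings in the ConfigMap that ocs-operator creates for Rook, assuming the upstream keys CSI_PROVISIONER_PRIORITY_CLASSNAME and CSI_PLUGIN_PRIORITY_CLASSNAME from the linked operator.yaml are the ones applied downstream:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: rook-ceph-operator-config    # the Rook ConfigMap managed by ocs-operator
    namespace: openshift-storage
  data:
    # Controller-side provisioner pods
    CSI_PROVISIONER_PRIORITY_CLASSNAME: "system-cluster-critical"
    # Per-node plugin pods
    CSI_PLUGIN_PRIORITY_CLASSNAME: "system-node-critical"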

--- Additional comment from RHEL Program Management on 2023-08-17 08:17:41 UTC ---

This BZ is being approved for the ODF 4.14.0 release, upon receipt of the 3 ACKs (PM, Devel, QA) for the release flag 'odf-4.14.0'.

--- Additional comment from RHEL Program Management on 2023-08-17 08:17:41 UTC ---

Since this bug has been approved for the ODF 4.14.0 release through the release flag 'odf-4.14.0+', the Target Release is being set to 'ODF 4.14.0'.

--- Additional comment from errata-xmlrpc on 2023-08-18 06:28:37 UTC ---

This bug has been added to advisory RHBA-2023:115514 by ceph-build service account (ceph-build.COM)

--- Additional comment from Aviad Polak on 2023-08-24 09:37:32 UTC ---

LGTM in build full_version: 4.14.0-112

priority: 2000001000
priorityClassName: system-node-critical

--- Additional comment from errata-xmlrpc on 2023-11-08 17:54:16 UTC ---

Bug report changed to RELEASE_PENDING status by Errata System.
Advisory RHSA-2023:115514-11 has been changed to PUSH_READY status.
https://errata.devel.redhat.com/advisory/115514

--- Additional comment from errata-xmlrpc on 2023-11-08 18:54:15 UTC ---

Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.14.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6832

Comment 10 errata-xmlrpc 2024-01-29 08:22:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.13.7 Bug Fix Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:0540

