Bug 2232464 - CSI pods and customer workloads both have 'priority=0' and race for resources
Summary: CSI pods and customer workloads both have 'priority=0' and race for resources
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.14.0
Assignee: Nitin Goyal
QA Contact: Aviad Polak
URL:
Whiteboard:
Depends On:
Blocks: 2244409 2250883 2250884
 
Reported: 2023-08-17 03:14 UTC by rdomingo
Modified: 2024-01-17 05:53 UTC
CC List: 7 users

Fixed In Version: 4.14.0-111
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2250883 2250884
Environment:
Last Closed: 2023-11-08 18:54:15 UTC
Embargoed:


Attachments


Links:
- Github red-hat-storage/ocs-operator pull 2148 (Merged): set priority class for csi pods (last updated 2023-08-22 09:01:47 UTC)
- Github red-hat-storage/ocs-operator pull 2149 (Merged): Bug 2232464: [release-4.14] set priority class for csi pods (last updated 2023-08-22 09:01:56 UTC)
- Red Hat Product Errata RHSA-2023:6832 (last updated 2023-11-08 18:55:04 UTC)

Description rdomingo 2023-08-17 03:14:01 UTC
Description of problem (please be as detailed as possible and provide log snippets):
- CSI pods have priority 0 instead of using OpenShift priority classes (https://docs.openshift.com/container-platform/4.13/nodes/pods/nodes-pods-priority.html). Customer workloads also default to priority 0, so CSI pods and customer workloads race for scheduling and resources.
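
For illustration, a minimal sketch of the priority-class mechanism from the linked docs (the class, pod, and image names below are hypothetical; OpenShift also ships built-in classes such as system-cluster-critical and system-node-critical):

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: example-csi-priority   # hypothetical name
value: 1000000                 # pods using this class outrank the default priority of 0
globalDefault: false
description: "Illustrative priority class for CSI pods"
---
apiVersion: v1
kind: Pod
metadata:
  name: example-csi-pod        # hypothetical name
spec:
  priorityClassName: example-csi-priority   # omit this and .spec.priority resolves to 0
  containers:
  - name: csi-driver
    image: example.io/csi-driver:latest     # placeholder image

A pod that does not set priorityClassName ends up with .spec.priority 0 (unless a globalDefault class exists), which is why the CSI pods and customer workloads currently compete on equal footing.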

Does this issue impact your ability to continue to work with the product (please explain in detail what the user impact is)?
- The customer has upcoming activities during which this issue may occur again.


Is there any workaround available to the best of your knowledge?
1. The most practical workaround is to move a pod that has at least 350 MB of resource requests to a different node.

2. If the first option does not help, contact us (Red Hat) with the node name so that the whole node can be recycled with help from SRE.


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)? 3


Is this issue reproducible? Yes.
This was triggered via a DR testing scenario, so the reproducer is likely to quickly spin up full workloads on an empty cluster.

Can this issue be reproduced from the UI?
- Not sure


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Apply CSI pods and customer workloads at the same time.
2. Repeat until the race is hit and the CSI pods cannot be scheduled.


Actual results:
CSI pods cannot be scheduled when customer workloads claim the resources first

Expected results:
CSI pods are scheduled before customer workloads
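
For context, a hedged sketch of how the fix might surface in the cluster: upstream Rook exposes CSI priority-class settings through the rook-ceph-operator-config ConfigMap, so the ocs-operator change presumably ends up populating keys along these lines (the exact mechanism and class values are assumptions, not taken from the linked PRs):

apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config
  namespace: openshift-storage
data:
  # Assumed values: Rook documents these keys for CSI pod priority classes,
  # but the classes actually chosen by the fix may differ.
  CSI_PLUGIN_PRIORITY_CLASSNAME: "system-node-critical"
  CSI_PROVISIONER_PRIORITY_CLASSNAME: "system-cluster-critical"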

Additional info:
Per SRE: from the OCP point of view, the AWS EBS CSI driver on ROSA should have enough priority. Linking the 4.11 YAMLs:

https://github.com/openshift/aws-ebs-csi-driver-operator/blob/release-4.11/assets/node.yaml#L24

https://github.com/openshift/aws-ebs-csi-driver-operator/blob/release-4.11/assets/controller.yaml#L25
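
For comparison, those assets pin the EBS CSI pods to built-in priority classes; the referenced lines should look roughly like the following excerpts (reconstructed from memory, so treat the exact values as assumptions and check the linked files):

# node.yaml - assumed excerpt of the DaemonSet pod template
kind: DaemonSet
spec:
  template:
    spec:
      priorityClassName: system-node-critical
---
# controller.yaml - assumed excerpt of the Deployment pod template
kind: Deployment
spec:
  template:
    spec:
      priorityClassName: system-cluster-critical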

Comment 9 errata-xmlrpc 2023-11-08 18:54:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.14.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6832

