Bug 1965562

Summary: recycler-for-nfs-... does not set requests or priorityClassName
Product: OpenShift Container Platform
Component: Storage
Storage sub component: Kubernetes
Version: 4.8
Target Milestone: ---
Target Release: 4.9.0
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: medium
Priority: unspecified
Reporter: W. Trevor King <wking>
Assignee: Jonathan Dobson <jdobson>
QA Contact: Wei Duan <wduan>
CC: aos-bugs, jsafrane, mfojtik
Whiteboard: tag-ci
Doc Type: No Doc Update
Type: Bug
Last Closed: 2021-10-18 17:32:11 UTC

Description W. Trevor King 2021-05-28 05:18:24 UTC
Both issues (the missing requests and the missing priorityClassName) cause sporadic CI failures, like:

$ curl -s 'https://search.ci.openshift.org/search?search=pods+found+with+invalid+priority+class&maxAge=96h&context=4&type=junit' | jq -r 'to_entries[].value | to_entries[].value[].context[]' | sed -n 's/\(.*\)-[a-z0-9]*\( (currently.*\)/\1...\2/p' | sort | uniq -c
      1 kube-system/openvpn-client-649bcb9d98... (currently "")
     12 openshift-infra/recycler-for-nfs... (currently "")

because [1] is not using one of the recommended priorityClassName values [2].  And, from [3]:

: [sig-arch] Managed cluster should set requests but not limits [Suite:openshift/conformance/parallel]	1s
fail [github.com/onsi/ginkgo.0-origin.0+incompatible/internal/leafnodes/runner.go:113]: May 27 02:36:37.668: Pods in platform namespaces are not following resource request/limit rules or do not have an exception granted:
  v1/Pod/openshift-infra/recycler-for-nfs-hc8cz/container/recycler-container does not have a cpu request (rule: "v1/Pod/openshift-infra/recycler-for-nfs-hc8cz/container/recycler-container/request[cpu]")
  v1/Pod/openshift-infra/recycler-for-nfs-hc8cz/container/recycler-container does not have a memory request (rule: "v1/Pod/openshift-infra/recycler-for-nfs-hc8cz/container/recycler-container/request[memory]")

Setting requests will help the scheduler find a reasonable node to host the pod.  This seems unambiguously useful.

Setting openshift-user-critical for priorityClassName will slot you in above most user workloads.  That seems reasonable too, as long as the pod doesn't take up too many resources.  I also don't see a problem with either carving yourself out an exception in [4], or updating [4] to add some new really-low-priority class name, if either of those sounds more appealing.
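As an illustrative sketch only (not the actual patch to [1]), the recycler pod template could gain both fields roughly like this; the request sizes here are hypothetical placeholders, and openshift-user-critical is one of the recommended classes from [2]:

```yaml
# Hypothetical excerpt of a recycler pod template; only the two
# additions relevant to this bug are shown.
apiVersion: v1
kind: Pod
metadata:
  generateName: recycler-for-nfs-
  namespace: openshift-infra
spec:
  priorityClassName: openshift-user-critical  # recommended class per [2]
  containers:
  - name: recycler-container
    resources:
      requests:       # requests only; the conformance test wants requests but not limits
        cpu: 10m      # placeholder value, not from the actual fix
        memory: 50Mi  # placeholder value, not from the actual fix
```

With these fields set, the pod would no longer trip either the priority-class check or the request/limit check quoted above.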

[1]: https://github.com/openshift/cluster-kube-controller-manager-operator/blob/b00edfda6e300fd4c611c610450afeaafd0c9c44/bindata/v4.1.0/kube-controller-manager/recycler-cm.yaml
[2]: https://github.com/openshift/enhancements/blob/2fc2b6ec5502713359bdc8801829b2f081c199f7/CONVENTIONS.md#priority-classes
[3]: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/26185/pull-ci-openshift-origin-master-e2e-metal-ipi-ovn-ipv6/1397721071909605376
[4]: https://github.com/openshift/origin/blob/8b440b30e4da6b52528aaaf865370192fd3e4718/test/extended/pods/priorityclasses.go

Comment 1 Maciej Szulik 2021-05-31 15:15:02 UTC
Sending over to storage team.

Comment 3 Wei Duan 2021-06-30 04:11:38 UTC
Verified: passed with 4.9.0-0.nightly-2021-06-29-114024.

Comment 6 errata-xmlrpc 2021-10-18 17:32:11 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.