Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1972393

Summary: PDB PUT /status is 1/6th of total write load on busy cluster continuously (should be 1/100 or so)
Product: OpenShift Container Platform
Reporter: Hongkai Liu <hongkliu>
Component: kube-controller-manager
Assignee: ravig <rgudimet>
Status: CLOSED ERRATA
QA Contact: Hongkai Liu <hongkliu>
Severity: high
Docs Contact:
Priority: medium
Version: 4.8
CC: aaleman, aos-bugs, ccoleman, maszulik, mfojtik, mifiedle, wking
Target Milestone: ---
Flags: mfojtik: needinfo?
Target Release: 4.9.0
Hardware: Unspecified
OS: Unspecified
Whiteboard: LifecycleReset
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-10-18 17:34:18 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Hongkai Liu 2021-06-15 19:48:39 UTC
Description of problem:
The audit log shows 276K updates to PDBs by kube-system.
It is not clear whether this is expected.

Version-Release number of selected component (if applicable):
4.8.0-fc.9

How reproducible:


Steps to Reproduce:
1. Upload the audit log to AWS CloudWatch; the query is shown in the screenshot.

Actual results:


Expected results:


Additional info:

Comment 4 Clayton Coleman 2021-07-01 14:20:26 UTC
High severity: this is 1/6 of total request traffic and should be around 1/100 (status doesn't change that much). It could be:

1. controller itself is writing /status too often
2. a component is changing the PDB too frequently (bug in other component, but controller needs to rate limit how fast it reacts)
3. a component under the PDB is changing too frequently (bug in controller, it should rate limit how many status updates per second happen to a PDB)

Comment 5 Clayton Coleman 2021-07-01 14:23:16 UTC
`oc get pdb -A -w` shows that all the changes are to `-n ci prow-pods` (90%) and `-n ci-op-* ci-operator-created-by-ci` (10%, spread across a number of namespaces, probably not the issue). The first PDB (90%) is:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"policy/v1beta1","kind":"PodDisruptionBudget","metadata":{"annotations":{},"name":"prow-pods","namespace":"ci"},"spec":{"maxUnavailable":0,"selector":{"matchLabels":{"created-by-prow":"true"}}}}
  creationTimestamp: "2020-06-29T16:11:07Z"
  generation: 1
  name: prow-pods
  namespace: ci
  resourceVersion: "1136156217"
  uid: 878d4c49-d90b-490b-b343-7c627b00af89
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      created-by-prow: "true"
status:
  conditions:
  - lastTransitionTime: "2021-05-25T13:15:12Z"
    message: found no controller ref for pod "b66704cd-da6c-11eb-891b-0a580a831cfb"
    reason: SyncFailed
    status: "False"
    type: DisruptionAllowed
  currentHealthy: 0
  desiredHealthy: 0
  disruptionsAllowed: 0
  expectedPods: 0

Comment 6 Clayton Coleman 2021-07-01 14:24:39 UTC
Watching the PDB shows

...
status:
  conditions:
  - lastTransitionTime: "2021-05-25T13:15:12Z"
    message: found no controller ref for pod "f620c0bd-da5f-11eb-9234-0a580a8219f5"
    reason: SyncFailed
    status: "False"
    type: DisruptionAllowed
...
status:
  conditions:
  - lastTransitionTime: "2021-05-25T13:15:12Z"
    message: found no controller ref for pod "0cc18d94-da5e-11eb-a956-0a580a80161c"
    reason: SyncFailed
    status: "False"
    type: DisruptionAllowed

continuously changing, so the bug is that the controller is not deduping / rate limiting how often it reports the last failed pod controller.
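The deduping this comment asks for can be sketched as: skip the status PUT when the only difference is the volatile pod name embedded in the message. A minimal illustration in Go (the `pdbStatus` type and both helpers are hypothetical stand-ins, not the real `policy/v1` API or controller code):

```go
package main

import "fmt"

// pdbStatus is a hypothetical, simplified stand-in for the PDB status
// fields the controller writes (not the real policy/v1 API struct).
type pdbStatus struct {
	reason  string
	message string
	healthy int
}

// statusEqualIgnoringMessage reports whether two statuses are equivalent
// once the volatile pod name embedded in the message is ignored.
func statusEqualIgnoringMessage(a, b pdbStatus) bool {
	return a.reason == b.reason && a.healthy == b.healthy
}

// maybeUpdateStatus performs the write only when something meaningful
// changed; it returns whether a write happened.
func maybeUpdateStatus(current, desired pdbStatus, write func(pdbStatus)) bool {
	if statusEqualIgnoringMessage(current, desired) {
		return false // no-op: avoid a PUT that only swaps the pod name
	}
	write(desired)
	return true
}

func main() {
	cur := pdbStatus{"SyncFailed", `found no controller ref for pod "f620c0bd"`, 0}
	des := pdbStatus{"SyncFailed", `found no controller ref for pod "0cc18d94"`, 0}
	fmt.Println(maybeUpdateStatus(cur, des, func(pdbStatus) {})) // prints false: write skipped
}
```

A real controller would compare the structured status fields (conditions, counts) rather than strings, but the principle is the same: a write that only rotates the example pod name is not a meaningful change.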

Comment 7 Clayton Coleman 2021-07-01 14:27:10 UTC
So the bug here is that the selector matches multiple pods, some of those pods have no controller, and the PDB controller is hot-looping over each pod, writing a different error each time.

Instead the controller should accumulate the set of pods that are in error, summarize it, and then explicitly requeue with rate limiting (we must never hot-loop on a single PDB, even if things keep changing).
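The accumulate-and-summarize step could look like the following sketch (a hypothetical helper, assuming the controller collects the orphan pod names during its sync pass; the upstream fix referenced later is kubernetes/kubernetes#103414):

```go
package main

import (
	"fmt"
	"sort"
)

// summarizeOrphans collapses all pods lacking a controller ref into one
// stable message, instead of reporting a different single pod each sync.
func summarizeOrphans(pods []string) string {
	if len(pods) == 0 {
		return ""
	}
	sort.Strings(pods) // stable order, so the message doesn't churn between syncs
	if len(pods) > 3 {
		return fmt.Sprintf("found no controller ref for %d pods (e.g. %s)", len(pods), pods[0])
	}
	return fmt.Sprintf("found no controller ref for pods %v", pods)
}

func main() {
	// The same set of pods in any order yields the same message, so
	// repeated syncs produce identical status and no PUT is needed.
	fmt.Println(summarizeOrphans([]string{"pod-b", "pod-a"}))
}
```

Because the summary is deterministic over the set of failing pods, the status only changes when the set itself changes, not on every sync.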

Comment 8 Clayton Coleman 2021-07-01 14:30:04 UTC
Not a release blocker, but this could bring down etcd if you had lots of PDBs with broad selectors. Controllers must have a maximum rate at which they write status to any one PDB, and that has to be roughly less than 1/N seconds, where N grows as the number of PDBs grows. We want a base write load of O(1)/second and a peak of O(N+M)/second, where N is the number of user changes and M is the number of pod changes. If neither pods nor PDB specs are changing, controllers should reach a rate of O(1)/second writes in steady state (i.e. no user changes = no writes).

Comment 9 Michal Fojtik 2021-07-31 14:47:09 UTC
This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity/priority. If you have further information on the current state of the bug, please update it, otherwise this bug can be closed in about 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant. Additionally, you can add LifecycleFrozen into Keywords if you think this bug should never be marked as stale. Please consult with bug assignee before you do that.

Comment 10 ravig 2021-08-17 13:08:52 UTC
https://github.com/kubernetes/kubernetes/pull/103414 merged in 1.22, the rebase should solve the problem

Comment 11 Michal Fojtik 2021-08-17 13:53:28 UTC
The LifecycleStale keyword was removed because the bug got commented on recently.
The bug assignee was notified.

Comment 12 Maciej Szulik 2021-08-19 11:55:17 UTC
This merged in https://github.com/openshift/kubernetes/pull/862

Comment 15 Mike Fiedler 2021-10-07 18:45:53 UTC
Moving to VERIFIED.  Please move to ASSIGNED if this is still an issue.

Comment 17 errata-xmlrpc 2021-10-18 17:34:18 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3759