Bug 1989687 - daemonset openshift-kube-proxy/openshift-kube-proxy to have maxUnavailable 10% or 33%
Summary: daemonset openshift-kube-proxy/openshift-kube-proxy to have maxUnavailable 10% or 33%
Keywords:
Status: CLOSED DUPLICATE of bug 2029590
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.8
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Ben Pickard
QA Contact: zhaozhanqi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-08-03 17:56 UTC by Richard Theis
Modified: 2021-12-06 20:27 UTC
CC: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-12-06 20:27:38 UTC
Target Upstream Version:
Embargoed:



Description Richard Theis 2021-08-03 17:56:25 UTC
Description of problem:

OCP conformance test "[sig-arch] Managed cluster should only include cluster daemonsets that have maxUnavailable update of 10 or 33 percent [Suite:openshift/conformance/parallel]" is failing on Red Hat OpenShift on IBM Cloud (ROKS).

Version-Release number of selected component (if applicable):
4.8.2

How reproducible:
Always

Steps to Reproduce:
1. Run OCP conformance test on a ROKS version 4.8.2 cluster.

Actual results:

fail [github.com/onsi/ginkgo.0-origin.0+incompatible/internal/leafnodes/runner.go:113]: Aug  2 16:49:59.814: Daemonsets found that do not meet platform requirements for update strategy:
  expected daemonset openshift-kube-proxy/openshift-kube-proxy to have maxUnavailable 10% or 33% (see comment) instead of 1
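
Some context on the failure: kube-proxy is a per-node daemonset, so a fixed maxUnavailable of 1 forces a node-by-node rollout whose duration grows linearly with cluster size. The conformance test therefore requires a percentage, so rollout parallelism scales with node count. A minimal sketch of an update strategy the test would accept (33% would also pass, per the test name; the actual manifest is rendered by the cluster-network-operator):

updateStrategy:
  rollingUpdate:
    maxSurge: 0
    maxUnavailable: 10%
  type: RollingUpdate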


Expected results:

Pass

Additional info:

ROKS version 4.8.2 is in development at the moment and not publicly available.

Comment 1 Richard Theis 2021-08-03 17:58:28 UTC
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "2"
    kubernetes.io/description: |
      This daemonset is the kubernetes service proxy (kube-proxy).
    release.openshift.io/version: 4.8.2
  creationTimestamp: "2021-08-03T12:32:39Z"
  generation: 2
  name: openshift-kube-proxy
  namespace: openshift-kube-proxy
  ownerReferences:
  - apiVersion: operator.openshift.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: Network
    name: cluster
    uid: ef68884d-6ff3-45c7-a23e-d1e791c30882
  resourceVersion: "91160"
  uid: 74949715-7a00-49e7-beda-8e52e3bb1a57
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: kube-proxy
  template:
    metadata:
      annotations:
        target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'
      creationTimestamp: null
      labels:
        app: kube-proxy
        component: network
        openshift.io/component: network
        type: infra
    spec:
      containers:

...

  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 3
  desiredNumberScheduled: 3
  numberAvailable: 3
  numberMisscheduled: 0
  numberReady: 3
  observedGeneration: 2
  updatedNumberScheduled: 3
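
For anyone verifying the change locally, a strategic-merge patch along these lines (a sketch; the file name patch.yaml and the exact oc invocation are illustrative, not taken from this bug) switches the daemonset to a compliant strategy. Note the daemonset is owned by the cluster Network operator (see ownerReferences above), so the operator will revert a manual patch on its next reconcile; the real fix has to land in the operator's rendered manifest.

# patch.yaml (hypothetical), applied with something like:
#   oc -n openshift-kube-proxy patch daemonset openshift-kube-proxy --patch-file patch.yaml
spec:
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 10%
    type: RollingUpdate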

Comment 2 Richard Theis 2021-09-13 21:16:51 UTC
Hi, are there any updates?

Comment 3 Ben Pickard 2021-09-23 20:03:00 UTC
This bug got lost in the stack. The issue pops up frequently, and I should be able to take care of it with a backport. @rtheis.com

Comment 4 Richard Theis 2021-09-23 21:26:43 UTC
Thank you.

Comment 7 Ben Pickard 2021-12-06 20:27:38 UTC

*** This bug has been marked as a duplicate of bug 2029590 ***

