Bug 1867608 - ds/machine-config-daemon takes 100+ minutes to rollout on 250 node cluster
Summary: ds/machine-config-daemon takes 100+ minutes to rollout on 250 node cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Machine Config Operator
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.7.0
Assignee: Antonio Murdaca
QA Contact: Michael Nguyen
URL:
Whiteboard:
Depends On:
Blocks: 1899535
 
Reported: 2020-08-10 13:13 UTC by Scott Dodson
Modified: 2023-12-15 18:47 UTC (History)
CC List: 4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-24 15:15:27 UTC
Target Upstream Version:
Embargoed:




Links
Github openshift machine-config-operator pull 1992 (closed): Bug 1867608: ds/machine-config-daemon: Set maxUnavailable 10% (last updated 2021-02-07 13:44:30 UTC)
Red Hat Product Errata RHSA-2020:5633 (last updated 2021-02-24 15:15:56 UTC)

Description Scott Dodson 2020-08-10 13:13:31 UTC
Description of problem:
Because ds/machine-config-daemon defaults to a rollingUpdate maxUnavailable of 1, the rollout is entirely serialized and therefore very slow on large clusters. We can speed up the rollout of daemonsets that don't immediately affect availability by allowing maxUnavailable to scale with cluster size.

A quick test on a 250 node cluster shows that the current behavior takes around 100 minutes, whereas with maxUnavailable 10% it takes under 10 minutes.
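
For reference, the proposed change amounts to switching the DaemonSet's rolling update strategy from the default maxUnavailable of 1 to a percentage; a minimal sketch of the relevant fields only, not the full manifest:

updateStrategy:
  rollingUpdate:
    maxUnavailable: 10%
  type: RollingUpdate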

Version-Release number of selected component (if applicable):
4.4

How reproducible:
100%

Steps to Reproduce:
1. Install a cluster with 20 or more hosts
2. Perform an upgrade
3. Observe that only one pod is unavailable at a time and note how long the upgrade takes (illustrative commands for watching the rollout follow this list).
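
One way to watch the daemonset rollout during the upgrade (illustrative commands, not necessarily the ones used for the measurements above):

$ oc -n openshift-machine-config-operator rollout status ds/machine-config-daemon
$ oc -n openshift-machine-config-operator get pods -l k8s-app=machine-config-daemon -w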

Actual results:
1 pod unavailable at a time, slow rollout

Expected results:
At most 10% of pods unavailable at a time, faster / more parallel rollout

Additional info:

Comment 4 Michael Nguyen 2020-10-27 23:26:26 UTC
Verified on 4.7.0-0.nightly-2020-10-26-124513.


$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2020-10-26-124513   True        False         126m    Cluster version is 4.7.0-0.nightly-2020-10-26-124513

$ oc -n openshift-machine-config-operator get ds/machine-config-daemon -o yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "1"
  creationTimestamp: "2020-10-27T18:13:47Z"
  generation: 1
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:deprecated.daemonset.template.generation: {}
      f:spec:
        f:revisionHistoryLimit: {}
        f:selector:
          f:matchLabels:
            .: {}
            f:k8s-app: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:k8s-app: {}
            f:name: {}
          f:spec:
            f:containers:
              k:{"name":"machine-config-daemon"}:
                .: {}
                f:args: {}
                f:command: {}
                f:env:
                  .: {}
                  k:{"name":"NODE_NAME"}:
                    .: {}
                    f:name: {}
                    f:valueFrom:
                      .: {}
                      f:fieldRef:
                        .: {}
                        f:apiVersion: {}
                        f:fieldPath: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:resources:
                  .: {}
                  f:requests:
                    .: {}
                    f:cpu: {}
                    f:memory: {}
                f:securityContext:
                  .: {}
                  f:privileged: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
                f:volumeMounts:
                  .: {}
                  k:{"mountPath":"/rootfs"}:
                    .: {}
                    f:mountPath: {}
                    f:name: {}
              k:{"name":"oauth-proxy"}:
                .: {}
                f:args: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:ports:
                  .: {}
                  k:{"containerPort":9001,"protocol":"TCP"}:
                    .: {}
                    f:containerPort: {}
                    f:hostPort: {}
                    f:name: {}
                    f:protocol: {}
                f:resources:
                  .: {}
                  f:requests:
                    .: {}
                    f:cpu: {}
                    f:memory: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
                f:volumeMounts:
                  .: {}
                  k:{"mountPath":"/etc/tls/cookie-secret"}:
                    .: {}
                    f:mountPath: {}
                    f:name: {}
                  k:{"mountPath":"/etc/tls/private"}:
                    .: {}
                    f:mountPath: {}
                    f:name: {}
            f:dnsPolicy: {}
            f:hostNetwork: {}
            f:hostPID: {}
            f:nodeSelector:
              .: {}
              f:kubernetes.io/os: {}
            f:priorityClassName: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:serviceAccount: {}
            f:serviceAccountName: {}
            f:terminationGracePeriodSeconds: {}
            f:tolerations: {}
            f:volumes:
              .: {}
              k:{"name":"cookie-secret"}:
                .: {}
                f:name: {}
                f:secret:
                  .: {}
                  f:defaultMode: {}
                  f:secretName: {}
              k:{"name":"proxy-tls"}:
                .: {}
                f:name: {}
                f:secret:
                  .: {}
                  f:defaultMode: {}
                  f:secretName: {}
              k:{"name":"rootfs"}:
                .: {}
                f:hostPath:
                  .: {}
                  f:path: {}
                  f:type: {}
                f:name: {}
        f:updateStrategy:
          f:rollingUpdate:
            .: {}
            f:maxUnavailable: {}
          f:type: {}
    manager: machine-config-operator
    operation: Update
    time: "2020-10-27T18:13:47Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:currentNumberScheduled: {}
        f:desiredNumberScheduled: {}
        f:numberAvailable: {}
        f:numberReady: {}
        f:observedGeneration: {}
        f:updatedNumberScheduled: {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-10-27T18:22:53Z"
  name: machine-config-daemon
  namespace: openshift-machine-config-operator
  resourceVersion: "20769"
  selfLink: /apis/apps/v1/namespaces/openshift-machine-config-operator/daemonsets/machine-config-daemon
  uid: a2793d6b-dd9e-4211-a678-ed4aa1ba7820
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: machine-config-daemon
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: machine-config-daemon
      name: machine-config-daemon
    spec:
      containers:
      - args:
        - start
        command:
        - /usr/bin/machine-config-daemon
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8eace99e813cad35dea2c6663b04b377e3a0c8f3ec2c759d1a268b26937bff47
        imagePullPolicy: IfNotPresent
        name: machine-config-daemon
        resources:
          requests:
            cpu: 20m
            memory: 50Mi
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError
        volumeMounts:
        - mountPath: /rootfs
          name: rootfs
      - args:
        - --https-address=:9001
        - --provider=openshift
        - --openshift-service-account=machine-config-daemon
        - --upstream=http://127.0.0.1:8797
        - --tls-cert=/etc/tls/private/tls.crt
        - --tls-key=/etc/tls/private/tls.key
        - --cookie-secret-file=/etc/tls/cookie-secret/cookie-secret
        - '--openshift-sar={"resource": "namespaces", "verb": "get"}'
        - '--openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}'
        image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16a13d63927ac945990c930ef63789c2f5661962dfcfc2094de0774bc06a6435
        imagePullPolicy: IfNotPresent
        name: oauth-proxy
        ports:
        - containerPort: 9001
          hostPort: 9001
          name: metrics
          protocol: TCP
        resources:
          requests:
            cpu: 20m
            memory: 50Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/tls/private
          name: proxy-tls
        - mountPath: /etc/tls/cookie-secret
          name: cookie-secret
      dnsPolicy: ClusterFirst
      hostNetwork: true
      hostPID: true
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-node-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: machine-config-daemon
      serviceAccountName: machine-config-daemon
      terminationGracePeriodSeconds: 600
      tolerations:
      - operator: Exists
      volumes:
      - hostPath:
          path: /
          type: ""
        name: rootfs
      - name: proxy-tls
        secret:
          defaultMode: 420
          secretName: proxy-tls
      - name: cookie-secret
        secret:
          defaultMode: 420
          secretName: cookie-secret
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 10%
    type: RollingUpdate
status:
  currentNumberScheduled: 6
  desiredNumberScheduled: 6
  numberAvailable: 6
  numberMisscheduled: 0
  numberReady: 6
  observedGeneration: 1
  updatedNumberScheduled: 6
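
The updateStrategy above shows maxUnavailable: 10% as expected. As a quicker check than reading the full manifest, the field could also be queried directly (illustrative; should print the same value as in the manifest above):

$ oc -n openshift-machine-config-operator get ds/machine-config-daemon -o jsonpath='{.spec.updateStrategy.rollingUpdate.maxUnavailable}'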

Comment 6 Yu Qi Zhang 2021-01-06 16:57:56 UTC
Should not need a doc update, as this only modifies how fast the updated pods themselves (not the pools) are rolled out.
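
For context, node (pool) update pacing is governed separately by each MachineConfigPool's own maxUnavailable, which this change leaves alone; a rough way to inspect that, assuming the usual worker pool name:

$ oc get mcp
$ oc get mcp worker -o jsonpath='{.spec.maxUnavailable}'

(the jsonpath query prints nothing when the pool relies on the default of 1)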

Comment 8 errata-xmlrpc 2021-02-24 15:15:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

