Bug 1996070
| Summary: | openshift-cluster-csi-drivers DaemonSets should use maxUnavailable: 10% | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | OpenShift BugZilla Robot <openshift-bugzilla-robot> |
| Component: | Storage | Assignee: | Jan Safranek <jsafrane> |
| Storage sub component: | Operators | QA Contact: | Wei Duan <wduan> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | high | | |
| Priority: | unspecified | CC: | aos-bugs, bbennett, jsafrane, zzhao |
| Version: | 4.8 | | |
| Target Milestone: | --- | | |
| Target Release: | 4.7.z | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-09-15 09:16:49 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1933184 | | |
| Bug Blocks: | | | |
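For context, the change the summary asks for lives in the `updateStrategy` stanza of each CSI driver node DaemonSet. An illustrative fragment (not copied from the operators' manifests; field values taken from the verification output in this bug):

```yaml
# Rolling-update settings requested by this bug for the
# openshift-cluster-csi-drivers node DaemonSets.
# A percentage maxUnavailable scales with cluster size instead of
# updating one node at a time.
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 10%
```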
Also checked the manila/cinder/gcp/oVirt CSI drivers: `maxUnavailable` is set to `10%` there as well. Verified on nightly build 4.9.0-0.nightly-2021-09-05-204238, so changing the status to Verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.7.30 bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3422
Checked on the AWS EBS CSI driver:

1. `maxUnavailable` is set to `10%`:

```shell
$ oc get daemonset.apps/aws-ebs-csi-driver-node -o json | jq .spec.updateStrategy
{
  "rollingUpdate": {
    "maxSurge": 0,
    "maxUnavailable": "10%"
  },
  "type": "RollingUpdate"
}
```

2. Tested on a cluster with 3 masters + 10 workers; 2 pods created by the DaemonSet are updated at the same time:

```shell
$ oc get pod -w
NAME                                             READY   STATUS    RESTARTS       AGE
aws-ebs-csi-driver-controller-755d96d484-45qq2   11/11   Running   0              3h13m
aws-ebs-csi-driver-controller-755d96d484-rsjd8   11/11   Running   4 (3h8m ago)   3h13m
aws-ebs-csi-driver-node-2vxhw                    3/3     Running   0              3h13m
aws-ebs-csi-driver-node-4rlqr                    3/3     Running   0              3h5m
aws-ebs-csi-driver-node-5pqlp                    3/3     Running   0              3h5m
aws-ebs-csi-driver-node-6bqbq                    3/3     Running   0              179m
aws-ebs-csi-driver-node-8cd2g                    3/3     Running   0              3h5m
aws-ebs-csi-driver-node-9jlq2                    3/3     Running   0              179m
aws-ebs-csi-driver-node-9sr8w                    3/3     Running   0              3h5m
aws-ebs-csi-driver-node-gn55j                    3/3     Running   0              3h5m
aws-ebs-csi-driver-node-shfxc                    3/3     Running   0              179m
aws-ebs-csi-driver-node-tbxst                    3/3     Running   0              179m
aws-ebs-csi-driver-node-wvc4w                    3/3     Running   0              3h13m
aws-ebs-csi-driver-node-z75jc                    3/3     Running   0              3h5m
aws-ebs-csi-driver-node-z7vh8                    3/3     Running   0              3h13m
aws-ebs-csi-driver-operator-7dc8d6f89d-hqrg6     1/1     Running   3 (3h5m ago)   3h13m
aws-ebs-csi-driver-node-5pqlp                    3/3     Terminating   0          3h6m
aws-ebs-csi-driver-node-4rlqr                    3/3     Terminating   0          3h6m
```
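Seeing 2 of the 13 node pods terminate at once is consistent with how, to my understanding, the DaemonSet controller resolves a percentage `maxUnavailable`: the percentage is scaled by the number of scheduled daemon pods and rounded up. A minimal sketch of that arithmetic (my own illustration, not code from the controller or the operator):

```python
import math

def max_unavailable(percent: int, desired_pods: int) -> int:
    """Resolve a percentage maxUnavailable for a DaemonSet:
    scale the percentage by the desired pod count, rounding up
    (assumed behavior, matching the observation in this bug)."""
    return math.ceil(desired_pods * percent / 100)

# 3 masters + 10 workers => 13 daemon pods; 10% rounds up to 2,
# matching the two Terminating pods in the watch output above.
print(max_unavailable(10, 13))  # -> 2
```

With the old default of `maxUnavailable: 1`, the same rollout would have updated one node at a time; the 10% setting lets updates scale with cluster size.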