Bug 1884138
| Summary: | Update webhook always fails updating the spec.workloads field | ||
|---|---|---|---|
| Product: | Container Native Virtualization (CNV) | Reporter: | Nahshon Unna-Tsameret <nunnatsa> |
| Component: | Installation | Assignee: | Simone Tiraboschi <stirabos> |
| Status: | CLOSED ERRATA | QA Contact: | Kedar Bidarkar <kbidarka> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | ||
| Version: | 2.5.0 | CC: | cnv-qe-bugs, kbidarka, ncredi, stirabos |
| Target Milestone: | --- | ||
| Target Release: | 2.5.0 | ||
| Hardware: | All | ||
| OS: | All | ||
| Whiteboard: | |||
| Fixed In Version: | hco-bundle-registry-container-v2.5.0-280 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-11-17 13:24:55 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Verified this by following the steps below on a fresh setup.
Taint a node.
[kbidarka@localhost cnv-tests]$ oc adm taint node kbid25ve-m2n85-worker-0-95qhh worker=load-balancer:NoSchedule
node/kbid25ve-m2n85-worker-0-95qhh tainted
Ensure the taint got applied on that node.
[kbidarka@localhost cnv-tests]$ oc get nodes kbid25ve-m2n85-worker-0-95qhh -o yaml | grep -A 3 taints
--
taints:
- effect: NoSchedule
key: worker
value: load-balancer
Update the workloads section under the hyperconverged CR.
[kbidarka@localhost cnv-tests]$ oc edit hyperconverged -n openshift-cnv
hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged edited
Ensure the hyperconverged CR got updated successfully.
[kbidarka@localhost cnv-tests]$ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o yaml | grep -A 5 workloads
--
workloads:
nodePlacement:
tolerations:
- effect: NoSchedule
key: worker
operator: Exists
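The `oc edit` in the previous step corresponds to a spec fragment along these lines (a sketch, assuming the default kubevirt-hyperconverged CR created at install time; the toleration mirrors the taint applied in the first step):

```yaml
# Sketch of the HyperConverged spec fragment applied via `oc edit`.
spec:
  workloads:
    nodePlacement:
      tolerations:
      - key: worker       # matches the taint key set on the node
        operator: Exists  # tolerate any value for this key
        effect: NoSchedule
```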
Ensure the change got propagated to the virt-handler daemonset, as an example of a workloads component.
[kbidarka@localhost cnv-tests]$ oc get daemonset virt-handler -n openshift-cnv -o yaml | grep -A 6 "tolerations:"
--
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- effect: NoSchedule
key: worker
operator: Exists
Check that the pods got re-created.
[kbidarka@localhost cnv-tests]$ oc get pods -n openshift-cnv | grep -i virt-handler
virt-handler-6v6pl 1/1 Running 0 7m48s
virt-handler-r75wl 1/1 Running 0 6m55s
virt-handler-w6cq7 1/1 Running 0 7m17s
Ensure the virt-handler pod running on the tainted node also got created, with the tolerations.
[kbidarka@localhost cnv-tests]$ oc get pods virt-handler-w6cq7 -n openshift-cnv -o yaml | grep nodeName
fieldPath: spec.nodeName
nodeName: kbid25ve-m2n85-worker-0-95qhh
The pod got re-created with the tolerations.
[kbidarka@localhost cnv-tests]$ oc get pods virt-handler-w6cq7 -n openshift-cnv -o yaml | grep -A 6 "tolerations:"
--
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- effect: NoSchedule
key: worker
operator: Exists
- effect: NoExecute
Was able to successfully update the HCO CR with the toleration under workloads, and the change gets propagated successfully.
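The interactive `oc edit` above can also be done non-interactively with a merge patch; a minimal sketch, assuming the same CR name and namespace as in the steps above (the `oc` call itself is left commented out, since it needs a live cluster):

```shell
#!/bin/sh
# Build the merge-patch payload adding the same toleration under
# spec.workloads.nodePlacement as in the verification steps above.
PATCH='{"spec":{"workloads":{"nodePlacement":{"tolerations":[{"key":"worker","operator":"Exists","effect":"NoSchedule"}]}}}}'
echo "$PATCH"
# Apply it against a live cluster (same CR name/namespace as above):
# oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
#   --type=merge -p "$PATCH"
```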
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Virtualization 2.5.0 Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHEA-2020:5127
Description of problem:
When trying to update the `spec.workloads` field in the HyperConverged CR, the webhook always rejects the request.

Version-Release number of selected component (if applicable):
2.5

How reproducible:
Create a HCO CR, then patch the workloads object, e.g.

    ...
    spec:
      ...
      workloads:
        nodePlacement:
          nodeSelector:
            nodeType: workloads
    ...

Actual results:
The update is rejected by the webhook, even with no VM or DV present.

Expected results:
The CR should be updated with the new values.

Additional info: