Bug 1955129
| Summary: | Failed to bindmount hotplug-disk for hostpath-provisioner | | |
|---|---|---|---|
| Product: | Container Native Virtualization (CNV) | Reporter: | Yan Du <yadu> |
| Component: | Storage | Assignee: | Alexander Wels <awels> |
| Status: | CLOSED ERRATA | QA Contact: | Yan Du <yadu> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.8.0 | CC: | alitke, awels, cnv-qe-bugs, mrashish |
| Target Milestone: | --- | | |
| Target Release: | 4.9.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | CNV v4.9.0-180, virt-handler v4.9.0-41 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-11-02 15:57:28 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Unable to find a solution, and since this is not a blocker, I am pushing this out to 4.8.1. For the single-pod-for-all-hotplug-disks card, I have rewritten some of the logic needed to mount hostpath disks, and I have successfully attached volumes in an RHCOS-based cluster with 3 masters and 3 workers. This bug will be fixed when that work is merged, but we should probably push it out to 4.9.
Tested on CNV-v4.9.0-125: the issue can still be reproduced.
volumeStatus:
- hotplugVolume:
    attachPodName: hp-volume-xczqb
    attachPodUID: b782decf-0a6f-4e6b-a1ae-5d195a89598d
  message: Created hotplug attachment pod hp-volume-xczqb, for volume blank-dv
  name: blank-dv
  phase: AttachedToNode
  reason: SuccessfulCreate
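For anyone reproducing this, the phase above can also be polled directly from the VMI status; a minimal sketch, assuming the VMI name from the description and standard kubectl jsonpath filtering:

$ oc get vmi fedora-1619697674-6212 -o jsonpath='{.status.volumeStatus[?(@.name=="blank-dv")].phase}'
AttachedToNode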
The reproducer fails because the path the hostpath provisioner is using is on a different device. findmnt reports the path relative to that device, not to the root partition, and that is what causes the failure. I have solid reproduction steps in https://github.com/kubevirt/kubevirt/issues/6303 and will work on a fix.
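To make that failure mode concrete, here is a hypothetical findmnt session (device names and directory layout invented for illustration, not taken from the affected cluster). When the hostpath directory is bind-mounted from a second disk, findmnt reports the source path relative to that disk's filesystem root:

$ findmnt --target /var/hpvolumes --output TARGET,SOURCE
TARGET          SOURCE
/var/hpvolumes  /dev/sdb1[/hpvolumes]

Resolving the bracketed /hpvolumes against the root partition, where no such directory exists, produces exactly the "no such file or directory" error seen in the virt-handler log in the description.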
Tested on CNV-v4.9.0-194: the issue has been fixed.

volumeStatus:
- hotplugVolume:
    attachPodName: hp-volume-6tjgq
    attachPodUID: 9edf0436-ca59-4206-afd8-af140fac691a
  message: Successfully attach hotplugged volume blank-dv to VM
  name: blank-dv
  phase: Ready
  reason: VolumeReady
  target: sda
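As a quick follow-up check, the target device can also be confirmed from inside the guest; a minimal sketch, assuming console access with the cloud-init credentials from the description (output illustrative):

$ virtctl console fedora-1619697674-6212
[fedora@vm ~]$ lsblk /dev/sda
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda    8:0    0   1G  0 disk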
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Virtualization 4.9.0 Images security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:4104
Description of problem:
Failed to bindmount hotplug-disk for hostpath-provisioner

Version-Release number of selected component (if applicable):
CNV 4.8, virtctl-4.8.0-200.el7.x86_64.rpm

How reproducible:
Always

Steps to Reproduce:

1. Create a VM

---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: fedora
  name: fedora-1619697674-6212
spec:
  template:
    metadata:
      labels:
        kubevirt.io/vm: fedora-1619697674-6212
        kubevirt.io/domain: fedora-1619697674-6212
    spec:
      domain:
        cpu:
          cores: 1
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - masquerade: {}
            name: default
          rng: {}
        machine:
          type: ''
        resources:
          requests:
            memory: 1024Mi
      networks:
      - name: default
        pod: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - containerDisk:
          image: quay.io/openshift-cnv/qe-cnv-tests-fedora:33
        name: containerdisk
      - name: cloudinitdisk
        cloudInitNoCloud:
          userData: |-
            #cloud-config
            password: fedora
            chpasswd: { expire: False }
  running: true

2. Create a blank dv

---
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: blank-dv
spec:
  source:
    blank: {}
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: hostpath-provisioner
    volumeMode: Filesystem

3. Hotplug the volume to the VMI

# virtctl addvolume fedora-1619697674-6212 --volume-name=blank-dv

Actual results:

$ oc get vmi -o yaml
--------------8<---------------
volumeStatus:
- hotplugVolume:
    attachPodName: hp-volume-q8chb
    attachPodUID: ce44311f-a4a2-4c49-90b7-66a83699633e
  message: Created hotplug attachment pod hp-volume-q8chb, for volume blank-dv
  name: blank-dv
  phase: AttachedToNode
  reason: SuccessfulCreate
  target: ""
--------------8<---------------

Expected results:
Hotplug volume works well for hostpath-provisioner.

Additional info:
Log from the virt-handler pod:

{"component":"virt-handler","kind":"","level":"error","msg":"Synchronizing the VirtualMachineInstance failed.","name":"fedora-1619697674-6212","namespace":"test-hhhh","pos":"vm.go:1549","reason":"failed to bindmount hotplug-disk blank-dv: Error: no such file or directory\nUsage:\n virt-chroot mount [flags]\n\nFlags:\n -h, --help help for mount\n -o, --options string comma separated list of mount options\n -t, --type string fstype\n\nGlobal Flags:\n --cpu uint32 cpu time in seconds for the process\n --memory uint32 memory in megabyte for the process\n --mount string mount namespace to use\n --user string switch to this targetUser to e.g. drop privileges\n\nno such file or directory\n : exit status 1","timestamp":"2021-04-29T13:03:26.391004Z","uid":"1c19f8b7-0f18-48ff-809b-bb2487cc3fc2"}

{"component":"virt-handler","level":"info","msg":"re-enqueuing VirtualMachineInstance test-hhhh/fedora-1619697674-6212","pos":"vm.go:1206","reason":"failed to bindmount hotplug-disk blank-dv: Error: no such file or directory\nUsage:\n virt-chroot mount [flags]\n\nFlags:\n -h, --help help for mount\n -o, --options string comma separated list of mount options\n -t, --type string fstype\n\nGlobal Flags:\n --cpu uint32 cpu time in seconds for the process\n --memory uint32 memory in megabyte for the process\n --mount string mount namespace to use\n --user string switch to this targetUser to e.g. drop privileges\n\nno such file or directory\n : exit status 1","timestamp":"2021-04-29T13:03:26.400897Z"}
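For context on the log above: the usage text comes from virt-chroot, which virt-handler uses to run the bind mount in the node's mount namespace. A minimal sketch of the kind of invocation that fails here, with entirely hypothetical source and target paths (only flags listed in the usage text are used):

# virt-chroot --mount /proc/1/ns/mnt mount -o bind \
    /var/hpvolumes/pvc-.../disk.img \
    /var/lib/kubelet/pods/<pod-uid>/.../hotplug-disks/blank-dv.img

Because the source path is derived from findmnt output that is relative to the hostpath device rather than the root partition, the path does not exist in the mount namespace where the command runs, and the mount fails with "no such file or directory".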