Description of problem:
Failed to start a VM after adding an existing PVC to the stopped VM.

Version-Release number of selected component (if applicable):
CNV 4.10.0

How reproducible:
Always

Steps to Reproduce:
1. Log in as a non-privileged user in the UI
2. Create a PVC in Storage -> PersistentVolumeClaims
3. Create a VM in Virtualization -> Virtual Machines
4. Stop the VM
5. Add a disk to the VM in the VM page -> Disks -> Add disk -> Use an existing disk -> choose the PVC created in step 2
6. Click the Add button
7. Start the VM

Actual results:
The VM fails to start.

Expected results:
The VM starts.

Additional info:
$ oc logs virt-launcher-fedora-vast-gorilla-dhxhk -n d1
{"component":"virt-launcher","level":"info","msg":"Collected all requested hook sidecar sockets","pos":"manager.go:76","timestamp":"2022-03-22T14:18:27.595941Z"}
{"component":"virt-launcher","level":"info","msg":"Sorted all collected sidecar sockets per hook point based on their priority and name: map[]","pos":"manager.go:79","timestamp":"2022-03-22T14:18:27.596021Z"}
{"component":"virt-launcher","level":"info","msg":"Connecting to libvirt daemon: qemu:///system","pos":"libvirt.go:495","timestamp":"2022-03-22T14:18:27.602417Z"}
{"component":"virt-launcher","level":"info","msg":"Connecting to libvirt daemon failed: virError(Code=38, Domain=7, Message='Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory')","pos":"libvirt.go:503","timestamp":"2022-03-22T14:18:27.603453Z"}
{"component":"virt-launcher","level":"info","msg":"libvirt version: 7.0.0, package: 14.6.module+el8.4.0+13801+378af433 (Red Hat, Inc. \u003chttp://bugzilla.redhat.com/bugzilla\u003e, 2022-01-10-10:42:14, )","subcomponent":"libvirt","thread":"43","timestamp":"2022-03-22T14:18:27.769000Z"}
{"component":"virt-launcher","level":"info","msg":"hostname: fedora-vast-gorilla","subcomponent":"libvirt","thread":"43","timestamp":"2022-03-22T14:18:27.769000Z"}
{"component":"virt-launcher","level":"error","msg":"internal error: Child process (dmidecode -q -t 0,1,2,3,4,11,17) unexpected exit status 1: /dev/mem: No such file or directory","pos":"virCommandWait:2771","subcomponent":"libvirt","thread":"43","timestamp":"2022-03-22T14:18:27.769000Z"}
{"component":"virt-launcher","level":"info","msg":"Connected to libvirt daemon","pos":"libvirt.go:511","timestamp":"2022-03-22T14:18:28.105409Z"}
{"component":"virt-launcher","level":"info","msg":"Registered libvirt event notify callback","pos":"client.go:512","timestamp":"2022-03-22T14:18:28.130855Z"}
{"component":"virt-launcher","level":"info","msg":"Marked as ready","pos":"virt-launcher.go:80","timestamp":"2022-03-22T14:18:28.131478Z"}
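For reference, the failing VMI's status and the PVC added in step 5 can be inspected with the commands below (a minimal sketch; the VMI name fedora-vast-gorilla and namespace d1 are inferred from the launcher pod name above and may differ in other reproductions):

# Show the VMI and its status conditions (name inferred from the launcher pod name)
$ oc get vmi fedora-vast-gorilla -n d1 -o yaml
# Show the PVCs in the namespace and the size each one requested
$ oc get pvc -n d1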
Do you happen to have the VM yaml that is generated?
  message: >-
    preparing host-disks failed: unable to create
    /var/run/kubevirt-private/vmi-disks/disk-0/disk.img, not enough space,
    demanded size 117037858816 B is bigger than available space 78993514496 B,
    also after taking 10 % toleration into account
  reason: Synchronizing with the Domain failed.
  status: 'False'
  type: Synchronized

Please refer to the attachment for the whole VM yaml.
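To compare what the disk preparation demands against what the PVC actually provides, something like the following can be used (a sketch, assuming namespace d1 and the VMI name inferred above; the custom-column names are arbitrary):

# Requested size vs. reported capacity for each PVC in the namespace
$ oc get pvc -n d1 -o custom-columns=NAME:.metadata.name,REQUESTED:.spec.resources.requests.storage,CAPACITY:.status.capacity.storage
# Message of the Synchronized condition quoted above
$ oc get vmi fedora-vast-gorilla -n d1 -o jsonpath='{.status.conditions[?(@.type=="Synchronized")].message}'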
Created attachment 1869602 [details] vm yaml
Alexander, do you have a PR for this bug? If so, could you please attach it?
There is no PR for this bug. It appears that you ran out of actual space on the storage (it looks like it was hpp-csi), in which case this did exactly what it was supposed to do.
Hi Alexander, this may be related to bug 2066782, and only non-privileged users hit this issue.
So if you immediately do the same thing as a privileged user, it works? This might be related to the auto-resize feature in KubeVirt, as the error appears to come from that code: it checks whether the actual available space is large enough for the virtual disk and concludes that it is not, which is what the error message is saying. Can you confirm you have enough space on the actual storage?
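One way to confirm how much space is actually available to the hostpath provisioner is to check the backing filesystem on the node (a sketch; <node-name> is a placeholder and /var/hpvolumes is only the typical hostpath-provisioner path, adjust if yours is configured differently):

# Free space on the hostpath provisioner's backing directory on the node
$ oc debug node/<node-name> -- chroot /host df -h /var/hpvolumes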
I checked the cluster again and the storage is indeed out of space, so I am closing the bug since this is expected behavior.