Created attachment 1832136 [details]
VM yaml

Description of problem:
An offline VM created with the HPP storage class stays in 'Provisioning' status, because the backend DataVolume is in 'WaitForFirstConsumer' status. "Provisioning" implies something is still progressing in the backend, but nothing is actually running there. Should we change the status? E.g. change the status to "Stopped" and the condition (shown in the UI) to "Pending"?

$ oc get vm
NAME                AGE   STATUS         READY
rhel7-upper-leech   24h   Provisioning   False

In the VM YAML: status -> printableStatus -> "Provisioning"

Version-Release number of selected component (if applicable):
CNV 4.9.0

How reproducible:
100%

Steps to Reproduce:
1. Create a VM with HPP from the UI (don't check `Start this virtual machine after creation` on the review page).
2. Check the VM status either in the UI or on the command line.

Actual results:
The VM status is always 'Provisioning'.

Expected results:
The VM status is "Stopped".

Additional info:
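For reference, a minimal sketch of the kind of VM manifest that hits this from the command line (this is not the contents of the actual attachment; the VM name, storage class name, and image URL below are placeholders):

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rhel7-wffc-demo                # placeholder name
spec:
  running: false                       # offline VM: never started
  dataVolumeTemplates:
  - metadata:
      name: rhel7-wffc-demo-rootdisk
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
        storageClassName: hostpath-provisioner      # assumed HPP (WFFC) storage class name
      source:
        registry:
          url: docker://registry.example.com/rhel7  # placeholder image
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
      - name: rootdisk
        dataVolume:
          name: rhel7-wffc-demo-rootdisk

Because the storage class binds in WaitForFirstConsumer mode and the VM is never started, the DV never leaves WaitForFirstConsumer and the VM's printableStatus stays at "Provisioning".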
Guohua, can you confirm this only applies to WFFC and HPP? Can you double check that this doesn't affect other storage classes?
By reviewing the related code, I can confirm this behavior will apply to any WFFC-enabled storage class, and not just with HPP. I agree it makes more sense to set the status to "Stopped" in that case. I'll work on a fix soon.
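For context, "WFFC-enabled" here means the storage class uses volumeBindingMode: WaitForFirstConsumer, so the PVC (and therefore the DV import) is not bound until a consumer pod is scheduled. A typical HPP storage class looks roughly like this (the provisioner string is an assumption and may differ between HPP versions):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-provisioner
provisioner: kubevirt.io/hostpath-provisioner   # assumed; the CSI variant uses a different string
volumeBindingMode: WaitForFirstConsumer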
(In reply to sgott from comment #1)
> Guohua, can you confirm this only applies to WFFC and HPP? Can you double
> check that this doesn't affect other storage classes?

It applies to other storage classes (OCS and NFS) as well. The VM status is "Provisioning" before the image is imported, and it turns into "Stopped" after the image import finishes. With HPP, however, the import never happens if the VM is never started, so the VM stays in "Provisioning" status.
Addressed in https://github.com/kubevirt/kubevirt/pull/6605
This was eventually addressed in https://github.com/kubevirt/kubevirt/pull/6713 (instead of in #6605 which was closed). @Guohua Ouyang, would you be able to verify this from your end?
(In reply to Zvi Cahana from comment #6)
> This was eventually addressed in
> https://github.com/kubevirt/kubevirt/pull/6713 (instead of in #6605 which
> was closed).
>
> @Guohua Ouyang, would you be able to verify this from your end?

Sure, I can verify the bug once a downstream build is ready.
Verified with build: HCO: [v4.10.0-552]

Steps:
1. Use the attached YAML file to create a VM with HPP.
2. Check the VM status and the DV.

hpp:
$ oc get vm
NAME                AGE   STATUS    READY
rhel7-upper-leech   3s    Stopped   False
$ oc get dv
NAME                PHASE                  PROGRESS   RESTARTS   AGE
rhel7-upper-leech   WaitForFirstConsumer   N/A                   14s

ocs: create a VM with OCS
$ oc get vm
NAME                AGE   STATUS    READY
rhel7-upper-leech   90s   Stopped   False
vm-rhel             1s    Stopped   False
$ oc get dv
NAME                PHASE                  PROGRESS   RESTARTS   AGE
fedora-dv           ImportScheduled        N/A                   5s
rhel7-upper-leech   WaitForFirstConsumer   N/A                   94s

nfs:
$ oc get vm
NAME                AGE     STATUS         READY
rhel7-upper-leech   4m17s   Stopped        False
vm-nfs              3s      Provisioning   False
vm-rhel             2m48s   Provisioning   False
$ oc get dv
NAME                     PHASE                  PROGRESS   RESTARTS   AGE
fedora-dv                ImportInProgress       72.71%                3m23s
rhel7-upper-leech        WaitForFirstConsumer   N/A                   4m52s
vm-rhel-rootdisk-op0zj   ImportInProgress       14.17%                38s

After the import finished:
$ oc get dv
NAME                     PHASE                  PROGRESS   RESTARTS   AGE
fedora-dv                Succeeded              100.0%                6m46s
rhel7-upper-leech        WaitForFirstConsumer   N/A                   8m15s
vm-rhel-rootdisk-op0zj   Succeeded              100.0%                4m1s

Check the VM status:
$ oc get vm
NAME                AGE     STATUS    READY
rhel7-upper-leech   34m     Stopped   False
vm-nfs              5m55s   Stopped   False
vm-rhel             33m     Stopped   False

Moving to VERIFIED.
Also checked the status in the UI; it shows the same result as the command line.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Virtualization 4.10.0 Images security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:0947