Bug 2013160 - Create an offline VM with storageClass HPP is always in 'Provisioning' status
Summary: Create an offline VM with storageClass HPP is always in 'Provisioning' status
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Virtualization
Version: 4.9.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.10.0
Assignee: Zvi Cahana
QA Contact: zhe peng
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-10-12 09:18 UTC by Guohua Ouyang
Modified: 2022-03-16 15:56 UTC
CC List: 4 users

Fixed In Version: virt-operator-container-v4.10.0-142 hco-bundle-registry-container-v4.10.0-479
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-03-16 15:56:11 UTC
Target Upstream Version:
Embargoed:


Attachments
VM yaml (6.40 KB, application/octet-stream), attached 2021-10-12 09:18 UTC by Guohua Ouyang


Links
Github kubevirt/kubevirt pull 6713 (Merged): Add a WaitingForVolumeBinding printable status, last updated 2021-11-07 08:37:55 UTC
Red Hat Product Errata RHSA-2022:0947, last updated 2022-03-16 15:56:20 UTC

Description Guohua Ouyang 2021-10-12 09:18:21 UTC
Created attachment 1832136 [details]
VM yaml

Description of problem:
Creating an offline VM with the HPP storageClass leaves it permanently in 'Provisioning' status, because the backing DataVolume is in 'WaitForFirstConsumer' status.

"Provisioning" implies that something is still progressing in the backend, but in fact nothing is running there. Should we change the status? E.g., change the status to "Stopped" and the condition (shown on the UI) to "Pending"?

$ oc get vm
NAME                          AGE     STATUS         READY
rhel7-upper-leech             24h     Provisioning   False

In the VM yaml:
status -> printableStatus -> "Provisioning"
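
For a quick command-line check of that same field, a jsonpath query along these lines should work (illustrative, using the VM name from the listing above):

$ oc get vm rhel7-upper-leech -o jsonpath='{.status.printableStatus}{"\n"}'
Provisioning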



Version-Release number of selected component (if applicable):
CNV 4.9.0

How reproducible:
100%

Steps to Reproduce:
1. Create a VM with HPP from the UI (don't check `Start this virtual machine after creation` on the Review page); a rough CLI-equivalent manifest is sketched below the steps.
2. Check the VM status either in the UI or on the command line.
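
For reference, a manifest along the following lines should reproduce the same state from the command line (a sketch only: the VM/DataVolume names, image source URL, disk size, and the storage class name hostpath-provisioner are assumed placeholders, not values taken from the attached yaml):

$ cat <<'EOF' | oc create -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rhel7-offline-demo              # placeholder name
spec:
  running: false                        # "offline": do not start the VM after creation
  dataVolumeTemplates:
  - metadata:
      name: rhel7-offline-demo-rootdisk
    spec:
      storage:
        storageClassName: hostpath-provisioner    # assumed HPP storage class name
        resources:
          requests:
            storage: 20Gi
      source:
        registry:
          url: docker://registry.example.com/rhel7:latest   # placeholder image source
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
      - name: rootdisk
        dataVolume:
          name: rhel7-offline-demo-rootdisk
EOF

With running: false and a WaitForFirstConsumer storage class, no pod ever consumes the PVC, so the DataVolume stays in WaitForFirstConsumer and (before the fix) the VM stays in 'Provisioning'.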

Actual results:
The VM status stays in 'Provisioning' indefinitely.

Expected results:
VM status is "Stopped".

Additional info:

Comment 1 sgott 2021-10-13 12:53:06 UTC
Guohua, can you confirm this only applies to WFFC and HPP? Can you double check that this doesn't affect other storage classes?

Comment 3 Zvi Cahana 2021-10-13 13:25:32 UTC
By reviewing the related code, I can confirm this behavior will apply to any WFFC-enabled storage class, and not just with HPP.
I agree it makes more sense to set the status to "Stopped" in that case. I'll work on a fix soon.
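
For reference, one way to see which storage classes on a cluster are WFFC-enabled (illustrative command; any class whose volumeBindingMode is WaitForFirstConsumer is expected to show this behavior):

$ oc get storageclass -o custom-columns=NAME:.metadata.name,BINDINGMODE:.volumeBindingMode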

Comment 4 Guohua Ouyang 2021-10-13 13:32:11 UTC
(In reply to sgott from comment #1)
> Guohua, can you confirm this only applies to WFFC and HPP? Can you double
> check that this doesn't affect other storage classes?

It applies to other storage classes (OCS and NFS) as well. The VM status is "Provisioning" before the image is imported, and it turns into "Stopped" after the image import finishes.

But for HPP, the import never happens if the VM is never started, so the VM stays in "Provisioning" status forever.
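
One way to confirm that nothing is actually progressing in the backend is to look at the PVC that CDI creates for the DataVolume (a sketch, assuming the PVC shares the DataVolume's name, as it normally does): it stays Pending, with an event noting that it is waiting for a first consumer before binding.

$ oc get pvc rhel7-upper-leech
$ oc describe pvc rhel7-upper-leech | grep -i "waiting for first consumer"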

Comment 5 Zvi Cahana 2021-10-14 09:34:15 UTC
Addressed in https://github.com/kubevirt/kubevirt/pull/6605

Comment 6 Zvi Cahana 2021-11-07 08:37:56 UTC
This was eventually addressed in https://github.com/kubevirt/kubevirt/pull/6713 (instead of in #6605 which was closed).

@Guohua Ouyang, would you be able to verify this from your end?

Comment 7 Guohua Ouyang 2021-11-08 07:10:54 UTC
(In reply to Zvi Cahana from comment #6)
> This was eventually addressed in
> https://github.com/kubevirt/kubevirt/pull/6713 (instead of in #6605 which
> was closed).
> 
> @Guohua Ouyang, would you be able to verify this from your end?

Sure, I can verify the bug once the downstream build is ready.

Comment 8 zhe peng 2022-01-05 06:23:40 UTC
Verified with build: HCO [v4.10.0-552]

Steps:
1. Use the attached yaml file to create a VM with HPP.
2. Check the VM status and DV.
$ oc get vm
NAME                AGE   STATUS    READY
rhel7-upper-leech   3s    Stopped   False

$ oc get dv
NAME                PHASE                  PROGRESS   RESTARTS   AGE
rhel7-upper-leech   WaitForFirstConsumer   N/A                   14s

OCS: create a VM with OCS
$ oc get vm
NAME                AGE   STATUS    READY
rhel7-upper-leech   90s   Stopped   False
vm-rhel             1s    Stopped   False

$ oc get dv
NAME                PHASE                  PROGRESS   RESTARTS   AGE
fedora-dv           ImportScheduled        N/A                   5s
rhel7-upper-leech   WaitForFirstConsumer   N/A                   94s

NFS:
$ oc get vm
NAME                AGE     STATUS         READY
rhel7-upper-leech   4m17s   Stopped        False
vm-nfs              3s      Provisioning   False
vm-rhel             2m48s   Provisioning   False

$ oc get dv
NAME                     PHASE                  PROGRESS   RESTARTS   AGE
fedora-dv                ImportInProgress       72.71%                3m23s
rhel7-upper-leech        WaitForFirstConsumer   N/A                   4m52s
vm-rhel-rootdisk-op0zj   ImportInProgress       14.17%                38s

After the import finished:
$ oc get dv
NAME                     PHASE                  PROGRESS   RESTARTS   AGE
fedora-dv                Succeeded              100.0%                6m46s
rhel7-upper-leech        WaitForFirstConsumer   N/A                   8m15s
vm-rhel-rootdisk-op0zj   Succeeded              100.0%                4m1s

Check VM status:
$ oc get vm
NAME                AGE     STATUS    READY
rhel7-upper-leech   34m     Stopped   False
vm-nfs              5m55s   Stopped   False
vm-rhel             33m     Stopped   False

Moving to VERIFIED.

Comment 9 zhe peng 2022-01-05 09:19:48 UTC
Also checked the status in the UI; got the same result as on the command line.

Comment 14 errata-xmlrpc 2022-03-16 15:56:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Virtualization 4.10.0 Images security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0947

