Bug 1809409 - Failed to create volume from vm snapshot image
Summary: Failed to create volume from vm snapshot image
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 16.0 (Train)
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Pablo Caruana
QA Contact: Tzach Shefi
Docs Contact: Chuck Copello
URL:
Whiteboard: libvirt_OSP_INT
Depends On:
Blocks:
 
Reported: 2020-03-03 05:09 UTC by chhu
Modified: 2020-04-27 02:52 UTC (History)
8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-04-24 07:56:22 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
cinder-volume.log (39.36 KB, text/plain)
2020-03-03 05:09 UTC, chhu
no flags
guest.xml (5.75 KB, text/plain)
2020-04-21 12:13 UTC, chhu
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1673327 0 unspecified CLOSED Snap of instance booted from a volume, why allow (without alerting the user) if unsupported ? 2023-03-21 19:11:10 UTC

Description chhu 2020-03-03 05:09:57 UTC
Created attachment 1667114 [details]
cinder-volume.log

Description of problem:
Boot a VM from a volume, take a VM snapshot (create an image from the VM), then creating a volume from the VM snapshot fails with an error.

Version-Release number of selected component (if applicable):
kernel-core-4.18.0-177.el8.x86_64
libvirt-daemon-kvm-6.0.0-4.module+el8.2.0+5642+838f3513.x86_64
qemu-kvm-core-4.2.0-8.module+el8.2.0+5607+dc756904.x86_64
openstack-cinder-15.0.2-0.20200123220928.900f769.el8ost.noarch


How reproducible:
100%

Steps to Reproduce:
1. Set up an OSP 16.0 environment based on an NFS backend, update to RHEL-AV 8.2.0
2. Boot a VM from volume
(overcloud) [stack@dell-per730 ~]$ openstack server list
+--------------------------------------+--------------------------+---------+------------------------+-------+--------+
| ID                                   | Name                     | Status  | Networks               | Image | Flavor |
+--------------------------------------+--------------------------+---------+------------------------+-------+--------+
| cb8edfe4-6c09-4305-abc3-85830f1ee80e | vm-r8-qcow2-vol-hugepage | ACTIVE  | default=192.168.34.142 |       |        |
+--------------------------------------+--------------------------+---------+------------------------+-------+--------+
3. Create snapshot for the VM
(overcloud) [stack@dell-per730 ~]$ openstack server image create --name vm-r8-qcow2-vol-hugepage-s1 --wait vm-r8-qcow2-vol-hugepage
(overcloud) [stack@dell-per730 ~]$ openstack image list
+--------------------------------------+-----------------------------+--------+
| ID                                   | Name                        | Status |
+--------------------------------------+-----------------------------+--------+
| 316c5db4-88b6-4073-b7dd-9da8c1069b57 | vm-r8-qcow2-vol-hugepage-s1 | active |
+--------------------------------------+-----------------------------+--------+

4. Try to create a volume from the image; it hits an error
(overcloud) [stack@dell-per730 ~]$ openstack volume create vm-r8-qcow2-vol-hugepage-s1-vol --size 10 --image vm-r8-qcow2-vol-hugepage-s1

(overcloud) [stack@dell-per730-44 ~]$ openstack volume list
+--------------------------------------+---------------------------------+-----------+------+---------------------------------------------------+
| ID                                   | Name                            | Status    | Size | Attached to                                       |
+--------------------------------------+---------------------------------+-----------+------+---------------------------------------------------+
| 200afac5-db5e-4c0a-a3e8-26e967e03116 | vm-r8-qcow2-vol-hugepage-s1-vol | error     |   10 |                                                   |

5. Check the cinder-volume log
---------------------------------------------------------------------------
2020-03-03 03:25:14.775 89 ERROR cinder.volume.volume_utils [req-ba0b11d2-eb38-4f6a-80a2-1e9af5fdb0e1 3586ec72e3e6457fad244cc73517cef6 140f7260dc444b60a8bd6bcbefca6fa2 - default default] Failed to copy image 316c5db4-88b6-4073-b7dd-9da8c1069b57 to volume: 200afac5-db5e-4c0a-a3e8-26e967e03116: oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
Command: qemu-img convert -O raw -f qcow2 /var/lib/cinder/conversion/tmpnulgc3_7hostgroup@tripleo_nfs /var/lib/cinder/mnt/07b80119d40eda06c63650e0d74e0ba5/volume-200afac5-db5e-4c0a-a3e8-26e967e03116
Exit code: 1
Stdout: ''
Stderr: "qemu-img: Could not open '/var/lib/cinder/conversion/tmpnulgc3_7hostgroup@tripleo_nfs': Image is not in qcow2 format\n"
-----------------------------------------------------------------------------
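The convert failure follows from the snapshot image being empty (see bug 1673327): cinder downloads a 0-byte file into the conversion directory, and qemu-img cannot open an empty file as qcow2. A minimal local sketch of the same failure, assuming qemu-img is installed (the paths are illustrative, not the cinder-generated conversion file):

```shell
#!/bin/sh
# Local reproduction sketch (assumption: qemu-img is installed; paths are
# illustrative). A 0-byte file, like the empty Glance image produced by
# snapshotting a volume-backed instance, is not a valid qcow2 image, so
# the same forced-format convert that cinder runs fails.
: > /tmp/empty-snap.img
qemu-img convert -O raw -f qcow2 /tmp/empty-snap.img /tmp/out.raw \
    || echo "convert failed, as in the cinder-volume log"
```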

Actual results:
In step 4, creating a volume from the VM snapshot image failed

Expected results:
In step 4, the volume is created successfully

Additional info:
Bug 1798148 - Regression: Requested operation is not valid: format of backing image ... was not specified in the image metadata

Comment 1 Luigi Toscano 2020-03-03 10:49:27 UTC
I think that RHEL-AV 8.2.0 is not supported yet for OSP usage. That said, could you please check whether your scenario matches what is described in the following bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1670834#c4

which also generated:
https://bugzilla.redhat.com/show_bug.cgi?id=1673327#c1

If it does, we can likely close this as NOTABUG.

Comment 2 chhu 2020-03-03 12:12:23 UTC
Hi, Luigi

As booting a VM from an image is blocked by bug 1802873, the VM is booted from a volume and then snapshotted.

It matches the scenario in https://bugzilla.redhat.com/show_bug.cgi?id=1673327#c1

Thank you!

Regards,
Chenli Hu

Comment 3 Luigi Toscano 2020-03-03 13:24:10 UTC
(In reply to chhu from comment #2)
> Hi, Luigi
> 
> As boot VM from image is blocked by bug1802873, the VM is boot from volume,
> then do snapshot,

If you are not testing specifically RHEL-AV 8.2 (which, again, is not supported at this point), I would advise you to recheck with the shipped version of the virtualization packages.
 
> it match the scenario in
> https://bugzilla.redhat.com/show_bug.cgi?id=1673327#c1

So do you see, in addition to the 0 size image, a new volume too which can be booted (as described there)? If it's the case, we can close this as NOTABUG.

Comment 4 chhu 2020-03-05 05:55:29 UTC
(In reply to Luigi Toscano from comment #3)
> (In reply to chhu from comment #2)
> > Hi, Luigi
> > 
> > As boot VM from image is blocked by bug1802873, the VM is boot from volume,
> > then do snapshot,
> 
> If you are not testing specifically RHEL-AV 8.2 (which, again, it is not
> supported at this point), I would advise you to recheck with the shipped
> version of virtualization packages.

Thank you! I'm testing RHEL-AV 8.2; I'll try again with OSP 16.1
when its nova_libvirt and nova_compute images ship with RHEL-AV 8.2

>  
> > it match the scenario in
> > https://bugzilla.redhat.com/show_bug.cgi?id=1673327#c1
> 
> So do you see, in addition to the 0 size image, a new volume too which can
> be booted (as described there)? If it's the case, we can close this as
> NOTABUG.

After image-create, no new volume is created, as checked with `openstack volume list`.

Comment 5 chhu 2020-03-09 09:43:39 UTC
> > > it match the scenario in
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1673327#c1
> > 
> > So do you see, in addition to the 0 size image, a new volume too which can
> > be booted (as described there)? If it's the case, we can close this as
> > NOTABUG.
> 
> After image-create, the new volume is not created, checking by `openstack
> volume list`.

After image-create, checking `openstack volume snapshot list` shows there is a volume snapshot.

In Description step 4, `openstack volume create` is run from the VM snapshot image (the VM was
booted from a volume). I think the command needs to check the image size and report an error
on the command line or in the cinder-volume log if the size is 0. If you won't fix this,
you can close this bug as NOTABUG, thank you!
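The suggested guard could be sketched as a small wrapper. In a real deployment the size would come from `openstack image show <id> -f value -c size`; it is stubbed below so the sketch runs without a cloud:

```shell
#!/bin/sh
# Sketch of the proposed guard: refuse to create a volume from a 0-byte
# image. get_image_size is a stub standing in for:
#   openstack image show "$1" -f value -c size
get_image_size() {
    echo 0   # stubbed: the 0-byte snapshot image from the Description
}

image=vm-r8-qcow2-vol-hugepage-s1
size=$(get_image_size "$image")
if [ "$size" -eq 0 ]; then
    echo "error: image $image has size 0 bytes; refusing volume create"
else
    openstack volume create "${image}-vol" --size 10 --image "$image"
fi
```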

Regards,
Chenli Hu

Comment 6 chhu 2020-04-21 11:54:38 UTC
Tested on packages:
openstack-nova-compute-20.1.2-0.20200413153449.28324e6.el8ost.noarch
libvirt-daemon-kvm-6.0.0-16.module+el8.2.0+6139+d66dece5.x86_64


Test steps on web console:
1. Boot a VM (vm-r8-qcow2) from an image, with "create volume" not selected;
   log in to the VM, touch file: test
2. Take snapshot s1-vm-r8-qcow2 of the VM
3. List the VM snapshot in the image tab.
   View the details of the VM snapshot s1-vm-r8-qcow2; its size is 0 bytes
4. Check the nova-compute.log:
-----------------------------------------------------------------------------------------------
2020-04-21 11:45:54.538 7 INFO nova.virt.libvirt.driver [-] [instance: 26405ad7-8b2f-46b1-940b-082a2ce8f12b] Instance spawned successfully.
2020-04-21 11:45:54.539 7 INFO nova.compute.manager [req-41bf89d4-4276-4d38-a4d2-b5e95b15ec45 9132acbe5f2e497f84e9d277450d9ef0 881843549ac948d0b4bc0b90641d0411 - default default] [instance: 26405ad7-8b2f-46b1-940b-082a2ce8f12b] Took 6.04 seconds to spawn the instance on the hypervisor.
2020-04-21 11:45:54.644 7 INFO nova.compute.manager [req-71b2e55a-20e5-44b6-834b-57ffc4f4ee0e - - - - -] [instance: 26405ad7-8b2f-46b1-940b-082a2ce8f12b] During sync_power_state the instance has a pending task (spawning). Skip.
2020-04-21 11:45:54.667 7 INFO nova.compute.manager [req-41bf89d4-4276-4d38-a4d2-b5e95b15ec45 9132acbe5f2e497f84e9d277450d9ef0 881843549ac948d0b4bc0b90641d0411 - default default] [instance: 26405ad7-8b2f-46b1-940b-082a2ce8f12b] Took 6.82 seconds to build instance.
2020-04-21 11:45:56.601 7 WARNING nova.compute.manager [req-7a2c385e-085e-4ba4-86f2-ee93423f81e1 8e5ee28f43fd448d99d6277ea4803011 43dd708e4262427883e3d9912e20bee7 - default default] [instance: 26405ad7-8b2f-46b1-940b-082a2ce8f12b] Received unexpected event network-vif-plugged-701f3330-7e7f-41df-8afd-62e53c80d931 for instance with vm_state active and task_state None.
2020-04-21 11:48:13.143 7 INFO nova.compute.manager [req-c253e724-4b09-42b7-a7de-b2c09d553d7c 9132acbe5f2e497f84e9d277450d9ef0 881843549ac948d0b4bc0b90641d0411 - default default] [instance: 26405ad7-8b2f-46b1-940b-082a2ce8f12b] instance snapshotting
2020-04-21 11:48:13.216 7 INFO nova.virt.libvirt.driver [req-c253e724-4b09-42b7-a7de-b2c09d553d7c 9132acbe5f2e497f84e9d277450d9ef0 881843549ac948d0b4bc0b90641d0411 - default default] [instance: 26405ad7-8b2f-46b1-940b-082a2ce8f12b] Beginning live snapshot process
2020-04-21 11:48:13.707 7 INFO nova.virt.libvirt.driver [req-c253e724-4b09-42b7-a7de-b2c09d553d7c 9132acbe5f2e497f84e9d277450d9ef0 881843549ac948d0b4bc0b90641d0411 - default default] [instance: 26405ad7-8b2f-46b1-940b-082a2ce8f12b] Skipping quiescing instance: QEMU guest agent is not enabled.
2020-04-21 11:48:19.391 7 INFO nova.virt.libvirt.driver [req-c253e724-4b09-42b7-a7de-b2c09d553d7c 9132acbe5f2e497f84e9d277450d9ef0 881843549ac948d0b4bc0b90641d0411 - default default] [instance: 26405ad7-8b2f-46b1-940b-082a2ce8f12b] Snapshot extracted, beginning image upload
2020-04-21 11:48:33.822 7 INFO nova.virt.libvirt.driver [req-c253e724-4b09-42b7-a7de-b2c09d553d7c 9132acbe5f2e497f84e9d277450d9ef0 881843549ac948d0b4bc0b90641d0411 - default default] [instance: 26405ad7-8b2f-46b1-940b-082a2ce8f12b] Snapshot image upload complete
2020-04-21 11:48:33.822 7 INFO nova.compute.manager [req-c253e724-4b09-42b7-a7de-b2c09d553d7c 9132acbe5f2e497f84e9d277450d9ef0 881843549ac948d0b4bc0b90641d0411 - default default] [instance: 26405ad7-8b2f-46b1-940b-082a2ce8f12b] Took 20.67 seconds to snapshot the instance on the hypervisor
------------------------------------------------------------------------------------------------

5. Check the libvirt log:
------------------------------------------------------------------------------------------------
2020-04-21 11:48:13.215+0000: 31752: error : qemuDomainBlockJobAbort:17852 : invalid argument: disk vda does not have an active block job
2020-04-21 11:48:13.515+0000: 31753: error : qemuDomainBlockJobAbort:17852 : invalid argument: disk vda does not have an active block job
2020-04-21 11:48:14.290+0000: 31754: error : qemuMonitorJSONCheckError:412 : internal error: unable to execute QEMU command 'blockdev-del': Cannot find node libvirt-4-format
2020-04-21 11:48:14.292+0000: 31754: error : qemuMonitorJSONCheckError:412 : internal error: unable to execute QEMU command 'blockdev-del': Cannot find node libvirt-4-storage
-------------------------------------------------------------------------------------------------------

Comment 8 chhu 2020-04-21 12:13:41 UTC
Created attachment 1680533 [details]
guest.xml

