Bug 1145582
| Summary: | Failed to import sparse qcow2 disk image after converting to rhevm | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | zhoujunqin <juzhou> |
| Component: | libguestfs | Assignee: | Richard W.M. Jones <rjones> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 7.1 | CC: | codong, dyuan, juzhou, mbooth, mzhan, ptoscano, tzheng |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | V2V | | |
| Fixed In Version: | libguestfs-1.27.56-1.1.el7 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-03-05 13:45:31 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Created attachment 940368 [details]
log file for guest
Created attachment 940369 [details]
log file for guest
I cannot reproduce this one. I successfully imported a guest
using the -of qcow2 and -oa sparse options.
Can you log into the RHEV-M server, and look for a file
/var/log/ovirt-engine/engine.log
(You will have to find the engine.log* file that corresponds
to the time you were doing the import)
This file should contain the actual import error. The errors
that are printed in the GUI are generally useless.
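The log hunt described above can be sketched as a small helper (a sketch, not from this bug report: `/var/log/ovirt-engine` is the oVirt default directory, and the search string is an assumption based on the message shown in the GUI "Events" tab; `zgrep` is used so rotated, gzip-compressed `engine.log.*.gz` files are searched too):

```shell
# Sketch: search the current and rotated engine logs for import failures.
# The directory default and the search string are assumptions; adjust to
# match your installation and the exact error text.
search_engine_logs() {
    logdir=${1:-/var/log/ovirt-engine}
    # zgrep reads both plain and gzip-compressed files, so rotated
    # engine.log.*.gz files are covered without decompressing them first
    zgrep -h "Failed to import" "$logdir"/engine.log* 2>/dev/null | tail -n 20
}

# Usage on the RHEV-M server:
#   search_engine_logs            # searches /var/log/ovirt-engine
```

Matching the timestamps of the hits against the time of the v2v run identifies the `engine.log*` file that corresponds to the failed import.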
> also failed to import Vm b-newnew to Data Center Default, Cluster Default.
Not sure what this bit means ..?
Created attachment 940644 [details]
log from the rhevm server: /var/log/ovirt-engine/engine.log
(In reply to Richard W.M. Jones from comment #4)
> I cannot reproduce this one. I successfully imported a guest
> using the -of qcow2 and -oa sparse options.
>
> Can you log into the RHEV-M server, and look for a file
> /var/log/ovirt-engine/engine.log
> (You will have to find the engine.log* file that corresponds
> to the time you were doing the import)
>
> This file should contain the actual import error. The errors
> that are printed in the GUI are generally useless.

Please see the attachment in Comment 5.

> > also failed to import Vm b-newnew to Data Center Default, Cluster Default.
>
> Not sure what this bit means ..?

I mean it failed to import VM b-newnew; when I check the "Events" tab, I get this message:

    Message: Failed to import Vm b-newnew to Data Center Default, Cluster Default

Now you can see this message in the attachment in Comment 5.

I also managed to reproduce the problem, but the error message in engine.log is useless so I can't tell what is happening either. I raised the issue on the rhev-devel mailing list.

Can you also run the following commands and let me know the output:

    mkdir /tmp/mnt
    mount 10.66.90.115:/vol/v2v_auto/auto_export /tmp/mnt
    ls -lR /tmp/mnt
    umount /tmp/mnt

(In reply to Richard W.M. Jones from comment #7)
> Can you also run the following command and let me know the output:
>
> mkdir /tmp/mnt
> mount 10.66.90.115:/vol/v2v_auto/auto_export /tmp/mnt
> ls -lR /tmp/mnt
> umount /tmp/mnt

I tried on rhevm with another export NFS server, 10.66.6.8:/var/v2v_export (created by tzheng; it is easy to check the NFS configuration using this one).
First I can reproduce the bug with this export NFS server:

    # virt-v2v -o rhev -os 10.66.6.8:/var/v2v_export --network rhevm rhel7new -on rhel7new-today2 -of qcow2 -oa sparse
    [   0.0] Opening the source -i libvirt rhel7new
    [   0.0] Creating an overlay to protect the source from being modified
    [   0.0] Opening the overlay
    [   3.0] Initializing the target -o rhev -os 10.66.6.8:/var/v2v_export
    virt-v2v: warning: cannot write files to the NFS server as 36:36, even though we appear to be running as root.
    This probably means the NFS client or idmapd is not configured properly.
    You will have to chown the files that virt-v2v creates after the run, otherwise RHEV-M will not be able to import the VM.
    [   3.0] Inspecting the overlay
    [  13.0] Checking for sufficient free disk space in the guest
    [  13.0] Estimating space required on target for each disk
    [  13.0] Converting Red Hat Enterprise Linux Server release 7.0 (Maipo) to run on KVM
    This guest has virtio drivers installed.
    [  44.0] Mapping filesystem data to avoid copying unused and blank areas
    [  45.0] Closing the overlay
    [  45.0] Copying disk 1/1 to /tmp/v2v.rjzwux/e4883354-fa70-4314-bcc0-6ee12c39e3a2/images/5649d3a7-4025-45e6-99f9-d9682b82ee0f/f636aa62-3568-41f5-8b84-0bb69cd408a5 (qcow2) (100.00/100%)
    [ 418.0] Creating output metadata
    [ 418.0] Finishing off

Conversion result: successful.
Import result: failed.

Then I did the following steps as you said:

    # mkdir /tmp/mnt
    # mount 10.66.6.8:/var/v2v_export /tmp/mnt/
    # ls -lR /tmp/mnt/
    /tmp/mnt/:
    total 4
    -rwxr-xr-x. 1 nobody nobody    0 Sep 24 22:56 __DIRECT_IO_TEST__
    drwxr-xr-x. 5 nobody nobody 4096 Sep 26  2014 e4883354-fa70-4314-bcc0-6ee12c39e3a2

    /tmp/mnt/e4883354-fa70-4314-bcc0-6ee12c39e3a2:
    total 12
    drwxr-xr-x. 2 nobody nobody 4096 Sep 24 22:56 dom_md
    drwxr-xr-x. 3 nobody nobody 4096 Sep 26  2014 images
    drwxr-xr-x. 4 nobody nobody 4096 Sep 24 22:56 master

    /tmp/mnt/e4883354-fa70-4314-bcc0-6ee12c39e3a2/dom_md:
    total 8
    -rw-rw----. 1 nobody nobody        0 Sep 24 22:56 ids
    -rw-rw----. 1 nobody nobody 16777216 Sep 24 22:56 inbox
    -rw-rw----. 1 nobody nobody      512 Sep 24 22:56 leases
    -rw-r--r--. 1 nobody nobody      355 Sep 24 22:56 metadata
    -rw-rw----. 1 nobody nobody 16777216 Sep 24 22:56 outbox

    /tmp/mnt/e4883354-fa70-4314-bcc0-6ee12c39e3a2/images:
    total 4
    drwxr-xr-x. 2 nobody nobody 4096 Sep 26  2014 5649d3a7-4025-45e6-99f9-d9682b82ee0f

    /tmp/mnt/e4883354-fa70-4314-bcc0-6ee12c39e3a2/images/5649d3a7-4025-45e6-99f9-d9682b82ee0f:
    total 4082380
    -rw-rw-rw-. 1 nobody nobody 4180475904 Sep 26  2014 f636aa62-3568-41f5-8b84-0bb69cd408a5
    -rw-r--r--. 1 nobody nobody        296 Sep 26  2014 f636aa62-3568-41f5-8b84-0bb69cd408a5.meta

    /tmp/mnt/e4883354-fa70-4314-bcc0-6ee12c39e3a2/master:
    total 8
    drwxr-xr-x. 2 nobody nobody 4096 Sep 24 22:56 tasks
    drwxr-xr-x. 3 nobody nobody 4096 Sep 26  2014 vms

    /tmp/mnt/e4883354-fa70-4314-bcc0-6ee12c39e3a2/master/tasks:
    total 0

    /tmp/mnt/e4883354-fa70-4314-bcc0-6ee12c39e3a2/master/vms:
    total 4
    drwxr-xr-x. 2 nobody nobody 4096 Sep 26  2014 db2c5a2f-d6ac-4b56-ab4d-cac8f29a4844

    /tmp/mnt/e4883354-fa70-4314-bcc0-6ee12c39e3a2/master/vms/db2c5a2f-d6ac-4b56-ab4d-cac8f29a4844:
    total 8
    -rw-r--r--. 1 nobody nobody 4628 Sep 26  2014 db2c5a2f-d6ac-4b56-ab4d-cac8f29a4844.ovf
    # umount /tmp/mnt/

Later I will attach the related log again, thanks.

Created attachment 940985 [details]
new .meta log file
Created attachment 940986 [details]
new .ovf log file
Created attachment 940987 [details]
new engine log file
(In reply to zhoujunqin from comment #11)
> Created attachment 940987 [details]
> new engine log file

Apparently engine.log is not sufficient to diagnose this problem. According to Federico:

    you should check the vdsm logs (of the SPM host) and search for the
    relevant error "low level Image copy failed" @ 2014-09-24 02:42:53
    (if time is synchronized).

I'm not sure I understand what the "SPM host" is. Do you have a RHEV-H server in this setup? It may .. somewhere .. have the logs that we need.

Created attachment 941372 [details]
part of vdsm.log
Hi rjones,

I imported the guest rhel7new-today2 on the rhevm server again, running the following command while the import was in progress:

    # tailf vdsm.log

then attached it here.

From the vdsm log, it shows that the qcow2 image uses QCOW version 3, which is not supported by this qemu version:
    /00000002-0002-0002-0002-0000000002c4/e4883354-fa70-4314-bcc0-6ee12c39e3a2/images/5649d3a7-4025-45e6-99f9-d9682b82ee0f/f636aa62-3568-41f5-8b84-0bb69cd408a5 to /rhev/data-center/mnt/10.66.90.115:_vol_v2v__auto_nfs__data/946b78c7-b21e-4d88-b4ee-fee464cc4ce9/images/5649d3a7-4025-45e6-99f9-d9682b82ee0f/f636aa62-3568-41f5-8b84-0bb69cd408a5 DONE
    dc8e8ef8-4ba3-4e69-b186-d0a96b4df605::ERROR::2014-09-26 04:04:50,205::image::772::Storage.Image::(copyCollapsed) Unexpected error
    Traceback (most recent call last):
      File "/usr/share/vdsm/storage/image.py", line 760, in copyCollapsed
    CopyImageError: low level Image copy failed: ('General Storage Exception: (\'rc: 1, err: ["\\\'image\\\' uses a qcow2 feature which is not supported by this qemu version: QCOW version 3", "Could not open \\\'/rhev/data-center/00000002-0002-0002-0002-0000000002c4/e4883354-fa70-4314-bcc0-6ee12c39e3a2/images/5649d3a7-4025-45e6-99f9-d9682b82ee0f/f636aa62-3568-41f5-8b84-0bb69cd408a5\\\': Operation not supported", "Could not open \\\'/rhev/data-center/00000002-0002-0002-0002-0000000002c4/e4883354-fa70-4314-bcc0-6ee12c39e3a2/images/5649d3a7-4025-45e6-99f9-d9682b82ee0f/f636aa62-3568-41f5-8b84-0bb69cd408a5\\\'"]\',)',)
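The key error is that the copied image is QCOW2 version 3 (qemu-img `compat=1.1`), while the destination host's qemu only understands version 2 (`compat=0.10`). As a quick local check that does not need qemu-img, the version can be read straight from the image header. This is a sketch based on the published qcow2 layout (a 4-byte magic `Q` `F` `I` `0xfb` followed by a big-endian 32-bit version field), not a tool from this bug report:

```shell
# Sketch: print the QCOW version of an image (2 => compat 0.10, accepted
# by the old qemu; 3 => compat 1.1, which triggers the error above).
qcow2_version() {
    # The qcow2 header stores the version as a big-endian uint32 at
    # offset 4. Real versions are tiny (2 or 3), so reading only the
    # low byte at offset 7 is sufficient here.
    od -An -tu1 -j7 -N1 "$1" | tr -d ' '
}

# Usage (path is a placeholder):
#   qcow2_version /var/lib/libvirt/images/rhel7new.img
```

An image that reports 3 cannot be opened by the older qemu; rewriting it (e.g. with `qemu-img amend -f qcow2 -o compat=0.10` where qemu-img supports amend) or re-converting with the fixed virt-v2v build avoids the failure.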
(In reply to tingting zheng from comment #15)
> From the vdsm log, it showed that qcow2 is not supported by this qemu version:
> CopyImageError: low level Image copy failed: ('General Storage Exception:
> (\'rc: 1, err: ["\\\'image\\\' uses a qcow2 feature which is not supported
> by this qemu version: QCOW version 3", ...

Oh that's unexpected ... thanks for digging this error out.

Should be fixed in virt-v2v >= 1.27.56:
https://github.com/libguestfs/libguestfs/commit/b03c2a971ae66e6bfb66090b2860cfee89241f93

I can reproduce this issue as described in Comment 0.

Then try to verify it with the new build:

    virt-v2v-1.27.56-1.1.el7.x86_64
    libguestfs-1.27.56-1.1.el7.x86_64

Steps:

    # virt-v2v -o rhev -os 10.66.90.115:/vol/v2v_auto/auto_export --network rhevm rhel7.0-3 -on juzhou-304 -of qcow2 -oa sparse
    [   0.0] Opening the source -i libvirt rhel7.0-3
    [   0.0] Creating an overlay to protect the source from being modified
    [   1.0] Opening the overlay
    [   5.0] Initializing the target -o rhev -os 10.66.90.115:/vol/v2v_auto/auto_export
    virt-v2v: warning: cannot write files to the NFS server as 36:36, even though we appear to be running as root.
    This probably means the NFS client or idmapd is not configured properly.
    You will have to chown the files that virt-v2v creates after the run, otherwise RHEV-M will not be able to import the VM.
    [   5.0] Inspecting the overlay
    [  15.0] Checking for sufficient free disk space in the guest
    [  15.0] Estimating space required on target for each disk
    [  15.0] Converting Red Hat Enterprise Linux Server release 7.0 (Maipo) to run on KVM
    This guest has virtio drivers installed.
    [  47.0] Mapping filesystem data to avoid copying unused and blank areas
    [  48.0] Closing the overlay
    [  49.0] Copying disk 1/1 to /tmp/v2v.HaPpfk/46adae8a-63c1-40f8-b25a-f02deb1a5160/images/247420f0-0904-4851-a8a1-874d980b8d5d/4c9a8bae-ce7c-4e01-aa49-02549464970a (qcow2) (100.00/100%)
    [ 127.0] Creating output metadata
    [ 127.0] Finishing off

Result: the conversion succeeded, the guest was imported successfully, and the guest boots up successfully. Given the above steps, moving this bug from ON_QA to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0303.html
Description of problem:
Failed to import sparse qcow2 disk image after converting to rhevm.

Version-Release number of selected component (if applicable):
libguestfs-1.27.52-1.1.el7.x86_64
virt-v2v-1.27.52-1.1.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a healthy guest in shutdown status.

        # virsh list --all
         Id    Name                           State
        ----------------------------------------------------
         -     rhel7new                       shut off

        # qemu-img info /var/lib/libvirt/images/rhel7new.img
        image: /var/lib/libvirt/images/rhel7new.img
        file format: qcow2
        virtual size: 8.0G (8589934592 bytes)
        disk size: 4.1G
        cluster_size: 65536
        Format specific information:
            compat: 0.10

2. Use virt-v2v to convert the guest to the rhevm server.

        # virt-v2v -o rhev -os 10.66.90.115:/vol/v2v_auto/auto_export --network rhevm rhel7new -on rhel7new-new
        [   0.0] Opening the source -i libvirt rhel7new
        [   0.0] Creating an overlay to protect the source from being modified
        [   0.0] Opening the overlay
        [   3.0] Initializing the target -o rhev -os 10.66.90.115:/vol/v2v_auto/auto_export
        virt-v2v: warning: cannot write files to the NFS server as 36:36, even though we appear to be running as root.
        This probably means the NFS client or idmapd is not configured properly.
        You will have to chown the files that virt-v2v creates after the run, otherwise RHEV-M will not be able to import the VM.
        [   3.0] Inspecting the overlay
        [  13.0] Checking for sufficient free disk space in the guest
        [  13.0] Estimating space required on target for each disk
        [  13.0] Converting Red Hat Enterprise Linux Server release 7.0 (Maipo) to run on KVM
        This guest has virtio drivers installed.
        [  43.0] Mapping filesystem data to avoid copying unused and blank areas
        [  44.0] Closing the overlay
        [  44.0] Copying disk 1/1 to /tmp/v2v.eOxo8k/46adae8a-63c1-40f8-b25a-f02deb1a5160/images/57a96ac9-0abc-42fd-9ba3-5dd9a56b5e61/61c632f6-0a66-49ed-8c2f-7bbad5d4c6ab (qcow2) (100.00/100%)
        [ 169.0] Creating output metadata
        [ 169.0] Finishing off

3. Log in to the rhevm server using the Administration Portal and try to import the domain "rhel7new-new".

Actual results:
Failed to import VM rhel7new-new to Data Center Default, Cluster Default.

Expected results:
Domain "rhel7new-new" can be imported successfully.

Additional info:
1. Another way to reproduce this issue:

        # virt-v2v -o rhev -os 10.66.90.115:/vol/v2v_auto/auto_export --network rhevm b -on b-newnew -of qcow2 -oa sparse
        [   0.0] Opening the source -i libvirt b
        [   0.0] Creating an overlay to protect the source from being modified
        [   0.0] Opening the overlay
        [   3.0] Initializing the target -o rhev -os 10.66.90.115:/vol/v2v_auto/auto_export
        virt-v2v: warning: cannot write files to the NFS server as 36:36, even though we appear to be running as root.
        This probably means the NFS client or idmapd is not configured properly.
        You will have to chown the files that virt-v2v creates after the run, otherwise RHEV-M will not be able to import the VM.
        [   3.0] Inspecting the overlay
        [  12.0] Checking for sufficient free disk space in the guest
        [  12.0] Estimating space required on target for each disk
        [  12.0] Converting Red Hat Enterprise Linux Server release 6.6 Beta (Santiago) to run on KVM
        This guest has virtio drivers installed.
        [  41.0] Mapping filesystem data to avoid copying unused and blank areas
        [  43.0] Closing the overlay
        [  43.0] Copying disk 1/1 to /tmp/v2v.r45tuJ/46adae8a-63c1-40f8-b25a-f02deb1a5160/images/30dff79f-a296-4bcd-b4c1-8eefe2ee4f48/93070f73-9594-4a20-b10f-d4f7a2a261d2 (qcow2) (100.00/100%)
        [ 124.0] Creating output metadata
        [ 124.0] Finishing off

   It also failed to import Vm b-newnew to Data Center Default, Cluster Default.

2. Will attach the log from the rhevm server (for guest rhel7new-new).