Bug 1145582 - Failed to import sparse qcow2 disk image after converting to rhevm
Summary: Failed to import sparse qcow2 disk image after converting to rhevm
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libguestfs
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Richard W.M. Jones
QA Contact: Virtualization Bugs
URL:
Whiteboard: V2V
Depends On:
Blocks:
 
Reported: 2014-09-23 10:26 UTC by zhoujunqin
Modified: 2015-03-05 13:45 UTC
CC List: 7 users

Fixed In Version: libguestfs-1.27.56-1.1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-03-05 13:45:31 UTC
Target Upstream Version:
Embargoed:


Attachments
log file for guest (4.52 KB, text/plain), 2014-09-23 10:27 UTC, zhoujunqin
log file for guest (296 bytes, text/plain), 2014-09-23 10:28 UTC, zhoujunqin
log got from rhevm server:/var/log/ovirt-engine/engine.log (15.39 KB, text/plain), 2014-09-24 02:07 UTC, zhoujunqin
new .meta log file (296 bytes, text/plain), 2014-09-25 07:04 UTC, zhoujunqin
new .ovf log file (4.52 KB, text/plain), 2014-09-25 07:04 UTC, zhoujunqin
new engine log file (17.80 KB, text/plain), 2014-09-25 07:05 UTC, zhoujunqin
part of vdsm.log (149.35 KB, text/plain), 2014-09-26 04:12 UTC, zhoujunqin


Links
Red Hat Product Errata RHBA-2015:0303 (normal, SHIPPED_LIVE): libguestfs bug fix and enhancement update, last updated 2015-03-05 17:34:44 UTC

Description zhoujunqin 2014-09-23 10:26:26 UTC
Description of problem:
Failed to import sparse qcow2 disk image after converting to rhevm.

Version-Release number of selected component (if applicable):
libguestfs-1.27.52-1.1.el7.x86_64
virt-v2v-1.27.52-1.1.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a healthy guest that is in shut-off state.
# virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     rhel7new                       shut off

# qemu-img info /var/lib/libvirt/images/rhel7new.img
image: /var/lib/libvirt/images/rhel7new.img
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 4.1G
cluster_size: 65536
Format specific information:
    compat: 0.10

2. Use virt-v2v to convert the guest to the RHEV export storage domain.

# virt-v2v -o rhev -os 10.66.90.115:/vol/v2v_auto/auto_export --network rhevm rhel7new -on rhel7new-new
[   0.0] Opening the source -i libvirt rhel7new
[   0.0] Creating an overlay to protect the source from being modified
[   0.0] Opening the overlay
[   3.0] Initializing the target -o rhev -os 10.66.90.115:/vol/v2v_auto/auto_export
virt-v2v: warning: cannot write files to the NFS server as 36:36, even
though we appear to be running as root. This probably means the NFS client
or idmapd is not configured properly.

You will have to chown the files that virt-v2v creates after the run,
otherwise RHEV-M will not be able to import the VM.
[   3.0] Inspecting the overlay
[  13.0] Checking for sufficient free disk space in the guest
[  13.0] Estimating space required on target for each disk
[  13.0] Converting Red Hat Enterprise Linux Server release 7.0 (Maipo) to run on KVM
This guest has virtio drivers installed.
[  43.0] Mapping filesystem data to avoid copying unused and blank areas
[  44.0] Closing the overlay
[  44.0] Copying disk 1/1 to /tmp/v2v.eOxo8k/46adae8a-63c1-40f8-b25a-f02deb1a5160/images/57a96ac9-0abc-42fd-9ba3-5dd9a56b5e61/61c632f6-0a66-49ed-8c2f-7bbad5d4c6ab (qcow2)
    (100.00/100%)
[ 169.0] Creating output metadata
[ 169.0] Finishing off

3. Log in to the RHEV-M Administration Portal and try to import VM "rhel7new-new" from the export domain.

Actual results:
Failed to import VM rhel7new-new to Data Center Default, Cluster Default.

Expected results:
Domain "rhel7new-new" can be imported successfully.

Additional info:
1. Another way to reproduce this issue:
# virt-v2v -o rhev -os 10.66.90.115:/vol/v2v_auto/auto_export --network rhevm b -on b-newnew -of qcow2 -oa sparse
[   0.0] Opening the source -i libvirt b
[   0.0] Creating an overlay to protect the source from being modified
[   0.0] Opening the overlay
[   3.0] Initializing the target -o rhev -os 10.66.90.115:/vol/v2v_auto/auto_export
virt-v2v: warning: cannot write files to the NFS server as 36:36, even
though we appear to be running as root. This probably means the NFS client
or idmapd is not configured properly.

You will have to chown the files that virt-v2v creates after the run,
otherwise RHEV-M will not be able to import the VM.
[   3.0] Inspecting the overlay
[  12.0] Checking for sufficient free disk space in the guest
[  12.0] Estimating space required on target for each disk
[  12.0] Converting Red Hat Enterprise Linux Server release 6.6 Beta (Santiago) to run on KVM
This guest has virtio drivers installed.
[  41.0] Mapping filesystem data to avoid copying unused and blank areas
[  43.0] Closing the overlay
[  43.0] Copying disk 1/1 to /tmp/v2v.r45tuJ/46adae8a-63c1-40f8-b25a-f02deb1a5160/images/30dff79f-a296-4bcd-b4c1-8eefe2ee4f48/93070f73-9594-4a20-b10f-d4f7a2a261d2 (qcow2)
    (100.00/100%)
[ 124.0] Creating output metadata
[ 124.0] Finishing off

also failed to import Vm b-newnew to Data Center Default, Cluster Default.

2. Will attach the log from the RHEV-M server (for guest rhel7new-new).

Comment 2 zhoujunqin 2014-09-23 10:27:47 UTC
Created attachment 940368 [details]
log file for guest

Comment 3 zhoujunqin 2014-09-23 10:28:16 UTC
Created attachment 940369 [details]
log file for guest

Comment 4 Richard W.M. Jones 2014-09-23 15:29:40 UTC
I cannot reproduce this one.  I successfully imported a guest
using the -of qcow2 and -oa sparse options.

Can you log into the RHEV-M server, and look for a file

  /var/log/ovirt-engine/engine.log

(You will have to find the engine.log* file that corresponds
to the time you were doing the import)

This file should contain the actual import error.  The errors
that are printed in the GUI are generally useless.
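
For illustration, one way to locate the right rotated log and the relevant entries (a hedged sketch; the exact file names and search string will vary with your setup):

# ls -lt /var/log/ovirt-engine/engine.log*
# grep -A 20 'rhel7new-new' /var/log/ovirt-engine/engine.log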

> also failed to import Vm b-newnew to Data Center Default, Cluster Default.

Not sure what this bit means ..?

Comment 5 zhoujunqin 2014-09-24 02:07:04 UTC
Created attachment 940644 [details]
log got from rhevm server:/var/log/ovirt-engine/engine.log

Comment 6 zhoujunqin 2014-09-24 02:12:26 UTC
(In reply to Richard W.M. Jones from comment #4)
> I cannot reproduce this one.  I successfully imported a guest
> using the -of qcow2 and -oa sparse options.
> 
> Can you log into the RHEV-M server, and look for a file
> 
>   /var/log/ovirt-engine/engine.log
> 
> (You will have to find the engine.log* file that corresponds
> to the time you were doing the import)
> 
> This file should contain the actual import error.  The errors
> that are printed in the GUI are generally useless.
> 

Please see the attachment in Comment 5.

> > also failed to import Vm b-newnew to Data Center Default, Cluster Default.
> 
> Not sure what this bit means ..?
I mean the import of VM b-newnew failed; when I check the "Events" tab, I see this message:
Message: Failed to import Vm b-newnew to Data Center Default, Cluster Default

Now you can see this message in attachment in Comment 5.

Comment 7 Richard W.M. Jones 2014-09-24 14:48:31 UTC
I also managed to reproduce the problem, but the error message in
engine.log is useless so I can't tell what is happening either.  I
raised the issue on rhev-devel mailing list.

Can you also run the following command and let me know the output:

mkdir /tmp/mnt
mount 10.66.90.115:/vol/v2v_auto/auto_export /tmp/mnt
ls -lR /tmp/mnt
umount /tmp/mnt

Comment 8 zhoujunqin 2014-09-25 07:02:34 UTC
(In reply to Richard W.M. Jones from comment #7)
> Can you also run the following command and let me know the output:
> 
> mkdir /tmp/mnt
> mount 10.66.90.115:/vol/v2v_auto/auto_export /tmp/mnt
> ls -lR /tmp/mnt
> umount /tmp/mnt

I tried on RHEV-M with another export NFS server, 10.66.6.8:/var/v2v_export (created by tzheng; it's easy to check the NFS configuration with this one).
First, I can reproduce the bug with this export NFS server:

# virt-v2v -o rhev -os 10.66.6.8:/var/v2v_export --network rhevm rhel7new -on rhel7new-today2 -of qcow2 -oa sparse
[   0.0] Opening the source -i libvirt rhel7new
[   0.0] Creating an overlay to protect the source from being modified
[   0.0] Opening the overlay
[   3.0] Initializing the target -o rhev -os 10.66.6.8:/var/v2v_export
virt-v2v: warning: cannot write files to the NFS server as 36:36, even 
though we appear to be running as root. This probably means the NFS client 
or idmapd is not configured properly.

You will have to chown the files that virt-v2v creates after the run, 
otherwise RHEV-M will not be able to import the VM.
[   3.0] Inspecting the overlay
[  13.0] Checking for sufficient free disk space in the guest
[  13.0] Estimating space required on target for each disk
[  13.0] Converting Red Hat Enterprise Linux Server release 7.0 (Maipo) to run on KVM
This guest has virtio drivers installed.
[  44.0] Mapping filesystem data to avoid copying unused and blank areas
[  45.0] Closing the overlay
[  45.0] Copying disk 1/1 to /tmp/v2v.rjzwux/e4883354-fa70-4314-bcc0-6ee12c39e3a2/images/5649d3a7-4025-45e6-99f9-d9682b82ee0f/f636aa62-3568-41f5-8b84-0bb69cd408a5 (qcow2)
    (100.00/100%)
[ 418.0] Creating output metadata
[ 418.0] Finishing off

Conversion Result: successful
Import Result: Failed.

Then I ran the following steps as you suggested.
# mkdir /tmp/mnt
# mount 10.66.6.8:/var/v2v_export /tmp/mnt/
# ls -lR /tmp/mnt/
/tmp/mnt/:
total 4
-rwxr-xr-x. 1 nobody nobody    0 Sep 24 22:56 __DIRECT_IO_TEST__
drwxr-xr-x. 5 nobody nobody 4096 Sep 26  2014 e4883354-fa70-4314-bcc0-6ee12c39e3a2

/tmp/mnt/e4883354-fa70-4314-bcc0-6ee12c39e3a2:
total 12
drwxr-xr-x. 2 nobody nobody 4096 Sep 24 22:56 dom_md
drwxr-xr-x. 3 nobody nobody 4096 Sep 26  2014 images
drwxr-xr-x. 4 nobody nobody 4096 Sep 24 22:56 master

/tmp/mnt/e4883354-fa70-4314-bcc0-6ee12c39e3a2/dom_md:
total 8
-rw-rw----. 1 nobody nobody        0 Sep 24 22:56 ids
-rw-rw----. 1 nobody nobody 16777216 Sep 24 22:56 inbox
-rw-rw----. 1 nobody nobody      512 Sep 24 22:56 leases
-rw-r--r--. 1 nobody nobody      355 Sep 24 22:56 metadata
-rw-rw----. 1 nobody nobody 16777216 Sep 24 22:56 outbox

/tmp/mnt/e4883354-fa70-4314-bcc0-6ee12c39e3a2/images:
total 4
drwxr-xr-x. 2 nobody nobody 4096 Sep 26  2014 5649d3a7-4025-45e6-99f9-d9682b82ee0f

/tmp/mnt/e4883354-fa70-4314-bcc0-6ee12c39e3a2/images/5649d3a7-4025-45e6-99f9-d9682b82ee0f:
total 4082380
-rw-rw-rw-. 1 nobody nobody 4180475904 Sep 26  2014 f636aa62-3568-41f5-8b84-0bb69cd408a5
-rw-r--r--. 1 nobody nobody        296 Sep 26  2014 f636aa62-3568-41f5-8b84-0bb69cd408a5.meta

/tmp/mnt/e4883354-fa70-4314-bcc0-6ee12c39e3a2/master:
total 8
drwxr-xr-x. 2 nobody nobody 4096 Sep 24 22:56 tasks
drwxr-xr-x. 3 nobody nobody 4096 Sep 26  2014 vms

/tmp/mnt/e4883354-fa70-4314-bcc0-6ee12c39e3a2/master/tasks:
total 0

/tmp/mnt/e4883354-fa70-4314-bcc0-6ee12c39e3a2/master/vms:
total 4
drwxr-xr-x. 2 nobody nobody 4096 Sep 26  2014 db2c5a2f-d6ac-4b56-ab4d-cac8f29a4844

/tmp/mnt/e4883354-fa70-4314-bcc0-6ee12c39e3a2/master/vms/db2c5a2f-d6ac-4b56-ab4d-cac8f29a4844:
total 8
-rw-r--r--. 1 nobody nobody 4628 Sep 26  2014 db2c5a2f-d6ac-4b56-ab4d-cac8f29a4844.ovf

# umount /tmp/mnt/
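
Note that everything on the export is owned by nobody:nobody rather than 36:36 (the vdsm:kvm IDs used on RHEV hosts), which matches the idmapd warning virt-v2v printed above. If ownership alone were the problem, the files would have to be chowned after the run as that warning says, roughly like this (a hedged sketch; with root squashing enabled this may only work on the NFS server itself, and the domain UUID is simply the one from the listing above):

# mount 10.66.6.8:/var/v2v_export /tmp/mnt
# chown -R 36:36 /tmp/mnt/e4883354-fa70-4314-bcc0-6ee12c39e3a2
# umount /tmp/mnt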

Later I will attach the related logs again, thanks.

Comment 9 zhoujunqin 2014-09-25 07:04:01 UTC
Created attachment 940985 [details]
new .meta log file

Comment 10 zhoujunqin 2014-09-25 07:04:31 UTC
Created attachment 940986 [details]
new .ovf log file

Comment 11 zhoujunqin 2014-09-25 07:05:15 UTC
Created attachment 940987 [details]
new engine log file

Comment 12 Richard W.M. Jones 2014-09-25 14:18:20 UTC
(In reply to zhoujunqin from comment #11)
> Created attachment 940987 [details]
> new engine log file

Apparently engine.log is not sufficient to diagnose this
problem.

According to Federico:

  you should check the vdsm logs (of the SPM host) and search
  for the relevant error "low level Image copy failed" @ 2014-09-24
  02:42:53 (if time is synchronized).

I'm not sure I understand what the "SPM host" is.  Do you have
a RHEV-H server in this setup?  It may .. somewhere .. have the
logs that we need.
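
(The SPM host is the hypervisor currently acting as Storage Pool Manager for the data center, i.e. the host that performs storage operations such as this image copy. As a hedged sketch, something like the following on that host should locate the error Federico mentions; vdsm normally logs under /var/log/vdsm/:)

# grep -B 2 -A 15 'low level Image copy failed' /var/log/vdsm/vdsm.log*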

Comment 14 zhoujunqin 2014-09-26 04:12:56 UTC
Created attachment 941372 [details]
part of vdsm.log

Hi rjones,
I tried to import the guest rhel7new-today2 on the RHEV-M server again, and ran the following command while it was importing:
# tailf vdsm.log

The resulting log is attached here.

Comment 15 tingting zheng 2014-09-26 05:38:01 UTC
From the vdsm log, it shows that the image uses QCOW version 3, which is not supported by this qemu version:

/00000002-0002-0002-0002-0000000002c4/e4883354-fa70-4314-bcc0-6ee12c39e3a2/images/5649d3a7-4025-45e6-99f9-d9682b82ee0f/f636aa62-3568-41f5-8b84-0bb69cd408a5 to /rhev/data-center/mnt/10.66.90.115:_vol_v2v__auto_nfs__data/946b78c7-b21e-4d88-b4ee-fee464cc4ce9/images/5649d3a7-4025-45e6-99f9-d9682b82ee0f/f636aa62-3568-41f5-8b84-0bb69cd408a5 DONE
dc8e8ef8-4ba3-4e69-b186-d0a96b4df605::ERROR::2014-09-26 04:04:50,205::image::772::Storage.Image::(copyCollapsed) Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/image.py", line 760, in copyCollapsed
CopyImageError: low level Image copy failed: ('General Storage Exception: (\'rc: 1, err: ["\\\'image\\\' uses a qcow2 feature which is not supported by this qemu version: QCOW version 3", "Could not open \\\'/rhev/data-center/00000002-0002-0002-0002-0000000002c4/e4883354-fa70-4314-bcc0-6ee12c39e3a2/images/5649d3a7-4025-45e6-99f9-d9682b82ee0f/f636aa62-3568-41f5-8b84-0bb69cd408a5\\\': Operation not supported", "Could not open \\\'/rhev/data-center/00000002-0002-0002-0002-0000000002c4/e4883354-fa70-4314-bcc0-6ee12c39e3a2/images/5649d3a7-4025-45e6-99f9-d9682b82ee0f/f636aa62-3568-41f5-8b84-0bb69cd408a5\\\'"]\',)',)
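
In other words, the disk image virt-v2v copied to the export domain is a QCOW2 version 3 (compat=1.1) file, and the qemu on the RHEV host is too old to open it. As a possible manual workaround (a hedged sketch, not the eventual fix; it uses the image path layout from comment 8 and needs a qemu-img new enough to support "amend", e.g. the one on the RHEL 7 conversion host), the copied image could be downgraded in place:

# qemu-img info f636aa62-3568-41f5-8b84-0bb69cd408a5
(reports "compat: 1.1")
# qemu-img amend -f qcow2 -o compat=0.10 f636aa62-3568-41f5-8b84-0bb69cd408a5
# qemu-img info f636aa62-3568-41f5-8b84-0bb69cd408a5
(should now report "compat: 0.10", which older qemu versions can read; qemu-img may refuse if version-3-only features are in use)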

Comment 16 Richard W.M. Jones 2014-09-26 07:38:31 UTC
(In reply to tingting zheng from comment #15)
> From the vdsm log,it showed that qcow2 is not supported by this qemu version:
> 
> /00000002-0002-0002-0002-0000000002c4/e4883354-fa70-4314-bcc0-6ee12c39e3a2/
> images/5649d3a7-4025-45e6-99f9-d9682b82ee0f/f636aa62-3568-41f5-8b84-
> 0bb69cd408a5 to
> /rhev/data-center/mnt/10.66.90.115:_vol_v2v__auto_nfs__data/946b78c7-b21e-
> 4d88-b4ee-fee464cc4ce9/images/5649d3a7-4025-45e6-99f9-d9682b82ee0f/f636aa62-
> 3568-41f5-8b84-0bb69cd408a5 DONE
> dc8e8ef8-4ba3-4e69-b186-d0a96b4df605::ERROR::2014-09-26
> 04:04:50,205::image::772::Storage.Image::(copyCollapsed) Unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/image.py", line 760, in copyCollapsed
> CopyImageError: low level Image copy failed: ('General Storage Exception:
> (\'rc: 1, err: ["\\\'image\\\' uses a qcow2 feature which is not supported
> by this qemu version: QCOW version 3", "Could not open
> \\\'/rhev/data-center/00000002-0002-0002-0002-0000000002c4/e4883354-fa70-
> 4314-bcc0-6ee12c39e3a2/images/5649d3a7-4025-45e6-99f9-d9682b82ee0f/f636aa62-
> 3568-41f5-8b84-0bb69cd408a5\\\': Operation not supported", "Could not open
> \\\'/rhev/data-center/00000002-0002-0002-0002-0000000002c4/e4883354-fa70-
> 4314-bcc0-6ee12c39e3a2/images/5649d3a7-4025-45e6-99f9-d9682b82ee0f/f636aa62-
> 3568-41f5-8b84-0bb69cd408a5\\\'"]\',)',)

Oh that's unexpected ...  thanks for digging this error out.

Comment 17 Richard W.M. Jones 2014-09-26 07:50:00 UTC
See also:
https://bugzilla.redhat.com/show_bug.cgi?id=1139707

Comment 18 Richard W.M. Jones 2014-09-26 18:28:20 UTC
Should be fixed in virt-v2v >= 1.27.56:

https://github.com/libguestfs/libguestfs/commit/b03c2a971ae66e6bfb66090b2860cfee89241f93
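
The commit isn't quoted here, but given the error in comment 15 the fix presumably makes virt-v2v stop producing QCOW2 version 3 (compat=1.1) output for -o rhev. For illustration only (hypothetical file names, not part of the fix itself), the difference between the two qcow2 variants at creation time is:

# qemu-img create -f qcow2 -o compat=0.10 disk-v2.qcow2 8G
(QCOW2 version 2, readable by the older qemu on the RHEV host)
# qemu-img create -f qcow2 -o compat=1.1 disk-v3.qcow2 8G
(QCOW2 version 3, the format rejected in comment 15)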

Comment 20 zhoujunqin 2014-09-30 05:51:56 UTC
I can reproduce this issue as described in comment 0.
Then I tried to verify it with the new build:
virt-v2v-1.27.56-1.1.el7.x86_64
libguestfs-1.27.56-1.1.el7.x86_64

Steps:
# virt-v2v  -o rhev -os 10.66.90.115:/vol/v2v_auto/auto_export  --network rhevm rhel7.0-3 -on juzhou-304 -of qcow2 -oa sparse
[   0.0] Opening the source -i libvirt rhel7.0-3
[   0.0] Creating an overlay to protect the source from being modified
[   1.0] Opening the overlay
[   5.0] Initializing the target -o rhev -os 10.66.90.115:/vol/v2v_auto/auto_export
virt-v2v: warning: cannot write files to the NFS server as 36:36, even 
though we appear to be running as root. This probably means the NFS client 
or idmapd is not configured properly.

You will have to chown the files that virt-v2v creates after the run, 
otherwise RHEV-M will not be able to import the VM.
[   5.0] Inspecting the overlay
[  15.0] Checking for sufficient free disk space in the guest
[  15.0] Estimating space required on target for each disk
[  15.0] Converting Red Hat Enterprise Linux Server release 7.0 (Maipo) to run on KVM
This guest has virtio drivers installed.
[  47.0] Mapping filesystem data to avoid copying unused and blank areas
[  48.0] Closing the overlay
[  49.0] Copying disk 1/1 to /tmp/v2v.HaPpfk/46adae8a-63c1-40f8-b25a-f02deb1a5160/images/247420f0-0904-4851-a8a1-874d980b8d5d/4c9a8bae-ce7c-4e01-aa49-02549464970a (qcow2)
    (100.00/100%)
[ 127.0] Creating output metadata
[ 127.0] Finishing off


Result:
Conversion completed successfully.
The guest was imported successfully and boots up without problems.
Based on the above steps, moving this bug from ON_QA to VERIFIED.
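
As an additional check (a hedged suggestion; the path follows the pattern of the listings earlier in this bug, with the UUIDs from the run above), the copied image on the export domain can be inspected directly. Assuming the fix works by avoiding the newer qcow2 variant, it should no longer report "compat: 1.1":

# mount 10.66.90.115:/vol/v2v_auto/auto_export /tmp/mnt
# qemu-img info /tmp/mnt/46adae8a-63c1-40f8-b25a-f02deb1a5160/images/247420f0-0904-4851-a8a1-874d980b8d5d/4c9a8bae-ce7c-4e01-aa49-02549464970a
# umount /tmp/mnt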

Comment 22 errata-xmlrpc 2015-03-05 13:45:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0303.html

