Bug 647029 - virt-v2v creates an ovf file with wrong parameters for RAW SPARSE disks
Summary: virt-v2v creates an ovf file with wrong parameters for RAW SPARSE disks
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: virt-v2v
Version: 5.5
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: rc
Assignee: Matthew Booth
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Duplicates: 683473
Depends On:
Blocks: 683473
 
Reported: 2010-10-26 21:18 UTC by Vladik Romanovsky
Modified: 2018-11-14 16:47 UTC
CC List: 21 users

Fixed In Version: virt-v2v-0.6.3-5.el5
Doc Type: Bug Fix
Doc Text:
When converting guests that have RAW SPARSE virtual disks for output to Red Hat Enterprise Virtualization, virt-v2v could create OVF metadata that misstates the size of the disk image. These disks could be imported into Red Hat Enterprise Virtualization, but could not be used as the basis for a template. With this update, virt-v2v now records the correct size of RAW SPARSE virtual disks in the OVF metadata, and guests that have these disks can be imported and used as the basis for templates.
Clone Of:
Environment:
Last Closed: 2011-08-05 17:38:36 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
example of RHEV not allowing import of incorrect ovf file (131.97 KB, image/png)
2010-12-22 17:21 UTC, John Brier


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2011:1125 0 normal SHIPPED_LIVE virt-v2v and libguestfs enhancement update 2011-08-05 17:38:28 UTC

Description Vladik Romanovsky 2010-10-26 21:18:37 UTC
Hi,

All RAW sparse files are being copied using dd, so eventually these disks will become preallocated raw.
However, these disks will be written in the OVF files as sparse disks (wrong type and wrong size).

Although these VMs can be imported into RHEV-M, creating a template from such a VM will fail because of the wrong data in the OVF.
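For reference, the dd behaviour described above can be reproduced outside virt-v2v; a minimal sketch with hypothetical paths (a plain dd, without conv=sparse, writes the holes out as zeros and fully allocates the copy):

  truncate -s 10G /tmp/orig.img               # sparse file: 10G apparent size, ~0 allocated
  du -h --apparent-size /tmp/orig.img         # shows 10G
  du -h /tmp/orig.img                         # shows ~0
  dd if=/tmp/orig.img of=/tmp/copy.img bs=1M  # plain copy, no conv=sparse
  du -h /tmp/copy.img                         # ~10G: the copy is now fully allocated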

Thanks,

Vladik

Comment 1 Matthew Booth 2010-10-27 10:20:22 UTC
I don't understand why this should fail, as a previously sparse disk can become fully allocated over time. Can you give a specific example of incorrect metadata?

Comment 2 Vladik Romanovsky 2010-10-27 11:35:23 UTC
(In reply to comment #1)
> I don't understand why this should fail, as a previously sparse disk can become
> fully allocated over time. Can you give a specific example of incorrect
> metadata?

Hi,

We have spoken about it over IRC already.

As I explained in BZ 639689, RAW SPARSE files will be fully preallocated after dd; however, their metadata will state that the disk 1. is SPARSE RAW and 2. has a capacity != apparentsize.

A VM that has been exported using the V2V tool can therefore be configured with raw sparse disks.

getVolumeInfo:

'status': 'OK'
'domain': 'a10d14fc-68da-43e8-b2be-c3b6d2aec47d'
'truesize': '42949672960'
'voltype': 'LEAF'
'uuid': '735acd33-7bc5-4a9c-8ab6-e04787794263'
'parent': '00000000-0000-0000-0000-000000000000'
'format': 'RAW'
'description': 'Exported by virt-v2v'
'children': []
'ctime': '1284530851'
'disktype': '1'
'legality': 'LEGAL'
'mtime': '1285064579'
'capacity': '21474836480'
'apparentsize': '42949672960'
'type': 'SPARSE'
'image': '1aa94d16-2778-4a6c-8d30-5884d0925976'
'pool': 'ec6e3624-9707-4c70-9010-86606fc6707e'


The above creates a sizing issue:
'capacity': '21474836480' while 'truesize': '42949672960'

and an attempt to create a template from such a disk will fail.

Importing such a disk into a block device domain will create an LV of
42949672960 bytes, but the metadata will state that its size is 21474836480.
Later on, this will lead to a template creation failure.

The initial size of the destination LV is 0.5G, plus the lvextend operation (--size 21541M):

/usr/sbin/lvextend --config " devices { preferred_names = [\\"^/dev/mapper/\\"] write_cache_state=0 filter = [ \\"a%/dev/mapper/36000eb3b2d70381d00000000000003ae%\\", \\"r%.*%\\" ] }  backup {  retain_min = 50  retain_days = 0 } " --autobackup n --size 21541M a10d14fc-68da-43e8-b2be-c3b6d2aec47d/0ada9754-95cf-49a9-8161-3855518da95d' (cwd None)

makes the LV equal to the capacity rather than the apparentsize, which is what it should have been.

And it will all result in:
qemu-img: Error while formatting /rhev/data-center/ec6e3624-9707-4c70-9010-86606fc6707e/a10d14fc-68da-43e8-b2be-c3b6d2aec47d/images/243120f6-2c24-46b8-b6aa-504d603765b7/1ff8ab3e-1a7e-4b4f-94e9-1b16156873e2\
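For anyone trying to spot this mismatch on an export domain before importing, the relevant numbers can be read straight off the image file; a minimal sketch with a hypothetical path ('capacity' should correspond to the guest-visible virtual size, 'apparentsize' to the file size):

  ls -l /path/to/image          # apparent size in bytes (roughly the 'apparentsize' above)
  du -B1 /path/to/image         # bytes actually allocated on disk (roughly 'truesize')
  qemu-img info /path/to/image  # "virtual size" = what the guest sees ('capacity')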

Thanks,

Vladik

Comment 3 John Brier 2010-10-27 16:35:34 UTC
Hi, just to explain the importance of this issue, we have three cases related to it:


00336824 [RHEV]template creation fails
00363858 Cannot Create Templates
00336417 Template creation fails when creating a template of a KVM guest that was ported over using virt-v2v

I have attached the last two SFDC tickets to this BZ; the first is attached to a related BZ (you can't attach an SFDC case to more than one BZ).

Bug 639689 - RHEVM should not allow to import a VM with a sparse raw disk to a block device domain

Reading that BZ may provide some more context to this problem.

Comment 4 Ayal Baron 2010-10-27 21:22:42 UTC
The incorrect data is the size.
Seeing as the VM is copied entirely, it makes no sense marking it as sparse anyway.
What is needed is for the v2v tool to correct the size and mark the VM as preallocated, this shouldn't be a big deal.

Comment 5 Matthew Booth 2010-10-28 10:45:23 UTC
(In reply to comment #4)
> The incorrect data is the size.

Right. Is there an example of the incorrect size in an OVF/.meta file created by V2V? I need to know what to fix.

> Seeing as the VM is copied entirely, it makes no sense marking it as sparse
> anyway.
> What is needed is for the v2v tool to correct the size and mark the VM as
> preallocated, this shouldn't be a big deal.

The next version of V2V supports sparse files properly anyway. However, I don't think I've changed any of my own assumptions about the meanings of the various sizing numbers. I really need to know which size number is incorrect.

I do recall, however, that the usage information written to an OVF is in GB, which is obviously exceptionally imprecise. If you have tooling which requires this number to be correct, V2V will have to pad volumes up to a 1GB boundary. Is this related?
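For context, a minimal sketch of padding a raw image up to the next whole GiB boundary (hypothetical path; this only illustrates the granularity issue and is not what virt-v2v currently does):

  SIZE=$(stat -c %s /tmp/disk.img)
  GIB=$((1024 * 1024 * 1024))
  PADDED=$(( ((SIZE + GIB - 1) / GIB) * GIB ))   # round up to a whole GiB
  truncate -s "$PADDED" /tmp/disk.img            # extends the file (with a sparse hole)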

Comment 6 Ayal Baron 2010-12-09 13:38:19 UTC
Just saw another case of this.
Capacity (which reflects the virtual disk size) is 6GB but apparent size is over 10 times as much.  This makes no sense when dealing with raw files.  The raw file size should equal the disk size (as viewed by the guest).
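A quick sanity check of that property on any raw image (hypothetical path): the file size reported by stat should equal the virtual size qemu-img reports, since a raw image has no container overhead:

  qemu-img info /path/to/disk.img   # "virtual size" line, as seen by the guest
  stat -c %s /path/to/disk.img      # file size in bytes; for raw these should match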

Comment 7 John Brier 2010-12-21 23:35:48 UTC
I tried to get mbooth an example of the incorrect metadata today but I think I failed..

[root@jb ~]# virt-v2v -o rhev -osd dhcp53-194.gsslab.rdu.redhat.com:/rhev-exports --network rehvm rhel4-2
rhel4-2: 100% [=================================================================]D 0h21m19s
Can't use string ("cannot chdir to /root from /tmp/"...) as a HASH ref while "strict refs" in use at /usr/share/perl5/Sys/VirtV2V/Connection/RHEVTarget.pm line 396.
virt-v2v: Error removing /tmp/9de4rcJfw9/0e24d6b7-9163-407e-a10b-958f963b9bc0/v2v.p4KJ9Eit: cannot remove path when cwd is /tmp/9de4rcJfw9/0e24d6b7-9163-407e-a10b-958f963b9bc0/v2v.p4KJ9Eit
virt-v2v: Unable to remove temporary directory /tmp/9de4rcJfw9/0e24d6b7-9163-407e-a10b-958f963b9bc0/v2v.p4KJ9Eit
virt-v2v: Failed to unmount dhcp53-194.gsslab.rdu.redhat.com:/rhev-exports. Command exited with status 16. Output was: umount.nfs: /tmp/9de4rcJfw9: device is busy
umount.nfs: /tmp/9de4rcJfw9: device is busy

virt-v2v: Failed to remove mount directory /tmp/9de4rcJfw9: Device or resource busy
[root@jb ~]# rpm -q virt-v2v
virt-v2v-0.7.0-1.fc14.x86_64


However, I did check the following file


root@jb tmp]# cd 9de4rcJfw9/
[root@jb 9de4rcJfw9]# ls
0e24d6b7-9163-407e-a10b-958f963b9bc0
[root@jb 9de4rcJfw9]# find . -name '*ovf*'
[root@jb 9de4rcJfw9]# vi 0e24d6b7-9163-407e-a10b-958f963b9bc0/
dom_md/       images/       master/       v2v.CPP1oGAS/ v2v.p4KJ9Eit/ 
[root@jb 9de4rcJfw9]# vi 0e24d6b7-9163-407e-a10b-958f963b9bc0/images/
6417fef6-dd57-449b-84db-e5010608932c/ fe5b0c9c-f6cf-46e4-a621-07fb099298b8/
[root@jb 9de4rcJfw9]# cat 0e24d6b7-9163-407e-a10b-958f963b9bc0/images/6417fef6-dd57-449b-84db-e5010608932c/2c030e76-4e3c-4096-a084-ab29fcd5a55c.meta 
DOMAIN=0e24d6b7-9163-407e-a10b-958f963b9bc0
VOLTYPE=LEAF
CTIME=1292960342
FORMAT=RAW
IMAGE=6417fef6-dd57-449b-84db-e5010608932c
DISKTYPE=1
PUUID=00000000-0000-0000-0000-000000000000
LEGALITY=LEGAL
MTIME=1292960342
POOL_UUID=00000000-0000-0000-0000-000000000000
SIZE=10485760
TYPE=PREALLOCATED
DESCRIPTION=Exported by virt-v2v
EOF
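A note on the dump above: the SIZE field appears to be expressed in 512-byte sectors, so 10485760 x 512 = 5368709120 bytes, i.e. exactly 5 GiB, which would be the guest-visible disk size.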


It looks like it's set to RAW/Preallocated.. which is correct. This is on F14, not RHEL 5, so I need to test on RHEL 5. I couldn't find an ovf file at all, maybe because the import failed.

Is it this simple to get Matthew the data he needs? Sorry for coming into this bug with half the information, but I feel like it shouldn't be hard to get Matthew what he needs to fix this issue, and we have other customers hitting it. We fixed it in RHEV so RHEV won't import the wrong metadata, but we haven't fixed it in the tool that creates it.

Comment 9 Matthew Booth 2010-12-22 10:08:08 UTC
(In reply to comment #7)
> I tried to get mbooth an example of the incorrect metadata today but I think I
> failed..

I think I have enough to go on for the moment. I'll talk to Ayal directly if I need more info. Thanks.

Comment 10 John Brier 2010-12-22 16:48:58 UTC
(In reply to comment #9)
> (In reply to comment #7)
> > I tried to get mbooth an example of the incorrect metadata today but I think I
> > failed..
> 
> I think I have enough to go on for the moment. I'll talk to Ayal directly if I
> need more info. Thanks.

I know you have enough info, but for my own edification, and for others, I wanted to test this out.

[root@dvirt qemu]# rpm -q virt-v2v
virt-v2v-0.6.1-2.el5


[root@dvirt qemu]# virt-v2v -o rhev -osd dhcp53-194.gsslab.rdu.redhat.com:/rhev-exports --network rhevm lvs-dir1
virt-v2v: lvs-dir1 configured with virtio drivers



Produces a VM that RHEV fails to import with the following error:

"Cannot imported VM. The selected disk configuration is not supported." 

See attached /data/failed-import-rhevm-2.2.4.51976.png

If I modify the ovf file from RAW/SPARSE to RAW/PREALLOCATED as so, it works:

[root@dhcp53-194 rhev-exports]# ls
dac1835a-d58e-4d87-b70f-d820861b0a4b

[root@dhcp53-194 rhev-exports]# cd dac1835a-d58e-4d87-b70f-d820861b0a4b/

[root@dhcp53-194 dac1835a-d58e-4d87-b70f-d820861b0a4b]# ls
dom_md  images  master  v2v.ayvt6Zx5

[root@dhcp53-194 dac1835a-d58e-4d87-b70f-d820861b0a4b]# find . -name '*ovf*'
./master/vms/754ca0a9-c630-4c0b-9028-e7fd3ffab887/754ca0a9-c630-4c0b-9028-e7fd3ffab887.ovf

[root@dhcp53-194 dac1835a-d58e-4d87-b70f-d820861b0a4b]# cp master/vms/754ca0a9-c630-4c0b-9028-e7fd3ffab887/754ca0a9-c630-4c0b-9028-e7fd3ffab887.ovf /root/

[root@dhcp53-194 dac1835a-d58e-4d87-b70f-d820861b0a4b]# vi master/vms/754ca0a9-c630-4c0b-9028-e7fd3ffab887/754ca0a9-c630-4c0b-9028-e7fd3ffab887.ovf 

[root@dhcp53-194 dac1835a-d58e-4d87-b70f-d820861b0a4b]# diff master/vms/754ca0a9-c630-4c0b-9028-e7fd3ffab887/754ca0a9-c630-4c0b-9028-e7fd3ffab887.ovf /root/754ca0a9-c630-4c0b-9028-e7fd3ffab887.ovf 
11c11
<     <Disk ovf:diskId="959a9065-8f68-4de5-8263-53c099510505" ovf:size="6" ovf:actual_size="6" ovf:fileRef="36775daf-6cc7-4a39-a7be-a44bc11ace37/959a9065-8f68-4de5-8263-53c099510505" ovf:parentRef="" ovf:vm_snapshot_id="00000000-0000-0000-0000-000000000000" ovf:volume-format="RAW" ovf:volume-type="Preallocated" ovf:format="http://en.wikipedia.org/wiki/Byte" ovf:disk-interface="2" ovf:boot="True"/></Section>
---
>     <Disk ovf:diskId="959a9065-8f68-4de5-8263-53c099510505" ovf:size="6" ovf:actual_size="6" ovf:fileRef="36775daf-6cc7-4a39-a7be-a44bc11ace37/959a9065-8f68-4de5-8263-53c099510505" ovf:parentRef="" ovf:vm_snapshot_id="00000000-0000-0000-0000-000000000000" ovf:volume-format="RAW" ovf:volume-type="Sparse" ovf:format="http://en.wikipedia.org/wiki/Byte" ovf:disk-interface="2" ovf:boot="True"/></Section>
[root@dhcp53-194 dac1835a-d58e-4d87-b70f-d820861b0a4b]#
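For anyone applying the same workaround, the edit shown in the diff above can be scripted; a minimal sketch (take a backup first, and only touch the file while the export storage domain is not in use):

  sed -i.bak 's/ovf:volume-type="Sparse"/ovf:volume-type="Preallocated"/' master/vms/754ca0a9-c630-4c0b-9028-e7fd3ffab887/754ca0a9-c630-4c0b-9028-e7fd3ffab887.ovf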

Comment 11 John Brier 2010-12-22 17:21:41 UTC
Created attachment 470268 [details]
example of RHEV not allowing import of incorrect ovf file

Comment 12 Huang Wenlong 2011-02-12 06:47:12 UTC
Verified this bug on RHEL 5 with:
virt-v2v-0.6.3-5.el5
virtio-win-1.0.1-2.52454.el5
libvirt-0.8.2-15.el5_6.1
libvirt-python-0.8.2-15.el5_6.1
rhev-m ic91 
rhevh-5.6-8.1




# virt-v2v -ic xen+ssh://10.66.72.123 -o rhev -osd 10.66.90.115:/vol/v2vrwu1/xen_export -n rhevm rhel5u6-64b-pv-sparse

before conversion: 
# ll -sh /var/lib/libvirt/images/rhel5u6-64b-pv-sparse.img
2.6G -rw-r--r-- 1 root root 5.6G Dec 22 13:21 /var/lib/libvirt/images/rhel5u6-64b-pv-sparse.img

the disk is sparse.


after conversion:

#ll -sh ./images/6c1b2cc5-cbda-491e-ac3d-ff16bd354b6b/d729e5eb-0f59-4bf9-8be8-dd247be8e54b
5.6G -rw-r--r-- 1 36 kvm 5.6G Feb 12 06:44 ./images/6c1b2cc5-cbda-491e-ac3d-ff16bd354b6b/d729e5eb-0f59-4bf9-8be8-dd247be8e54b

#cat ./images/6c1b2cc5-cbda-491e-ac3d-ff16bd354b6b/d729e5eb-0f59-4bf9-8be8-dd247be8e54b.meta
DOMAIN=269808c9-c9e8-49f7-8f45-f9a7c5c6e95a
VOLTYPE=LEAF
CTIME=1297491817
FORMAT=RAW
IMAGE=6c1b2cc5-cbda-491e-ac3d-ff16bd354b6b
DISKTYPE=1
PUUID=00000000-0000-0000-0000-000000000000
LEGALITY=LEGAL
MTIME=1297491817
POOL_UUID=00000000-0000-0000-0000-000000000000
SIZE=11718751
TYPE=PREALLOCATED
DESCRIPTION=Exported by virt-v2v
EOF



The disk is preallocated and the OVF is correct.
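(As a cross-check, SIZE=11718751 in 512-byte sectors is about 5.6 GiB, which matches the 5.6G image file listed above, assuming the sector interpretation is right.)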

Comment 13 Florian Nadge 2011-02-16 11:01:43 UTC
Florian Nadge 2011-01-19 10:40:58 EST

Please be so kind as to add a few keywords to the technical note of this
Bugzilla entry using the following structure:

Cause:

Consequence:

Fix:

Result:


For details, see:
https://bugzilla.redhat.com/page.cgi?id=fields.html#cf_release_notes

Thanks

Comment 14 Florian Nadge 2011-02-16 11:01:43 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
Cause
    What actions or circumstances induced the feature request.
Consequence
    What action was inhibited by the feature’s absence.
Change
    What was done to implement the feature.
    Note: backported from upstream is not an explanation.
Result
    What now happens when the actions or circumstances above occur.
    Note: this is not the same as the feature request was fulfilled.

Comment 15 David Jorm 2011-02-22 03:20:54 UTC
    Technical note updated. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    Diffed Contents:
@@ -1,10 +1 @@
-Cause
+When converting guests that have RAW SPARSE virtual disks for output to Red Hat Enterprise Virtualization, virt-v2v could create OVF metadata that misstates the size of the disk image. These disks could be imported into Red Hat Enterprise Virtualization, but could not be used as the basis for a template. With this update, virt-v2v now records the correct size of RAW SPARSE virtual disks in the OVF metadata, and guests that have these disks can be imported and used as the basis for templates.-    What actions or circumstances induced the feature request.
-Consequence
-    What action was inhibited by the feature’s absence.
-Change
-    What was done to implement the feature.
-    Note: backported from upstream is not an explanation.
-Result
-    What now happens when the actions or circumstances above occur.
-    Note: this is not the same as the feature request was fulfilled.

Comment 17 Matthew Booth 2011-03-09 16:11:32 UTC
*** Bug 683473 has been marked as a duplicate of this bug. ***

Comment 18 Jose Angel de Bustos Perez 2011-03-15 17:13:10 UTC
Hi,

I've converted a RHEL vSphere VM to RHEV with virt-v2v-0.6.3-4.el5 and I have the same problem. I can't import the VM.

I tried changing Sparse to Preallocated but the problem persists.

virt-v2v is running in RHEL 5.6 

[root@v2vrhel ~]# uname -a
Linux v2vrhel 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:48 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
[root@v2vrhel ~]#

Comment 19 Matthew Booth 2011-03-16 09:40:49 UTC
(In reply to comment #18)
> Hi,
> 
> I've converted a RHEL vSphere VM to RHEV using virt-v2v using
> virt-v2v-0.6.3-4.el5 and I have the same problem. I can't import the VM.
> 
> I tried changing Sparse to Preallocated but the problem persists.
> 
> virt-v2v is running in RHEL 5.6 
> 
> [root@v2vrhel ~]# uname -a
> Linux v2vrhel 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:48 EDT 2009 x86_64 x86_64
> x86_64 GNU/Linux
> [root@v2vrhel ~]#

Jose,

What error do you get trying to import? Also, what type of data center are you trying to import to (nfs/iscsi/fcp)?

Comment 20 Jose Angel de Bustos Perez 2011-03-16 15:51:14 UTC
Hi Matthew,

It is the same situation as comment #10.

The error I got was: "Cannot imported VM. The selected disk configuration is not supported."

The Datacenter is iSCSI.

I decided to install a RHEL 6 machine (actually a vSphere VM) to run virt-v2v and got the same problem: "Cannot imported VM. The selected disk configuration is not supported." This time, however, the workaround provided by Matthew in comment #10 fixed the problem and I could import the VM (changing Sparse to Preallocated in the OVF).

Thank you very much and I hope this bug will be fixed soon.

Comment 21 Matthew Booth 2011-03-16 16:18:10 UTC
(In reply to comment #20)
> I decided to install a RHEL 6 machine (actually a vSphere VM) to run virt-v2v
> and I got the same problem: "Cannot imported VM. The selected disk
> configuration is not supported." but this time the workaround provided by
> Matthew in comment #10 fixed the problem and I could import the VM (changing
> Sparse to Preallocated in OVF).
> 
> Thank you very much and I hope this bug will be fixed soon.

Good to hear you worked round it.

You'll be pleased to note that this bug is already fixed. It will be in the RHEL 6.1 and RHEL 5.7 updates.

Comment 23 Huang Wenlong 2011-04-11 08:21:42 UTC
Verified this bug on RHEL 5 with:
virt-v2v-0.7.1-2.el5
libguestfs-1.2.7-1.el5.13
libvirt-0.8.2-15.el5_6.3
rhev-m 2.2

steps:
1. convert a guest from xen to rhev-m  and import it successfully
2. check the export_domain 
3. get the meta file and OVF file 
4. ls -hs the image file

meta:
DOMAIN=59eca403-e2d9-47b2-b3c7-aa94ee977208
VOLTYPE=LEAF
CTIME=1302508651
FORMAT=RAW
IMAGE=cb6016c3-93a2-4405-9c39-27f47278fde2
DISKTYPE=1
PUUID=00000000-0000-0000-0000-000000000000
LEGALITY=LEGAL
MTIME=1302508651
POOL_UUID=00000000-0000-0000-0000-000000000000
SIZE=3072000
TYPE=SPARSE
DESCRIPTION=Exported by virt-v2v
EOF



OVF:

...    <Disk ovf:diskId="5d8163d9-252f-48b8-893e-3ee86acae7dc" ovf:size="2" ovf:actual_size="1" ovf:fileRef="cb6016c3-93a2-4405-9c39-27f47278fde2/5d8163d9-252f-48b8-893e-3ee86acae7dc" ovf:parentRef="" ovf:vm_snapshot_id="00000000-0000-0000-0000-000000000000" ovf:volume-format="RAW" ovf:volume-type="Sparse" ovf:format="http://en.wikipedia.org/wiki/Byte" ovf:disk-interface="VirtIO" ovf:boot="True"/></Section>

ls -hs image:

843M -rw-r--r-- 1 36 kvm 1.5G Apr 11 08:18 5d8163d9-252f-48b8-893e-3ee86acae7dc

The configuration is correct.
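(Cross-checking the numbers: SIZE=3072000 in 512-byte sectors is about 1.46 GiB, which matches the 1.5G apparent size shown by ls; ovf:size="2" and ovf:actual_size="1" appear to be the same figures rounded up to whole GB.)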

...

Comment 24 Gianluca Cecchi 2011-06-14 08:24:38 UTC
Hello, 
I'm experiencing the same problem on 5.6 with the virt-v2v provided by the "Virt V2V Tool" channel for RHEL 5.
Using the workaround in ovf and meta files provided by comments above and changing the virt-v2v.conf file to include its configuration (see
https://bugzilla.redhat.com/show_bug.cgi?id=671353#c1), I am able to export a Windows XP SP3 vm that originally resides on a fedora 15 host (disk is raw on file).
But the import fails on an iSCSI datacenter.
Probably it is the implicit template creation phase performed by the import itself...?
I get in rhevm:
starting to import VM wxp_r815 to ISCSI1
and after 2 minutes:
Failed to complete copy of Template to Domain ISCSI1

I downloaded the 5.7 beta, but on RHN I can't find any sort of "Virt V2V Tool" beta channel to apply to it and test the import correction....

Comment 25 Matthew Booth 2011-06-14 08:53:45 UTC
(In reply to comment #24)
> Hello, 
> experimenting the same problem in 5.6 with virt-v2v provided by its "Virt V2V
> Tool" channel for rh el 5.
> Using the workaround in ovf and meta files provided by comments above and
> changing the virt-v2v.conf file to include its configuration (see
> https://bugzilla.redhat.com/show_bug.cgi?id=671353#c1), I am able to export a
> Windows XP SP3 vm that originally resides on a fedora 15 host (disk is raw on
> file).
> But the import fails on a ISCSI datacenter:
> Probably it is the implicit template creation phase by the import itself...?
> I get in rhevm:
> starting to import VM wxp_r815 to ISCSI1
> and after 2 minutes:
> Failed to complete copy of Template to Domain ISCSI1
> 
> Downloaded 5.7 beta but in RHN I don't find any sort of "Virt V2V Tool" beta to
> be applied to it and test the import correction....

This bug is also fixed in RHEL 6.1, which is now released. There are actually many advantages to the version in RHEL 6 over the version in RHEL 5. I strongly recommend running that version if possible.

Comment 26 Gianluca Cecchi 2011-06-15 08:17:35 UTC
Trying on RHEL 6.1, where I installed a WinXP SP3 guest, I get this:

virt-v2v -o rhev -osd 10.0.11.44:/RHEV/export --network rhevm winxprhel61
winxprhel61.img: 100% [==============================================]D 0h04m32s
Using CPU model "cpu64-rhel6"
No operating system could be detected inside this disk image.

This may be because the file is not a disk image, or is not a virtual machine
image, or because the OS type is not understood by virt-inspector.

If you feel this is an error, please file a bug report including as much
information about the disk image as possible.

RHEL 6 notice
-------------
libguestfs will return this error for Microsoft Windows guests if the
separate 'libguestfs-winsupport' package is not installed. If the
guest is running Microsoft Windows, please try again after installing
'libguestfs-winsupport'.

There is no libguestfs-winsupport on the media... which channel provides it?
Thanks,
Gianluca

Comment 27 Gianluca Cecchi 2011-06-15 08:19:27 UTC
Sorry, I misspelled the search... it is RHEL V2VWIN.

Comment 28 Matthew Booth 2011-06-15 08:31:04 UTC
(In reply to comment #26)
> Trying in rh el 6.1 where I installed a winxp sp3 guest I get this:
> 
> virt-v2v -o rhev -osd 10.0.11.44:/RHEV/export --network rhevm winxprhel61
> winxprhel61.img: 100% [==============================================]D
> 0h04m32s
> Using CPU model "cpu64-rhel6"
> No operating system could be detected inside this disk image.

Gianluca,

I'll email you directly as this no longer relates to this Bugzilla.

Comment 29 Gianluca Cecchi 2011-06-15 15:13:23 UTC
Hello,
just to note that for me virt-v2v-0.7.1-3.el6.x86_64.rpm, as provided in RHEL 6.1, doesn't resolve the OVF and meta problem.
I configured a RHEL 6.1 host as hypervisor and installed a WinXP SP3 guest with a 10 GB IDE disk in raw format on a dir-based storage pool.

then
[root@rhel61 ~]# virt-v2v -o rhev -osd 10.0.11.44:/RHEV/export --network rhevm winxprhel61
winxprhel61.img: 100% [================================================================]D 0h04m13s
Using CPU model "cpu64-rhel6"
virt-v2v: WARNING: There is no virtio net driver available in the directory specified for this version of Windows. The guest will be configured with a rtl8139 network adapter, but no driver will be installed for it. If the rtl8139 driver is not already installed in the guest, you must install it manually after conversion.
virt-v2v: winxprhel61 configured with virtio storage only.

Going to RHEV I get the error:
"Cannot imported VM. The selected disk configuration is not supported." 

The export storage domain is /RHEV/export/
For a successful import I have to:
- put the export storage domain in maintenance
- change the .ovf and .meta putting inside them respectively (a scripted version is sketched after this list):
TYPE=PREALLOCATED
and
ovf:volume-type="Preallocated"
- activate the export storage domain
- run the import phase again
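A scripted version of the .meta/.ovf part of those steps might look like this (hypothetical placeholder paths; back the files up first, and only edit them while the export domain is in maintenance):

  sed -i.bak 's/^TYPE=SPARSE/TYPE=PREALLOCATED/' images/<image-uuid>/<volume-uuid>.meta
  sed -i.bak 's/ovf:volume-type="Sparse"/ovf:volume-type="Preallocated"/' master/vms/<vm-uuid>/<vm-uuid>.ovf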

What I have not understood is that:

a) on source kvm host I have
[root@rhel61 ~]# ls -l /data/vmstorage/winxprhel61.img 
-rw-------. 1 root root 10737418240 Jun 15 09:51 /data/vmstorage/winxprhel61.img

[root@rhel61 ~]# du -sh /data/vmstorage/winxprhel61.img 
1.2G	/data/vmstorage/winxprhel61.img

b) On export dir of nfs server I have:
# ll images/ee703f8b-0800-4aa0-9872-3796478cb1fd/
total 1321864
-rw-r--r-- 1 36 kvm 10737418240 Jun 15 10:37 ce24736d-28eb-43a7-ae0f-958598be2910
-rw-r--r-- 1 36 kvm         330 Jun 15 10:45 ce24736d-28eb-43a7-ae0f-958598be2910.meta

# du -sh images/ee703f8b-0800-4aa0-9872-3796478cb1fd/
1.3G	images/ee703f8b-0800-4aa0-9872-3796478cb1fd/

So it seems the disk is somehow sparse indeed, as the allocated size is not 10 GB but 1.3 GB.
I don't know what the size of the imported LV is...

Can you clarify whether the bug is resolved or not?
No update for virt-v2v-0.7.1-3.el6.x86_64.rpm is available at the moment on RHN...

Thanks
 Gianluca

Comment 30 Matthew Booth 2011-06-15 15:41:24 UTC
(In reply to comment #29)
> Hello,
> just to note that for me virt-v2v-0.7.1-3.el6.x86_64.rpm as provided in rh el
> 6.1 doesn't resolve the ovf and meta problem.

You've actually hit a separate issue :) I believe we documented this for 6.1, although a quick search didn't throw up the BZ for me. Bug 696050 is related, but I'm not 100% sure it's the one I was looking for.

> [root@rhel61 ~]# virt-v2v -o rhev -osd 10.0.11.44:/RHEV/export --network rhevm
> winxprhel61
> winxprhel61.img: 100%
> [================================================================]D 0h04m13s
> Using CPU model "cpu64-rhel6"
> virt-v2v: WARNING: There is no virtio net driver available in the directory
> specified for this version of Windows. The guest will be configured with a
> rtl8139 network adapter, but no driver will be installed for it. If the rtl8139
> driver is not already installed in the guest, you must install it manually
> after conversion.
> virt-v2v: winxprhel61 configured with virtio storage only.
> 
> Going to rhev I get the error about 
> "Cannot imported VM. The selected disk configuration is not supported." 

RHEV displays this message when it doesn't support the combination of disk format and sparseness on your target *data* storage domain. In this instance, I suspect your data storage domain uses block storage. RHEV doesn't support raw/sparse on block storage, and isn't able to convert during import.

Because your original guest uses raw/sparse, virt-v2v has copied this. If you want to maintain its sparseness when importing into RHEV, you can convert it to qcow2 during the conversion process with:

virt-v2v -of qcow2 -oa sparse ...

Or to write raw/preallocated:

virt-v2v -of raw -oa preallocated ...

> The export storage domain is /RHEV/export/
> For a successful import I have to:
> - put the export storage domain in maintenance
> - change the .ovf and .meta putting inside them respectively:
> TYPE=PREALLOCATED
> and
> ovf:volume-type="Preallocated"
> - activate the export storage domain
> - run the import phase again
> 
> What I have not understood is that:
> 
> a) on source kvm host I have
> [root@rhel61 ~]# ls -l /data/vmstorage/winxprhel61.img 
> -rw-------. 1 root root 10737418240 Jun 15 09:51
> /data/vmstorage/winxprhel61.img
> 
> [root@rhel61 ~]# du -sh /data/vmstorage/winxprhel61.img 
> 1.2G /data/vmstorage/winxprhel61.img

Yup, your source is sparse.

> b) On export dir of nfs server I have:
> # ll images/ee703f8b-0800-4aa0-9872-3796478cb1fd/
> total 1321864
> -rw-r--r-- 1 36 kvm 10737418240 Jun 15 10:37
> ce24736d-28eb-43a7-ae0f-958598be2910
> -rw-r--r-- 1 36 kvm         330 Jun 15 10:45
> ce24736d-28eb-43a7-ae0f-958598be2910.meta
> 
> # du -sh images/ee703f8b-0800-4aa0-9872-3796478cb1fd/
> 1.3G images/ee703f8b-0800-4aa0-9872-3796478cb1fd/

And so is the target.

Matt

Comment 31 Gianluca Cecchi 2011-06-15 15:56:25 UTC
My storage domain is iSCSI, so qcow2 is not possible.
I think on iSCSI, thin provisioning should be based on an LV composed of 512 MB pieces, extended as necessary (if I understood the manual correctly).

Probably you were saying that coming from qcow2 on file system and going to an iSCSI based storage domain target the command I should use is:

virt-v2v -o rhev -of raw -oa preallocated -osd 10.0.11.44:/RHEV/export --network rhevm
and get a successful import (and not use -oa sparse...)?

correct?

Comment 32 Matthew Booth 2011-06-15 16:10:29 UTC
(In reply to comment #31)
> My storage domain is iSCSI, so qcow2 is not possible.
> I think on iSCSI thin provision should be based on LV composed by 512Mb pieces
> extended as necessary (if I understood correctly the manual).
> 
> Probably you were saying that coming from qcow2 on file system and going to an
> iSCSI based storage domain target the command I should use is:
> 
> virt-v2v -o rhev -of raw -oa preallocated -osd 10.0.11.44:/RHEV/export
> --network rhevm
> and get a successful import (and not use -oa sparse...)?
> 
> correct?

Not quite. You *can* have sparse qcow2 on iscsi. You *can't* have sparse raw on iscsi. Your options are raw/preallocated and qcow2/sparse.
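So, for an iSCSI data storage domain, the two workable invocations would be along these lines (sketches only, reusing the export path and guest name from the earlier comments):

  virt-v2v -o rhev -of raw -oa preallocated -osd 10.0.11.44:/RHEV/export --network rhevm winxprhel61
  virt-v2v -o rhev -of qcow2 -oa sparse -osd 10.0.11.44:/RHEV/export --network rhevm winxprhel61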

Comment 39 errata-xmlrpc 2011-08-05 17:38:36 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2011-1125.html

