Bug 1581810 - Particular LVM configuration causes virt-v2v conversion to fail
Summary: Particular LVM configuration causes virt-v2v conversion to fail
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libguestfs
Version: 7.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 7.6
Assignee: Richard W.M. Jones
QA Contact: Virtualization Bugs
URL:
Whiteboard: V2V
Depends On:
Blocks:
 
Reported: 2018-05-23 16:29 UTC by Mor
Modified: 2018-07-24 09:36 UTC
CC List: 11 users

Fixed In Version: libguestfs-1.38.2-5.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-07-12 16:31:40 UTC
Target Upstream Version:
Embargoed:


Attachments
virt-v2v log (277.38 KB, text/plain), 2018-05-23 16:29 UTC, Mor
lvm dump (40.57 KB, application/octet-stream), 2018-05-23 19:59 UTC, Mor
partitions-info.png (196.14 KB, image/png), 2018-06-28 09:08 UTC, mxie@redhat.com

Description Mor 2018-05-23 16:29:16 UTC
Created attachment 1440682 [details]
virt-v2v log

Description of problem:
VM migration using virt-v2v from VMware to RHV fails with error: "virt-v2v: error: libguestfs error: internal_parse_mountable: internal_parse_mountable_stub: /dev/rhel_clone/root: No such file or directory".
It seems that virt-v2v fails to detect the target file system under /dev/mapper/rhel_clone-root.

Source VM is running RHEL 7.5 and has two disks with LVM layout. 

See additional info for various commands output on the VM.

Version-Release number of selected component (if applicable):
virt-v2v 1.36.10rhel=7,release=6.10.rhvpreview.el7ev,libvirt

How reproducible:
100% with this VM.

Steps to Reproduce:
On RHV 4.2 conversion host run:
LIBGUESTFS_BACKEND=direct virt-v2v -ic vpx://administrator%40vsphere.local.69.92/Datacenter/cluster/<VMWARE_VM_ASSOCIATED_HOST>?no_verify=1 -it vddk -io vddk-libdir=/opt/vmware-vix-disklib-distrib -io vddk-thumbprint=<THUMBPRINT> <SOURCE_VM_NAME> --password-file ./vcenter-password -o rhv-upload -oc <RHV_ENGINE>/ovirt-engine/api -os Guy_SD -op ./ovirt-admin-password -on <TARGET_VM_NAME> -of raw -oo rhv-cafile=/etc/pki/vdsm/certs/cacert.pem -oo rhv-direct -oo rhv-cluster=L1_vms --bridge VM_Network -oa preallocated

Actual results:
Error: internal_parse_mountable_stub: /dev/rhel_clone/root: No such file or directory.

Expected results:
Should pass disk layout analysis.

Additional info:

[root@localhost ~]# fdisk -l
Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00006678

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200    33554431    15727616   8e  Linux LVM
/dev/sda3        33554432   209715199    88080384   8e  Linux LVM

Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00006678

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048     2099199     1048576   83  Linux
/dev/sdb2         2099200    33554431    15727616   8e  Linux LVM
/dev/sdb3        33554432   209715199    88080384   8e  Linux LVM

Disk /dev/mapper/rhel-root: 103.5 GB, 103502839808 bytes, 202153984 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/rhel-swap: 1719 MB, 1719664640 bytes, 3358720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/rhel_clone-swap: 1719 MB, 1719664640 bytes, 3358720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/rhel_clone-root: 103.5 GB, 103502839808 bytes, 202153984 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@localhost ~]# lvm pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               rhel
  PV Size               <15.00 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              3839
  Free PE               0
  Allocated PE          3839
  PV UUID               DPhqpb-SfUi-1j69-Id4U-eD1D-ri6k-fLS7EA
   
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               rhel
  PV Size               84.00 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              21503
  Free PE               255
  Allocated PE          21248
  PV UUID               sfvcN8-qMfY-V6eq-tw0r-kcYb-TwQT-F3Mceh
   
  --- Physical volume ---
  PV Name               /dev/sdb2
  VG Name               rhel_clone
  PV Size               <15.00 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              3839
  Free PE               0
  Allocated PE          3839
  PV UUID               AelYfd-VqsN-jiCw-MWwe-al1d-wgtx-VNEqFd
   
  --- Physical volume ---
  PV Name               /dev/sdb3
  VG Name               rhel_clone
  PV Size               84.00 GiB / not usable 4.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              21503
  Free PE               255
  Allocated PE          21248
  PV UUID               HN30Cj-1DL9-c0D4-o51p-eGG7-4tMW-68KIT3
   
[root@localhost ~]# lvm lvdisplay
  --- Logical volume ---
  LV Path                /dev/rhel/swap
  LV Name                swap
  VG Name                rhel
  LV UUID                VcbGy8-cMwY-yGmt-eer3-pS4C-EJc1-7gbsNc
  LV Write Access        read/write
  LV Creation host, time localhost, 2018-04-12 22:47:31 +0300
  LV Status              available
  # open                 2
  LV Size                1.60 GiB
  Current LE             410
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/rhel/root
  LV Name                root
  VG Name                rhel
  LV UUID                JAFywO-jsOm-xa80-qoMU-eczt-Do47-WKCQzq
  LV Write Access        read/write
  LV Creation host, time localhost, 2018-04-12 22:47:32 +0300
  LV Status              available
  # open                 1
  LV Size                96.39 GiB
  Current LE             24677
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/rhel_clone/swap
  LV Name                swap
  VG Name                rhel_clone
  LV UUID                VcbGy8-cMwY-yGmt-eer3-pS4C-EJc1-7gbsNc
  LV Write Access        read/write
  LV Creation host, time localhost, 2018-04-12 22:47:31 +0300
  LV Status              available
  # open                 0
  LV Size                1.60 GiB
  Current LE             410
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
   
  --- Logical volume ---
  LV Path                /dev/rhel_clone/root
  LV Name                root
  VG Name                rhel_clone
  LV UUID                JAFywO-jsOm-xa80-qoMU-eczt-Do47-WKCQzq
  LV Write Access        read/write
  LV Creation host, time localhost, 2018-04-12 22:47:32 +0300
  LV Status              available
  # open                 1
  LV Size                96.39 GiB
  Current LE             24677
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

Comment 2 Richard W.M. Jones 2018-05-23 17:51:13 UTC
As discussed in email, this is some sort of LVM problem.

When the LVM PVs are probed initially we see warnings, and the
rhel_clone VG does not show up:

+ lvm pvs
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  WARNING: PV /dev/sdb2 is marked in use but no VG was found using it.
  WARNING: PV /dev/sdb2 might need repairing.
  WARNING: PV /dev/sdb3 is marked in use but no VG was found using it.
  WARNING: PV /dev/sdb3 might need repairing.
  PV         VG        Fmt  Attr PSize   PFree   
  /dev/sda2  rhel      lvm2 a--  <15.00g       0 
  /dev/sda3  rhel      lvm2 a--  <84.00g 1020.00m
  /dev/sdb2  [unknown] lvm2 u--  <15.00g       0 
  /dev/sdb3  [unknown] lvm2 u--   84.00g       0 
+ lvm vgs
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  VG   #PV #LV #SN Attr   VSize  VFree   
  rhel   2   2   0 wz--n- 98.99g 1020.00m
+ lvm lvs
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel -wi-a----- 96.39g                                                    
  swap rhel -wi-a-----  1.60g                                                    

However later on we run the command:

lvm lvs -o vg_name,lv_name -S "lv_role=public && lv_skip_activation!=yes" --noheadings --separator /

which returns:

  rhel/root
  rhel/swap
  rhel_clone/root
  rhel_clone/swap

This ends up causing inspection to get confused because the rhel_clone LVs
cannot actually be opened.

We weren't able to work out why this is happening, nor could we reproduce it.
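For reference, here is a quick way to surface the inconsistency described above (a minimal sketch, assuming a shell on the appliance or on any host showing the same symptom; the lvs query is the one quoted above):

  # List LVs the same way inspection does, then check each device node.
  lvm lvs -o vg_name,lv_name -S "lv_role=public && lv_skip_activation!=yes" \
          --noheadings --separator / |
  while read lv; do
      if test -e "/dev/$lv"; then
          echo "OK      $lv"
      else
          echo "MISSING $lv"   # the LVs that confuse inspection
      fi
  done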

Comment 5 Mor 2018-05-23 19:59:36 UTC
Created attachment 1440746 [details]
lvm dump

Comment 6 Richard W.M. Jones 2018-05-23 22:02:39 UTC
It's in the log file attached to the bug.

Comment 11 Richard W.M. Jones 2018-05-24 08:16:46 UTC
When the guest is being converted, is it shut down?  It's currently
running, but that might just be because you were examining the guest
for other reasons.

I will need to shut it down soon so I can continue my tests.

Comment 12 Mor 2018-05-24 08:20:39 UTC
Yes, the guest was shut down during the conversion. You can shut it down. As you mentioned, I powered it on to examine the disks.

Comment 14 Richard W.M. Jones 2018-05-24 09:15:16 UTC
The first time (during appliance/init) that pvs runs we see the warning:

+ lvm pvs
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  WARNING: PV /dev/sdb2 is marked in use but no VG was found using it.
  WARNING: PV /dev/sdb2 might need repairing.
  WARNING: PV /dev/sdb3 is marked in use but no VG was found using it.
  WARNING: PV /dev/sdb3 might need repairing.
  PV         VG        Fmt  Attr PSize   PFree   
  /dev/sda2  rhel      lvm2 a--  <15.00g       0 
  /dev/sda3  rhel      lvm2 a--  <84.00g 1020.00m
  /dev/sdb2  [unknown] lvm2 u--  <15.00g       0 
  /dev/sdb3  [unknown] lvm2 u--   84.00g       0 

But if I subsequently run lvm pvs from the guestfish command line
using the debug backdoor then we see the PVs correctly:

><fs> debug sh "lvm pvs"
  PV         VG         Fmt  Attr PSize   PFree   
  /dev/sda2  rhel       lvm2 a--  <15.00g       0 
  /dev/sda3  rhel       lvm2 a--  <84.00g 1020.00m
  /dev/sdb2  rhel_clone lvm2 a--  <15.00g       0 
  /dev/sdb3  rhel_clone lvm2 a--  <84.00g 1020.00m

However this is after lvmetad has been started (by the daemon).
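The two code paths can be compared directly on a RHEL 7 host (a sketch, assuming the default RHEL 7 lvm2/lvmetad setup; use_lvmetad is overridden only for the one command):

  # Force the device-scanning path (what the appliance init sees):
  lvm pvs --config 'global { use_lvmetad = 0 }'

  # Compare with the lvmetad-backed path (what later queries see):
  systemctl start lvm2-lvmetad.service
  lvm pvs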

Comment 15 Mor 2018-05-24 09:53:05 UTC
So LVM is valid and device scanning fails to detect the file system?

Comment 16 Richard W.M. Jones 2018-05-24 10:43:41 UTC
Preliminary patch posted:
https://www.redhat.com/archives/libguestfs/2018-May/thread.html#00113

Comment 17 Richard W.M. Jones 2018-05-24 14:02:00 UTC
v1 patch had a few problems, so I have posted v2:

https://www.redhat.com/archives/libguestfs/2018-May/msg00115.html

Comment 18 Richard W.M. Jones 2018-05-24 17:38:45 UTC
v2 patch is included in the libguestfs-1.36.10-6.11.rhvpreview.el7ev
package in this repository:

https://people.redhat.com/~rjones/virt-v2v-RHEL-7.5-rhv-preview/

Comment 19 Mor 2018-05-27 10:46:20 UTC
Richard, can you change the topic to fit the summary of the issue? 
As I understand it, the problem is not corrupted LVM metadata.

Comment 20 Richard W.M. Jones 2018-05-27 15:51:12 UTC
I don't think we can really say whether or not the LVM is corrupted,
but I've changed the title.  I'd really like to find a reproducer,
however. Do you know how the rhel_clone VG was made?

Comment 21 Mor 2018-05-28 07:07:29 UTC
Yes, I cloned the first vmdk disk using vmkfstools on VMware, then imported it on the guest using `vgimportclone` to adjust the UUIDs.
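Spelled out as commands, the procedure looks roughly like this (a sketch; the vmdk file names are illustrative, and the vgimportclone invocation matches the one shown later in comment 34):

  # On the ESXi host: clone the first disk (file names are examples).
  vmkfstools -i rhel.vmdk rhel-clone.vmdk -d thin

  # After attaching the clone to the guest as /dev/sdb, give its PVs new
  # UUIDs and a new VG name so they can coexist with the original VG:
  vgimportclone --basevgname rhel_clone /dev/sdb2 /dev/sdb3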

Comment 22 Richard W.M. Jones 2018-06-04 17:39:11 UTC
Did you have a chance to try the -6.11.rhvpreview.el7ev
package and how did that work out?

Comment 23 Mor 2018-06-10 15:16:14 UTC
Yes, it is working with the version from the preview repository.

Comment 24 Pino Toscano 2018-06-11 08:08:45 UTC
This was fixed with
https://github.com/libguestfs/libguestfs/commit/dd162d2cd56a2ecf4bcd40a7f463940eaac875b8
which is in libguestfs >= 1.39.6.
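To check whether a given installation already carries the fix (a quick sketch, assuming an RPM-based install):

  rpm -q libguestfs
  # Fixed upstream in libguestfs >= 1.39.6; on RHEL 7 the fix landed in
  # libguestfs-1.38.2-5.el7 (see the Fixed In Version field above).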

Comment 28 mxie@redhat.com 2018-06-28 09:07:36 UTC
Hi Mor,

  I am verifying the bug but I can't reproduce the problem. I logged into your vSphere client and checked the partition status of the guest "V2V_RHEL_7.5_200GB_PG" (see the screenshot "partitions-info"). The partitions are strange: the LV "rhel-root" spans both /dev/sda2 and /dev/sda3 and is mounted on "/". Could you please give me the steps you used to create this /dev/sda3?

Comment 29 mxie@redhat.com 2018-06-28 09:08:08 UTC
Created attachment 1455214 [details]
partitions-info.png

Comment 30 Mor 2018-06-28 09:29:50 UTC
Hi Mxie,

As we discussed, you were looking at the wrong machine. Let me know if you need anything else to reproduce it. Currently I am able to reproduce it 100%.

Comment 31 Mor 2018-06-28 10:26:07 UTC
'Error: internal_parse_mountable_stub: /dev/rhel_clone/root: No such file or directory.' is fixed. 

Verified on:
libguestfs-1.36.10-6.14.rhvpreview.el7ev.x86_64
virt-v2v-1.36.10-6.14.rhvpreview.el7ev.x86_64
RHV 4.2.4.5-0.1.el7_3
RHEL 7.5

With exception:
VM fails to run successfully on RHV after conversion.

Comment 32 Pino Toscano 2018-06-28 10:33:15 UTC
(In reply to Mor from comment #31)
> 'Error: internal_parse_mountable_stub: /dev/rhel_clone/root: No such file or
> directory.' is fixed. 
> 
> Verified on:
> libguestfs-1.36.10-6.14.rhvpreview.el7ev.x86_64
> virt-v2v-1.36.10-6.14.rhvpreview.el7ev.x86_64
> RHV 4.2.4.5-0.1.el7_3
> RHEL 7.5
> 
> With exception:
> VM fails to run successfully on RHV after conversion.

This is a RHEL bug, so it must be verified with RHEL packages, not from extra repositories/channels.

Comment 33 Mor 2018-06-28 11:54:28 UTC
We currently run only on RHEL 7.5; I don't have the option to check on RHEL 7.6.

Comment 34 mxie@redhat.com 2018-07-06 07:21:06 UTC
Try to reproduce the bug with builds:
libguestfs.x86_64 1:1.36.10-6.10.rhvpreview.el7ev                             
virt-v2v.x86_64 1:1.36.10-6.10.rhvpreview.el7ev  

Reproduce steps:
1. Because I can't create a guest with the same strange partition layout as the bug's guest, I have to clone the bug's guest to reproduce the bug.

2. Clone the guest's vmdk disk using vmkfstools on the VMware host:
#vmkfstools -i mxie-clone-guest.vmdk mxie-clone-guest-copy.vmdk -d thin -a buslogic
Option --adaptertype is deprecated and hence will be ignored
Destination disk format: VMFS thin-provisioned
Cloning disk 'mxie-clone-guest.vmdk'...
Clone: 100% done.

3. Import the cloned vmdk disk into the guest in the vSphere client.

4. Use `vgimportclone` to adjust the cloned PVs' UUIDs in the guest:
# vgimportclone --basevgname rhel-clone /dev/sdb2 /dev/sdb3

5. Check that the PVs and LVs on the guest match those in the bug.

6. Power off the guest and convert it to RHV 4.2 with virt-v2v:
# virt-v2v -ic vpx://vsphere.local%5cAdministrator.69.92/Datacenter/cluster/b02-h27-r620.rhev.openstack.engineering.redhat.com?no_verify=1 mxie-clone-guest -o rhv -os 10.66.144.40:/home/nfs_export --password-file /tmp/bug
[   0.0] Opening the source -i libvirt -ic vpx://vsphere.local%5cAdministrator.69.92/Datacenter/cluster/b02-h27-r620.rhev.openstack.engineering.redhat.com?no_verify=1 mxie-clone-guest
[  20.9] Creating an overlay to protect the source from being modified
[  27.8] Initializing the target -o rhv -os 10.66.144.40:/home/nfs_export
[  28.2] Opening the overlay
[ 131.9] Inspecting the overlay

***
Dual- or multi-boot operating system detected.  Choose the root filesystem
that contains the main operating system from the list below:

 [1] /dev/rhel/root (Red Hat Enterprise Linux Workstation 7.5 (Maipo))
 [2] /dev/rhel_clone/root (Red Hat Enterprise Linux Workstation 7.5 (Maipo))

Enter a number between 1 and 2, or 'exit': 2
[1147.3] Checking for sufficient free disk space in the guest
[1147.3] Estimating space required on target for each disk
[1147.3] Converting Red Hat Enterprise Linux Workstation 7.5 (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[6172.4] Mapping filesystem data to avoid copying unused and blank areas
virt-v2v: warning: fstrim on guest filesystem /dev/rhel_clone/root failed.  
Usually you can ignore this message.  To find out more read "Trimming" in 
virt-v2v(1).

Original message: fstrim: fstrim: /sysroot/: FITRIM ioctl failed: 
Input/output error
[6367.7] Closing the overlay
[6368.3] Checking if the guest needs BIOS or UEFI to boot
[6368.3] Assigning disks to buses
[6368.3] Copying disk 1/2 to /tmp/v2v.7fLJkA/ea9cb06f-8bf9-4fc8-a247-478e754d898a/images/c5b8bd90-cece-4926-957d-6a9cbf43b85d/c9d79055-04ef-4beb-83ae-1fc96b9faad7 (raw)
^C  (0.00/100%)



Hi Mor,
  I still can't reproduce the bug. Could you please convert my guest "mxie-clone-guest" in your environment? And could you please tell me the libvirt and qemu-kvm versions in your environment? Thanks

Comment 35 Mor 2018-07-08 08:00:53 UTC
Hello Mxie,

The error message is not reproducible with recent libguestfs and virt-v2v preview versions:
libguestfs-1.36.10-6.15.rhvpreview.el7ev.x86_64
virt-v2v-1.36.10-6.15.rhvpreview.el7ev.x86_64

What are you trying to reproduce exactly, the error message or failure to run the VM?

Comment 36 mxie@redhat.com 2018-07-09 03:36:27 UTC
Hi Mor,

   Could you please downgrade virt-v2v and libguestfs to 1.36.10-6.10.rhvpreview.el7ev in your environment and check whether you can reproduce the bug? I want to confirm that my guest "mxie-clone-guest" is set up correctly to reproduce it, thanks!

Comment 39 mxie@redhat.com 2018-07-12 16:31:40 UTC
Try to reproduce the bug with builds:
virt-v2v-1.36.10-6.10.rhvpreview.el7ev.x86_64
libguestfs-1.36.10-6.10.rhvpreview.el7ev.x86_64
nbdkit-plugin-python-common-1.2.2-1.el7ev.x86_64
nbdkit-1.2.2-1.el7ev.x86_64
nbdkit-plugin-python2-1.2.2-1.el7ev.x86_64


Reproduce steps:
1. Prepare a guest that has the same PVs and LVs as the bug's guest, export it to an OVA file, and convert it from the OVA to RHV 4.2's data domain with virt-v2v:
#  virt-v2v -i ova bug1581810-guest-ova  -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct -of raw --password-file /tmp/passwd -b ovirtmgmt
[   0.3] Opening the source -i ova bug1581810-guest-ova
virt-v2v: warning: making OVA directory public readable to work around 
libvirt bug https://bugzilla.redhat.com/1045069
virt-v2v: warning: ova disk has an unknown VMware controller type (20), 
please report this as a bug supplying the *.ovf file extracted from the ova
[1036.8] Creating an overlay to protect the source from being modified
[1040.3] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os nfs_data
[1048.0] Opening the overlay
[1290.8] Inspecting the overlay

***
Dual- or multi-boot operating system detected.  Choose the root filesystem
that contains the main operating system from the list below:

 [1] /dev/rhel/root (Red Hat Enterprise Linux Workstation 7.5 (Maipo))
 [2] /dev/rhel_clone/root (Red Hat Enterprise Linux Workstation 7.5 (Maipo))

Enter a number between 1 and 2, or 'exit': 1
[2908.3] Checking for sufficient free disk space in the guest
[2908.3] Estimating space required on target for each disk
[2908.3] Converting Red Hat Enterprise Linux Workstation 7.5 (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[3099.7] Mapping filesystem data to avoid copying unused and blank areas
[3101.6] Closing the overlay
[3103.8] Checking if the guest needs BIOS or UEFI to boot
[3103.8] Assigning disks to buses
[3103.8] Copying disk 1/2 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.kD7QMe/nbdkit0.sock", "file.export": "/" } (raw)
^Cnbdkit: python[1]: error: write reply: Broken pipe


Result:
   Can't reproduce the bug


Try to reproduce the bug with builds:
virt-v2v-1.38.1-1.el7.x86_64
libguestfs-1.38.1-1.el7.x86_64

Reproduce steps:
1. Convert the guest from the same OVA as above with virt-v2v, using the null output (-o null):
# virt-v2v -i ova bug1581810-guest-ova -o null
[   0.0] Opening the source -i ova bug1581810-guest-ova
virt-v2v: warning: making OVA directory public readable to work around 
libvirt bug https://bugzilla.redhat.com/1045069
[1069.8] Creating an overlay to protect the source from being modified
[1071.0] Initializing the target -o null
[1071.0] Opening the overlay
[1080.4] Inspecting the overlay

***
Dual- or multi-boot operating system detected.  Choose the root filesystem
that contains the main operating system from the list below:

 [1] /dev/rhel/root (Red Hat Enterprise Linux Workstation 7.5 (Maipo))
 [2] /dev/rhel_clone/root (Red Hat Enterprise Linux Workstation 7.5 (Maipo))

Enter a number between 1 and 2, or ‘exit’: 2
[1165.6] Checking for sufficient free disk space in the guest
[1165.6] Estimating space required on target for each disk
[1165.6] Converting Red Hat Enterprise Linux Workstation 7.5 (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[1353.1] Mapping filesystem data to avoid copying unused and blank areas
[1355.4] Closing the overlay
[1357.9] Checking if the guest needs BIOS or UEFI to boot
[1357.9] Assigning disks to buses
[1357.9] Copying disk 1/2 to qemu URI json:{ "file.driver": "null-co", "file.size": "1E" } (raw)
    (100.00/100%)
[1999.5] Copying disk 2/2 to qemu URI json:{ "file.driver": "null-co", "file.size": "1E" } (raw)
^C  (16.00/100%)


Result:
   Can't reproduce the bug


Verify the bug with builds:
virt-v2v-1.38.2-6.el7.x86_64
libguestfs-1.38.2-6.el7.x86_64
libvirt-4.5.0-2.el7.x86_64
qemu-kvm-rhev-2.12.0-7.el7.x86_64


Steps:
1. Convert the guest from the same OVA as above to RHV 4.2's data domain with virt-v2v:
# virt-v2v -i ova bug1581810-guest-ova -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct -of raw --password-file /tmp/passwd -b ovirtmgmt 
[   0.3] Opening the source -i ova bug1581810-guest-ova
virt-v2v: warning: making OVA directory public readable to work around 
libvirt bug https://bugzilla.redhat.com/1045069
[1146.8] Creating an overlay to protect the source from being modified
[1148.4] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os nfs_data
[1154.6] Opening the overlay
[1168.8] Inspecting the overlay

***
Dual- or multi-boot operating system detected.  Choose the root filesystem
that contains the main operating system from the list below:

 [1] /dev/rhel/root (Red Hat Enterprise Linux Workstation 7.5 (Maipo))
 [2] /dev/rhel_clone/root (Red Hat Enterprise Linux Workstation 7.5 (Maipo))

Enter a number between 1 and 2, or ‘exit’: 2
[2638.0] Checking for sufficient free disk space in the guest
[2638.0] Estimating space required on target for each disk
[2638.0] Converting Red Hat Enterprise Linux Workstation 7.5 (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[2862.6] Mapping filesystem data to avoid copying unused and blank areas
[2866.2] Closing the overlay
[2868.4] Checking if the guest needs BIOS or UEFI to boot
[2868.4] Assigning disks to buses
[2868.4] Copying disk 1/2 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.m7TYQx/nbdkit0.sock", "file.export": "/" } (raw)
^C  (0.00/100%)

Result:
  Can't reproduce the bug with the latest virt-v2v, so closing the bug as CURRENTRELEASE.

