Bug 1598440 - virt-v2v will hang at opening the overlay during conversion with libvirt-4.5.0-1
Summary: virt-v2v will hang at opening the overlay during conversion with libvirt-4.5.0-1
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.6
Hardware: x86_64
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Daniel Berrangé
QA Contact: mxie@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: TRACKER-bugs-affecting-libguestfs
 
Reported: 2018-07-05 13:29 UTC by mxie@redhat.com
Modified: 2018-10-30 09:58 UTC
CC List: 9 users

Fixed In Version: libvirt-4.5.0-2.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-30 09:57:31 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
v2v-hang-opening-overlay.log (17.55 KB, text/plain)
2018-07-05 13:29 UTC, mxie@redhat.com
no flags Details
libvirt.log (37.84 KB, text/plain)
2018-07-06 08:14 UTC, Richard W.M. Jones
no flags Details
libvirtd.log (952.87 KB, text/plain)
2018-07-06 08:15 UTC, Richard W.M. Jones
no flags Details
guestfs-3ah8l6e1k43fmk2c.log (6.37 KB, text/plain)
2018-07-06 08:43 UTC, Richard W.M. Jones
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2018:3113 0 None None None 2018-10-30 09:58:30 UTC

Description mxie@redhat.com 2018-07-05 13:29:32 UTC
Created attachment 1456767 [details]
v2v-hang-opening-overlay.log

Description of problem:
virt-v2v will hang at opening the overlay during conversion with libvirt-4.5.0-1

Version-Release number of selected component (if applicable):
virt-v2v-1.38.2-6.el7.x86_64
libguestfs-1.38.2-6.el7.x86_64
libvirt-4.5.0-1.el7.x86_64
qemu-kvm-rhev-2.12.0-7.el7.x86_64


How reproducible:
100%

Steps to Reproduce:
1. Convert a guest with virt-v2v; the conversion stays at "Opening the overlay" for several hours:
# virt-v2v avocado-vt-vm1 -o null
[   0.0] Opening the source -i libvirt avocado-vt-vm1
[   0.0] Creating an overlay to protect the source from being modified
[   0.1] Initializing the target -o null
[   0.1] Opening the overlay
^C


Actual results:
As described above: the conversion hangs at "Opening the overlay".

Expected results:
The virt-v2v conversion finishes successfully.

Additional info:
1. The problem cannot be reproduced after downgrading libvirt to 4.4.0-1, so this is a regression in libvirt:
libvirt-4.4.0-1.el7.x86_64
virt-v2v-1.38.2-6.el7.x86_64
libguestfs-1.38.2-6.el7.x86_64
qemu-kvm-rhev-2.12.0-7.el7.x86_64

Comment 3 tingting zheng 2018-07-06 02:43:00 UTC
Adding virt-v2v developer Richard for further investigation.

Comment 4 Richard W.M. Jones 2018-07-06 07:29:47 UTC
I've noticed something similar happening in Fedora Rawhide
at the moment.  I didn't have time to look into it yet.

Try running:
  libguestfs-test-tool

It looks like a libvirt, qemu or kernel bug of some kind.

Comment 5 tingting zheng 2018-07-06 07:50:24 UTC
(In reply to Richard W.M. Jones from comment #4)
> I've noticed something similar happening in Fedora Rawhide
> at the moment.  I didn't have time to look into it yet.
> 
> Try running:
>   libguestfs-test-tool
> 
> It looks like a libvirt, qemu or kernel bug of some kind.

libguestfs-test-tool hangs at the same part when trying to launch libvirt guest.

# libguestfs-test-tool
     ************************************************************
     *                    IMPORTANT NOTICE
     *
     * When reporting bugs, include the COMPLETE, UNEDITED
     * output below in your bug report.
     *
     ************************************************************
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
XDG_RUNTIME_DIR=/run/user/0
SELinux: Enforcing
guestfs_get_append: (null)
guestfs_get_autosync: 1
guestfs_get_backend: libvirt
guestfs_get_backend_settings: []
guestfs_get_cachedir: /var/tmp
guestfs_get_hv: /usr/libexec/qemu-kvm
guestfs_get_memsize: 500
guestfs_get_network: 0
guestfs_get_path: /usr/lib64/guestfs
guestfs_get_pgroup: 0
guestfs_get_program: libguestfs-test-tool
guestfs_get_recovery_proc: 1
guestfs_get_smp: 1
guestfs_get_sockdir: /tmp
guestfs_get_tmpdir: /tmp
guestfs_get_trace: 0
guestfs_get_verbose: 1
host_cpu: x86_64
Launching appliance, timeout set to 600 seconds.
libguestfs: launch: program=libguestfs-test-tool
libguestfs: launch: version=1.38.2rhel=7,release=6.el7,libvirt
libguestfs: launch: backend registered: unix
libguestfs: launch: backend registered: uml
libguestfs: launch: backend registered: libvirt
libguestfs: launch: backend registered: direct
libguestfs: launch: backend=libvirt
libguestfs: launch: tmpdir=/tmp/libguestfsg4bVYg
libguestfs: launch: umask=0022
libguestfs: launch: euid=0
libguestfs: libvirt version = 4005000 (4.5.0)
libguestfs: guest random name = guestfs-d6zxdsa4pzpkvi7p
libguestfs: connect to libvirt
libguestfs: opening libvirt handle: URI = qemu:///system, auth = default+wrapper, flags = 0
libguestfs: successfully opened libvirt handle: conn = 0x55a56abf7100
libguestfs: qemu version (reported by libvirt) = 2012000 (2.12.0)
libguestfs: get libvirt capabilities
libguestfs: parsing capabilities XML
libguestfs: build appliance
libguestfs: begin building supermin appliance
libguestfs: run supermin
libguestfs: command: run: /usr/bin/supermin5
libguestfs: command: run: \ --build
libguestfs: command: run: \ --verbose
libguestfs: command: run: \ --if-newer
libguestfs: command: run: \ --lock /var/tmp/.guestfs-0/lock
libguestfs: command: run: \ --copy-kernel
libguestfs: command: run: \ -f ext2
libguestfs: command: run: \ --host-cpu x86_64
libguestfs: command: run: \ /usr/lib64/guestfs/supermin.d
libguestfs: command: run: \ -o /var/tmp/.guestfs-0/appliance.d
supermin: version: 5.1.19
supermin: rpm: detected RPM version 4.11
supermin: package handler: fedora/rpm
supermin: acquiring lock on /var/tmp/.guestfs-0/lock
supermin: build: /usr/lib64/guestfs/supermin.d
supermin: reading the supermin appliance
supermin: build: visiting /usr/lib64/guestfs/supermin.d/base.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/daemon.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/excludefiles type uncompressed excludefiles
supermin: build: visiting /usr/lib64/guestfs/supermin.d/hostfiles type uncompressed hostfiles
supermin: build: visiting /usr/lib64/guestfs/supermin.d/init.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/packages type uncompressed packages
supermin: build: visiting /usr/lib64/guestfs/supermin.d/udev-rules.tar.gz type gzip base image (tar)
supermin: build: visiting /usr/lib64/guestfs/supermin.d/zz-winsupport.tar.gz type gzip base image (tar)
supermin: mapping package names to installed packages
supermin: resolving full list of package dependencies
supermin: build: 192 packages, including dependencies
supermin: build: 31629 files
supermin: build: 7572 files, after matching excludefiles
supermin: build: 7579 files, after adding hostfiles
supermin: build: 7573 files, after removing unreadable files
supermin: build: 7597 files, after munging
supermin: kernel: looking for kernel using environment variables ...
supermin: kernel: looking for kernels in /lib/modules/*/vmlinuz ...
supermin: kernel: looking for kernels in /boot ...
supermin: kernel: kernel version of /boot/vmlinuz-3.10.0-907.el7.x86_64 = 3.10.0-907.el7.x86_64 (from content)
supermin: kernel: picked modules path /lib/modules/3.10.0-907.el7.x86_64
supermin: kernel: kernel version of /boot/vmlinuz-0-rescue-039312dd01ca49a38c625180d4dc0610 = 3.10.0-907.el7.x86_64 (from content)
supermin: kernel: picked modules path /lib/modules/3.10.0-907.el7.x86_64
supermin: kernel: picked vmlinuz /boot/vmlinuz-3.10.0-907.el7.x86_64
supermin: kernel: kernel_version 3.10.0-907.el7.x86_64
supermin: kernel: modpath /lib/modules/3.10.0-907.el7.x86_64
supermin: ext2: creating empty ext2 filesystem '/var/tmp/.guestfs-0/appliance.d.ueoqupei/root'
supermin: ext2: populating from base image
supermin: ext2: copying files from host filesystem
supermin: ext2: copying kernel modules
supermin: ext2: creating minimal initrd '/var/tmp/.guestfs-0/appliance.d.ueoqupei/initrd'
supermin: ext2: wrote 31 modules to minimal initrd
supermin: renaming /var/tmp/.guestfs-0/appliance.d.ueoqupei to /var/tmp/.guestfs-0/appliance.d
libguestfs: finished building supermin appliance
libguestfs: command: run: qemu-img
libguestfs: command: run: \ create
libguestfs: command: run: \ -f qcow2
libguestfs: command: run: \ -o backing_file=/var/tmp/.guestfs-0/appliance.d/root,backing_fmt=raw
libguestfs: command: run: \ /tmp/libguestfsg4bVYg/overlay2.qcow2
Formatting '/tmp/libguestfsg4bVYg/overlay2.qcow2', fmt=qcow2 size=4294967296 backing_file=/var/tmp/.guestfs-0/appliance.d/root backing_fmt=raw cluster_size=65536 lazy_refcounts=off refcount_bits=16
libguestfs: create libvirt XML
libguestfs: libvirt XML:\n<?xml version="1.0"?>\n<domain type="kvm" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">\n  <name>guestfs-d6zxdsa4pzpkvi7p</name>\n  <memory unit="MiB">500</memory>\n  <currentMemory unit="MiB">500</currentMemory>\n  <cpu mode="host-passthrough">\n    <model fallback="allow"/>\n  </cpu>\n  <vcpu>1</vcpu>\n  <clock offset="utc">\n    <timer name="rtc" tickpolicy="catchup"/>\n    <timer name="pit" tickpolicy="delay"/>\n    <timer name="hpet" present="no"/>\n  </clock>\n  <os>\n    <type>hvm</type>\n    <kernel>/var/tmp/.guestfs-0/appliance.d/kernel</kernel>\n    <initrd>/var/tmp/.guestfs-0/appliance.d/initrd</initrd>\n    <cmdline>panic=1 console=ttyS0 edd=off udevtimeout=6000 udev.event-timeout=6000 no_timer_check printk.time=1 cgroup_disable=memory usbcore.nousb cryptomgr.notests tsc=reliable 8250.nr_uarts=1 root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm</cmdline>\n    <bios useserial="yes"/>\n  </os>\n  <on_reboot>destroy</on_reboot>\n  <devices>\n    <rng model="virtio">\n      <backend model="random">/dev/urandom</backend>\n    </rng>\n    <controller type="scsi" index="0" model="virtio-scsi"/>\n    <disk device="disk" type="file">\n      <source file="/tmp/libguestfsg4bVYg/scratch1.img"/>\n      <target dev="sda" bus="scsi"/>\n      <driver name="qemu" type="raw" cache="unsafe"/>\n      <address type="drive" controller="0" bus="0" target="0" unit="0"/>\n    </disk>\n    <disk type="file" device="disk">\n      <source file="/tmp/libguestfsg4bVYg/overlay2.qcow2"/>\n      <target dev="sdb" bus="scsi"/>\n      <driver name="qemu" type="qcow2" cache="unsafe"/>\n      <address type="drive" controller="0" bus="0" target="1" unit="0"/>\n    </disk>\n    <serial type="unix">\n      <source mode="connect" path="/tmp/libguestfscd6xDK/console.sock"/>\n      <target port="0"/>\n    </serial>\n    <channel type="unix">\n      <source mode="connect" path="/tmp/libguestfscd6xDK/guestfsd.sock"/>\n      <target type="virtio" 
name="org.libguestfs.channel.0"/>\n    </channel>\n    <controller type="usb" model="none"/>\n    <memballoon model="none"/>\n  </devices>\n  <qemu:commandline>\n    <qemu:env name="TMPDIR" value="/var/tmp"/>\n  </qemu:commandline>\n</domain>\n
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -R
libguestfs: command: run: \ -Z /var/tmp/.guestfs-0
libguestfs: /var/tmp/.guestfs-0:
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 .
libguestfs: drwxrwxrwt. root root system_u:object_r:tmp_t:s0       ..
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 appliance.d
libguestfs: -rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 lock
libguestfs: 
libguestfs: /var/tmp/.guestfs-0/appliance.d:
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 .
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 ..
libguestfs: -rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 initrd
libguestfs: -rwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 kernel
libguestfs: -rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 root
libguestfs: command: run: ls
libguestfs: command: run: \ -a
libguestfs: command: run: \ -l
libguestfs: command: run: \ -Z /tmp/libguestfscd6xDK
libguestfs: drwxr-xr-x. root root unconfined_u:object_r:user_tmp_t:s0 .
libguestfs: drwxrwxrwt. root root system_u:object_r:tmp_t:s0       ..
libguestfs: srw-rw----. root qemu unconfined_u:object_r:user_tmp_t:s0 console.sock
libguestfs: srw-rw----. root qemu unconfined_u:object_r:user_tmp_t:s0 guestfsd.sock
libguestfs: launch libvirt guest

Comment 6 Richard W.M. Jones 2018-07-06 08:06:21 UTC
I can reproduce this just by updating libvirt to 4.5.0-1.el7
(leaving everything else unchanged), so it is a libvirt bug.
It looks like the same thing I was seeing in Rawhide.

Comment 7 Richard W.M. Jones 2018-07-06 08:14:53 UTC
Created attachment 1456906 [details]
libvirt.log

libvirt.log (client side) during hang.

Comment 8 Richard W.M. Jones 2018-07-06 08:15:41 UTC
Created attachment 1456907 [details]
libvirtd.log

libvirtd (server side) log during the same hang.

Comment 9 Daniel Berrangé 2018-07-06 08:28:30 UTC
I can't reproduce it with F28 + libvirt 4.5.0, but I haven't tried Rawhide yet.

Could you provide /var/log/libvirt/qemu/$GUESTNAME.log, as it might contain something useful.

Also, test whether SELinux permissive mode helps; if it does, is there an AVC denial?

Comment 10 Richard W.M. Jones 2018-07-06 08:43:34 UTC
Created attachment 1456924 [details]
guestfs-3ah8l6e1k43fmk2c.log

qemu log for guest

Comment 11 Richard W.M. Jones 2018-07-06 08:45:48 UTC
SELinux is set to Enforcing.  Setting it to Permissive does not
appear to help, so I suppose it's not an SELinux problem.

Comment 12 Richard W.M. Jones 2018-07-06 08:56:09 UTC
Downgrading to qemu-kvm-rhev-2.10.0-21.el7_5.4 fixes
the problem, indicating that the common factor might be
libvirt 4.5.0 + qemu 2.12.

Comment 13 Daniel Berrangé 2018-07-06 10:11:49 UTC
I screwed up the chardev FD passing code and passed in a UNIX listener socket even for client-mode chardevs, and libguestfs uses client mode. This patch ought to fix it:

https://www.redhat.com/archives/libvir-list/2018-July/msg00341.html

Comment 16 mxie@redhat.com 2018-07-10 07:57:32 UTC
Verified the bug with these builds:
virt-v2v-1.38.2-6.el7.x86_64
libguestfs-1.38.2-6.el7.x86_64
libvirt-4.5.0-2.el7.x86_64
qemu-kvm-rhev-2.12.0-7.el7.x86_64
virtio-win-1.9.4-2.el7.noarch


Steps:
1. Convert a guest from VMware to an RHV 4.2 data domain with virt-v2v:
# virt-v2v -ic vpx://vsphere.local%5cAdministrator.75.182/data/10.73.72.61/?no_verify=1  esx6.0-win2016-x86_64 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct -of raw --password-file /tmp/passwd -b ovirtmgmt
[   0.3] Opening the source -i libvirt -ic vpx://vsphere.local%5cAdministrator.75.182/data/10.73.72.61/?no_verify=1 esx6.0-win2016-x86_64
[   2.0] Creating an overlay to protect the source from being modified
[   3.0] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os nfs_data
[   4.6] Opening the overlay
[  25.6] Inspecting the overlay
[ 136.2] Checking for sufficient free disk space in the guest
[ 136.2] Estimating space required on target for each disk
[ 136.2] Converting Windows Server 2016 Standard to run on KVM
virt-v2v: warning: /usr/share/virt-tools/pnp_wait.exe is missing.  
Firstboot scripts may conflict with PnP.
virt-v2v: warning: there is no QXL driver for this version of Windows (10.0 
x86_64).  virt-v2v looks for this driver in 
/usr/share/virtio-win/virtio-win.iso

The guest will be configured to use a basic VGA display driver.
virt-v2v: This guest has virtio drivers installed.
[ 166.6] Mapping filesystem data to avoid copying unused and blank areas
[ 168.7] Closing the overlay
[ 169.5] Checking if the guest needs BIOS or UEFI to boot
[ 169.5] Assigning disks to buses
[ 169.5] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.dW7QbF/nbdkit1.sock", "file.export": "/" } (raw)
    (100.00/100%)
[2118.2] Creating output metadata
[2139.4] Finishing off

2. Power on the guest on RHV 4.2; the guest checkpoints pass.

3. Convert a guest from Xen to an RHV 4.2 export domain with virt-v2v:
# virt-v2v -ic xen+ssh://root.3.21 xen-hvm-rhel7.5-x86_64 -o rhv -os 10.66.144.40:/home/nfs_export -of qcow2 -b ovirtmgmt
[   0.0] Opening the source -i libvirt -ic xen+ssh://root.3.21 xen-hvm-rhel7.5-x86_64
[   0.6] Creating an overlay to protect the source from being modified
[   5.3] Initializing the target -o rhv -os 10.66.144.40:/home/nfs_export
[   8.3] Opening the overlay
[  66.9] Inspecting the overlay
[ 117.5] Checking for sufficient free disk space in the guest
[ 117.5] Estimating space required on target for each disk
[ 117.5] Converting Red Hat Enterprise Linux Server 7.5 (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 367.3] Mapping filesystem data to avoid copying unused and blank areas
[ 370.6] Closing the overlay
[ 373.6] Checking if the guest needs BIOS or UEFI to boot
[ 373.6] Assigning disks to buses
[ 373.6] Copying disk 1/1 to /tmp/v2v.T0IcSJ/ea9cb06f-8bf9-4fc8-a247-478e754d898a/images/fc7ce3c2-a4e9-45bf-96a1-bda41ef05f24/a47c2e18-d198-48dd-94e2-e3f7341139a4 (qcow2)
    (100.00/100%)
[ 724.6] Creating output metadata
[ 724.8] Finishing off

4. Import the guest from the export domain to the data domain on RHV 4.2 and power it on; the guest checkpoints pass.


Result:
   The virt-v2v conversion finishes successfully with libvirt-4.5.0-2, so moving the bug from ON_QA to VERIFIED.

Comment 18 errata-xmlrpc 2018-10-30 09:57:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:3113

