Bug 1611690 - part_to_dev "/dev/sdp1" returns "/dev/sd" instead of "/dev/sdp"
Summary: part_to_dev "/dev/sdp1" returns "/dev/sd" instead of "/dev/sdp"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libguestfs
Version: 7.4
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Richard W.M. Jones
QA Contact: Virtualization Bugs
URL:
Whiteboard: V2V
Depends On: 1551055
Blocks:
 
Reported: 2018-08-02 16:01 UTC by Koutuk Shukla
Modified: 2021-09-09 15:16 UTC
CC List: 6 users

Fixed In Version: libguestfs-1.38.0-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-30 07:47:00 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
Red Hat Knowledge Base (Solution) 4379101 (last updated 2019-08-28 16:17:34 UTC)
Red Hat Product Errata RHEA-2018:3021 (last updated 2018-10-30 07:47:28 UTC)

Description Koutuk Shukla 2018-08-02 16:01:29 UTC
Description of problem:

virt-v2v fails with error "virt-v2v: error: libguestfs error: part_get_parttype"

Version-Release number of selected component (if applicable):

rhevm-4.1.6.2-0.1.el7.noarch

Red Hat Virtualization Host 4.1 (el7.4):
virt-v2v-1.36.3-6.el7_4.3.x86_64
libvirt-daemon-3.2.0-14.el7_4.3.x86_64
libguestfs-1.36.3-6.el7_4.3.x86_64

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:
v2v conversion fails

Expected results:
v2v conversion should succeed without any issues.

Additional info:

Comment 4 Richard W.M. Jones 2018-08-09 09:28:33 UTC
Sorry I missed this bug because I was on holiday.  We had another
user reporting the same issue last week, and I absolutely cannot
find the BZ for that right now.  Still looking ...

Comment 5 Richard W.M. Jones 2018-08-09 09:39:19 UTC
I still cannot find it.  In any case this issue is fixed in RHEL 7.6
and libguestfs 1.38.

Comment 6 Richard W.M. Jones 2018-08-09 10:25:23 UTC
Reproducer (on 1.36.10-6.16.rhvpreview.el7ev):

$ guestfish scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 10M : run : part-disk /dev/sdp mbr : blockdev-getsize64 /dev/sdp1 : part-to-dev /dev/sdp1 
10355200
/dev/sd     <-- incorrect

Does not reproduce in 1.38:

$ guestfish scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 1M : scratch 10M : run : part-disk /dev/sdp mbr : blockdev-getsize64 /dev/sdp1 : part-to-dev /dev/sdp1 
10355200
/dev/sdp    <--- correct

So this will be fixed by upgrading to RHEL 7.6.
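For anyone scripting the check, the same reproducer can be expressed through the libguestfs Python bindings. This is a minimal sketch assuming the python-libguestfs package (which provides the guestfs module) is installed; the 15 filler scratch disks push the partitioned 10M disk out to /dev/sdp, mirroring the guestfish command above:

import guestfs

g = guestfs.GuestFS(python_return_dict=True)
for _ in range(15):
    g.add_drive_scratch(1024 * 1024)       # 1M scratch disks: /dev/sda .. /dev/sdo
g.add_drive_scratch(10 * 1024 * 1024)      # 10M scratch disk -> /dev/sdp
g.launch()

g.part_disk("/dev/sdp", "mbr")             # creates /dev/sdp1
print(g.blockdev_getsize64("/dev/sdp1"))   # 10355200
print(g.part_to_dev("/dev/sdp1"))          # "/dev/sd" on 1.36, "/dev/sdp" on 1.38
g.close()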

Comment 8 Pino Toscano 2018-08-09 10:36:55 UTC
Fixed by rebase.
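
For context, the symptom is consistent with a device-name parser that unconditionally drops a trailing "p" before the partition digits (the separator needed for /dev/mmcblk0p1-style names), which turns "/dev/sdp1" into "/dev/sd". A minimal sketch of the correct rule in Python, purely illustrative and not the libguestfs C implementation:

def part_to_dev(part):
    # Strip the trailing partition number, e.g. "/dev/sdp1" -> "/dev/sdp".
    base = part.rstrip("0123456789")
    if base == part:
        raise ValueError("%s: not a partition device" % part)
    # Drop a "p" separator only when the character before it is a digit,
    # as in "/dev/mmcblk0p1" -> "/dev/mmcblk0".  Dropping it unconditionally
    # is the buggy behaviour seen above: "/dev/sdp1" -> "/dev/sd".
    if base.endswith("p") and len(base) > 1 and base[-2].isdigit():
        base = base[:-1]
    return base

assert part_to_dev("/dev/sdp1") == "/dev/sdp"
assert part_to_dev("/dev/mmcblk0p1") == "/dev/mmcblk0"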

Comment 10 mxie@redhat.com 2018-08-15 09:11:38 UTC
I can reproduce the bug with builds:
virt-v2v-1.36.3-6.el7_4.3.x86_64
libguestfs-1.36.3-6.el7_4.3.x86_64
libvirt-4.5.0-6.el7.x86_64
qemu-kvm-rhev-2.12.0-10.el7.x86_64

Steps to reproduce:
1. Prepare a RHEL 5 guest on an ESXi 6.7 host, add 19 additional disks to the guest, and create a partition on each additional disk (/dev/sdb1 to /dev/sdt1).

2. Convert the guest from VMware with virt-v2v; the conversion fails with the same error as this bug:
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel5.11-x64-bug1611690 --password-file /tmp/passwd -o null -v -x
....
libguestfs: trace: v2v: part_to_dev "/dev/sdp1"
guestfsd: main_loop: proc 214 (part_get_parttype) took 0.01 seconds
guestfsd: main_loop: new request, len 0x38
guestfsd: main_loop: proc 272 (part_to_dev) took 0.00 seconds
libguestfs: trace: v2v: part_to_dev = "/dev/sd"
libguestfs: trace: v2v: part_get_parttype "/dev/sd"
guestfsd: main_loop: new request, len 0x34
/dev/sd: No such file or directory
guestfsd: error: part_get_parttype_stub: /dev/sd: No such file or directory
guestfsd: main_loop: proc 214 (part_get_parttype) took 0.00 seconds
libguestfs: trace: v2v: part_get_parttype = NULL (error)
virt-v2v: error: libguestfs error: part_get_parttype: 
part_get_parttype_stub: /dev/sd: No such file or directory
rm -rf '/var/tmp/null.4KSt6z'
libguestfs: trace: v2v: close
libguestfs: closing guestfs handle 0x1791ae0 (state 2)
libguestfs: trace: v2v: internal_autosync
guestfsd: main_loop: new request, len 0x28
umount-all: /proc/mounts: fsname=rootfs dir=/ type=rootfs opts=rw freq=0 passno=0
umount-all: /proc/mounts: fsname=proc dir=/proc type=proc opts=rw,relatime freq=0 passno=0
umount-all: /proc/mounts: fsname=/dev/root dir=/ type=ext2 opts=rw,noatime freq=0 passno=0
umount-all: /proc/mounts: fsname=/proc dir=/proc type=proc opts=rw,relatime freq=0 passno=0
umount-all: /proc/mounts: fsname=/sys dir=/sys type=sysfs opts=rw,relatime freq=0 passno=0
umount-all: /proc/mounts: fsname=tmpfs dir=/run type=tmpfs opts=rw,nosuid,relatime,size=399632k,mode=755 freq=0 passno=0
umount-all: /proc/mounts: fsname=/dev dir=/dev type=devtmpfs opts=rw,relatime,size=995944k,nr_inodes=248986,mode=755 freq=0 passno=0
umount-all: /proc/mounts: fsname=/dev/pts dir=/dev/pts type=devpts opts=rw,relatime,mode=600,ptmxmode=000 freq=0 passno=0
umount-all: /proc/mounts: fsname=/dev/mapper/VolGroup00-LogVol00 dir=/sysroot type=ext3 opts=rw,relatime,data=ordered freq=0 passno=0
umount-all: /proc/mounts: fsname=/dev/sda1 dir=/sysroot/boot type=ext3 opts=rw,relatime,data=ordered freq=0 passno=0
commandrvf: stdout=n stderr=y flags=0x0
commandrvf: umount /sysroot/boot
commandrvf: stdout=n stderr=y flags=0x0
commandrvf: umount /sysroot
fsync /dev/sda
fsync /dev/sdb
fsync /dev/sdc
fsync /dev/sdd
fsync /dev/sde
fsync /dev/sdf
fsync /dev/sdg
fsync /dev/sdh
fsync /dev/sdi
fsync /dev/sdj
fsync /dev/sdk
fsync /dev/sdl
fsync /dev/sdm
fsync /dev/sdn
fsync /dev/sdo
fsync /dev/sdp
fsync /dev/sdq
fsync /dev/sdr
fsync /dev/sds
fsync /dev/sdt
guestfsd: main_loop: proc 282 (internal_autosync) took 0.03 seconds
libguestfs: trace: v2v: internal_autosync = 0
libguestfs: sending SIGTERM to process 3513
libguestfs: qemu maxrss 324172K
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsBHdbE6
libguestfs: command: run: rm
libguestfs: command: run: \ -rf /tmp/libguestfsn9qbqD
libguestfs: trace: close
libguestfs: closing guestfs handle 0x1791860 (state 0)
libguestfs: trace: close
....


Verified the bug with builds:
virt-v2v-1.38.2-10.el7.x86_64
libguestfs-1.38.2-10.el7.x86_64
libvirt-4.5.0-6.el7.x86_64
qemu-kvm-rhev-2.12.0-10.el7.x86_64

Steps:
1. Update virt-v2v on the conversion server and convert the guest from VMware to RHV with v2v again; the conversion finishes without error:
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel5.11-x64-bug1611690 --password-file /tmp/passwd -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct=true -of raw
[   0.2] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel5.11-x64-bug1611690
[  25.6] Creating an overlay to protect the source from being modified
[  38.4] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os nfs_data
[  58.7] Opening the overlay
[ 118.3] Inspecting the overlay
[ 188.4] Checking for sufficient free disk space in the guest
[ 188.4] Estimating space required on target for each disk
[ 188.4] Converting Red Hat Enterprise Linux Server release 5.11 (Tikanga) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[1617.1] Mapping filesystem data to avoid copying unused and blank areas
[1627.1] Closing the overlay
[1627.3] Checking if the guest needs BIOS or UEFI to boot
[1627.3] Assigning disks to buses
[1627.3] Copying disk 1/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit1.sock", "file.export": "/" } (raw)
    (100.00/100%)
[2464.8] Copying disk 2/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit2.sock", "file.export": "/" } (raw)
    (100.00/100%)
[2594.2] Copying disk 3/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit3.sock", "file.export": "/" } (raw)
    (100.00/100%)
[2720.6] Copying disk 4/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit4.sock", "file.export": "/" } (raw)
    (100.00/100%)
[2845.6] Copying disk 5/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit5.sock", "file.export": "/" } (raw)
    (100.00/100%)
[2959.5] Copying disk 6/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit6.sock", "file.export": "/" } (raw)
    (100.00/100%)
[3090.5] Copying disk 7/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit7.sock", "file.export": "/" } (raw)
    (100.00/100%)
[3233.1] Copying disk 8/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit8.sock", "file.export": "/" } (raw)
    (100.00/100%)
[3372.6] Copying disk 9/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit9.sock", "file.export": "/" } (raw)
    (100.00/100%)
[3524.1] Copying disk 10/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit10.sock", "file.export": "/" } (raw)
    (100.00/100%)
[3648.5] Copying disk 11/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit11.sock", "file.export": "/" } (raw)
    (100.00/100%)
[3794.4] Copying disk 12/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit12.sock", "file.export": "/" } (raw)
    (100.00/100%)
[3930.4] Copying disk 13/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit13.sock", "file.export": "/" } (raw)
    (100.00/100%)
[4059.2] Copying disk 14/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit14.sock", "file.export": "/" } (raw)
    (100.00/100%)
[4169.0] Copying disk 15/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit15.sock", "file.export": "/" } (raw)
    (100.00/100%)
[4287.1] Copying disk 16/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit16.sock", "file.export": "/" } (raw)
    (100.00/100%)
[4403.8] Copying disk 17/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit17.sock", "file.export": "/" } (raw)
    (100.00/100%)
[4526.2] Copying disk 18/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit18.sock", "file.export": "/" } (raw)
    (100.00/100%)
[4645.5] Copying disk 19/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit19.sock", "file.export": "/" } (raw)
    (100.00/100%)
[4763.8] Copying disk 20/20 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.N3iALh/nbdkit20.sock", "file.export": "/" } (raw)
    (100.00/100%)
[4888.7] Creating output metadata
[4907.2] Finishing off

2. The guest powers on successfully on RHV 4.2 and all guest checkpoints pass.


Based on the above results, moving the bug from ON_QA to VERIFIED.

Comment 12 errata-xmlrpc 2018-10-30 07:47:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:3021

