Bug 1863331 - [v2v][VMware to CNV VM import] Import using ceph-rbd/Block fail on virt-v2v error cannot create raw file: /data/vm/disk1/disk.img: Not a directory
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Console Kubevirt Plugin
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: 4.6.0
Assignee: Brett Thurber
QA Contact: Ilanit Stein
URL:
Whiteboard:
Depends On:
Blocks: 1863329 1874786
 
Reported: 2020-08-03 15:29 UTC by Ilanit Stein
Modified: 2020-10-27 16:22 UTC
CC List: 16 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1874786 (view as bug list)
Environment:
Last Closed: 2020-10-27 16:22:34 UTC
Target Upstream Version:
Embargoed:
tgolembi: needinfo-


Attachments
v2v-conversion.log (124.43 KB, application/octet-stream), 2020-08-03 15:30 UTC, Ilanit Stein
reproduce-bug1863331.log (800.76 KB, text/plain), 2020-08-12 11:18 UTC, mxie@redhat.com
v2v-conversion-full.log (1.95 MB, text/plain), 2020-09-03 13:30 UTC, Ilanit Stein
v2v-pod-oc-describe (4.18 KB, text/plain), 2020-09-07 06:28 UTC, Ilanit Stein
v2v-pod-oc-json (19.05 KB, text/plain), 2020-09-07 06:30 UTC, Ilanit Stein


Links
Github ManageIQ manageiq-v2v-conversion_host-build pull 23: closed, "Remove device duplication", last updated 2020-12-14 06:13:05 UTC
Github openshift console pull 6544: closed, "Bug 1863331: fix devicePath when using block mode", last updated 2020-12-14 06:13:07 UTC
Red Hat Product Errata RHBA-2020:4196, last updated 2020-10-27 16:22:57 UTC

Description Ilanit Stein 2020-08-03 15:29:19 UTC
Description of problem:
VM import from VMware to CNV, with:
VM disk: Ceph-rbd / Block
v2v-conversion-template disk: Ceph-rbd / Filesystem

The import process appears to start, but eventually fails with:

"virt-v2v error: libguestfs error: cannot create raw file: /data/vm/disk1/disk.img: Not a directory"
(Full conversion pod virt-v2v log attached.)

Version-Release number of selected component (if applicable):
CNV-2.4

How reproducible:
100%

Comment 1 Ilanit Stein 2020-08-03 15:30:00 UTC
Created attachment 1703660 [details]
v2v-conversion.log

Comment 4 Richard W.M. Jones 2020-08-03 19:50:03 UTC
Log is a bit scrambled but the error seems to be:

libguestfs: trace: disk_create "/data/vm/disk1/disk.img" "raw" 10737418240 "preallocation:sparse"
libguestfs: trace: disk_create = -1 (error)
virt-v2v: error: libguestfs error: cannot create raw file: /data/vm/disk1/disk.img: Not a directory

When the format is raw, the disk_create API basically attempts to create
a file and truncate it to the right size.  The code that hits the
error is here:

https://github.com/libguestfs/libguestfs/blob/2469b4b790d81154f650f33b49f56c155504a8e2/lib/create.c#L179

That open call must be returning ENOTDIR, and open(2) tells me:

       ENOTDIR
              A  component  used as a directory in pathname is not, in fact, a
              directory, or O_DIRECTORY was specified and pathname was  not  a
              directory.

We're not using O_DIRECTORY, so it must mean that one of /data, /data/vm or
/data/disk1 is not a directory (but it may be something else - a regular file?)

Is this using overlayfs?  We've seen weird non-POSIX behaviour with overlayfs
before now ...

> AFAIK virt-v2v doesn't care what storage is being used for the VM disk as long as it can write to said storage (block or filesystem).

Yeah basically it can deal with output either to a directory or to
a block device (see the lib/create.c file above), but the directory
that is to contain the output must already exist.
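For illustration, the same errno is easy to reproduce outside libguestfs
whenever a path component that should be a directory is actually a regular
file (hypothetical /tmp paths, not from this report):

$ touch /tmp/notadir
$ truncate -s 1G /tmp/notadir/disk.img
truncate: cannot open '/tmp/notadir/disk.img' for writing: Not a directory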

Comment 5 Richard W.M. Jones 2020-08-03 19:51:21 UTC
(In reply to Richard W.M. Jones from comment #4)
> We're not using O_DIRECTORY, so it must mean that one of /data, /data/vm or
> /data/disk1 is not a directory (but it may be something else - a regular

/data/vm/disk1

Comment 6 Brett Thurber 2020-08-04 14:02:51 UTC
@Ilanit, can you create a VM on this cluster vs. migrating a VM?  Curious if there is a storage config issue here.

Comment 7 Alexander Wels 2020-08-06 18:59:50 UTC
So the fact that it's trying to create a disk.img suggests to me that the conversion process thinks it's using a filesystem, while I think that /data/vm/disk1 is a block device, and thus it can't create the disk.img file.
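A quick check from inside the conversion pod would distinguish the two cases
(illustrative commands, path as in this report):

$ test -b /data/vm/disk1 && echo "block device" || echo "not a block device"
$ stat -c '%F' /data/vm/disk1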

Comment 8 Richard W.M. Jones 2020-08-06 20:40:20 UTC
Oh I see.  v2v to a block device is fine (although you have to size
the block device correctly in advance, so it can be tricky to actually
do this).  However you do need to make sure you give it the true name
of the block device!  Unfortunately we don't have the complete virt-v2v
log so I can't tell what the command line parameters were.

Here's a simple example:

$ virt-builder fedora-32
$ ll -h fedora-32.img 
-rw-r--r--. 1 rjones rjones 6.0G Aug  6 21:29 fedora-32.img

# Create a block device of the right size:

$ sudo lvcreate -L 6G -n test /dev/fedora
  Logical volume "test" created.
$ ll /dev/fedora/test 
lrwxrwxrwx. 1 root root 7 Aug  6 21:32 /dev/fedora/test -> ../dm-4
$ ll /dev/dm-4
brw-rw----. 1 root disk 253, 4 Aug  6 21:32 /dev/dm-4

# Create block device in the expected output location.  Note you
# cannot use a symlink here, you have to create the block device
# directly (maybe a bug in virt-v2v?):

$ sudo mknod /var/tmp/out/disk1 b 253 4

# Convert to the block device.  We have to run virt-v2v as root in this
# case because otherwise it cannot open the block device.

$ sudo virt-v2v -i disk fedora-32.img -o json -os /var/tmp/out/ -oo 'json-disks-pattern=disk%{DiskNo}'

# Check the block device contains the converted guest:

$ sudo virt-inspector /dev/dm-4 --no-applications 
<?xml version="1.0"?>
<operatingsystems>
  <operatingsystem>
    <root>/dev/sda4</root>
    <name>linux</name>
    <arch>x86_64</arch>
    <distro>fedora</distro>
    <product_name>Fedora 32 (Thirty Two)</product_name>
[etc]

Comment 10 Richard W.M. Jones 2020-08-11 07:32:12 UTC
mxie: Do we have any Ceph instances we can use to test virt-v2v in the
scenario similar to comment 8?

What we'd like to try is whether virt-v2v can be used to do a conversion
to a Ceph block device.

Comment 13 mxie@redhat.com 2020-08-12 11:17:35 UTC
Hi Richard,

  I can't reproduce the bug when I convert a guest with a ceph block disk, but I can reproduce it if a file with the same name as the target directory replaces the target directory during the v2v conversion. Maybe the bug is caused by the sudden disappearance or instability of the target directory. Details below, and in the attached v2v debug log "reproduce-bug1863331.log".


Packages:
virt-v2v-1.42.0-5.module+el8.3.0+7152+ab3787c3.x86_64
libguestfs-1.42.0-2.module+el8.3.0+6798+ad6e66be.x86_64

Scenario 1: Convert a guest with a ceph block disk by v2v

1.1 Prepare a guest whose OS is installed on a ceph block disk
# virsh dumpxml rhel7-ceph-block-disk |grep rbd -A 3 -B 2
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/rbd0'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>

1.2 Convert the guest with v2v; the conversion finishes without error
# virt-v2v rhel7-ceph-block-disk -o local -os /mnt/cephfs/ -oa sparse -of raw
[   0.0] Opening the source -i libvirt rhel7-ceph-block-disk
[   0.0] Creating an overlay to protect the source from being modified
[   0.2] Opening the overlay
[   6.0] Inspecting the overlay
[  29.8] Checking for sufficient free disk space in the guest
[  29.8] Estimating space required on target for each disk
[  29.8] Converting Red Hat Enterprise Linux Server 7.8 (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 114.0] Mapping filesystem data to avoid copying unused and blank areas
[ 115.4] Closing the overlay
[ 115.7] Assigning disks to buses
[ 115.7] Checking if the guest needs BIOS or UEFI to boot
[ 115.7] Initializing the target -o local -os /mnt/cephfs/
[ 115.7] Copying disk 1/1 to /mnt/cephfs/rhel7-ceph-block-disk-sda (raw)
    (100.00/100%)
[ 492.4] Creating output metadata
[ 492.4] Finishing off


Scenario 2: Create a file with the same name as the target directory during the v2v conversion
2.1 Create directory /home/disk1
# mkdir /home/disk1

2.2 Convert a guest to /home/disk1 with v2v; while v2v is converting the guest OS, delete the directory '/home/disk1' and create a file "disk1" in /home:

# rm -rf /home/disk1

# vi /home/disk1

# virt-v2v rhel7-ceph-block-disk -o local -os /home/disk1 -oa sparse -of raw
[   0.0] Opening the source -i libvirt rhel7-ceph-block-disk
[   0.0] Creating an overlay to protect the source from being modified
[   0.2] Opening the overlay
[   5.7] Inspecting the overlay
[  26.0] Checking for sufficient free disk space in the guest
[  26.0] Estimating space required on target for each disk
[  26.0] Converting Red Hat Enterprise Linux Server 7.8 (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 114.5] Mapping filesystem data to avoid copying unused and blank areas
[ 115.7] Closing the overlay
[ 116.0] Assigning disks to buses
[ 116.0] Checking if the guest needs BIOS or UEFI to boot
[ 116.0] Initializing the target -o local -os /home/disk1
[ 116.0] Copying disk 1/1 to /home/disk1/rhel7-ceph-block-disk-sda (raw)
virt-v2v: error: libguestfs error: cannot create raw file: 
/home/disk1/rhel7-ceph-block-disk-sda: Not a directory

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]

Comment 14 mxie@redhat.com 2020-08-12 11:18:46 UTC
Created attachment 1711176 [details]
reproduce-bug1863331.log

Comment 15 mxie@redhat.com 2020-08-12 11:34:13 UTC
Sorry for accidentally clearing the needinfo flags on comment 2 and comment 6; adding the needinfo for istein back.

Comment 16 mxie@redhat.com 2020-08-13 02:08:19 UTC
Adding two scenarios of v2v converting a guest to ceph storage, thanks for rjones' help.

Scenario 1: convert a guest with a ceph block device as the target

1.1 Map a ceph image as a block device on the v2v server

#qemu-img info rbd:libvirt-pool/qcow2.img:id=admin:key=xxxxxxxxx==:auth_supported=cephx:mon_host=10.66.xx.xx
image: json:{"driver": "raw", "file": {"pool": "libvirt-pool", "image": "qcow2.img", "driver": "rbd", "user": "admin"}}
file format: raw
virtual size: 10 GiB (10737418240 bytes)
disk size: unavailable
cluster_size: 4194304

# rbd map qcow2.img --pool libvirt-pool
/dev/rbd0

# ls -l /dev/rbd0
brw-rw----. 1 root root 252, 0 Aug 12 20:08 /dev/rbd0
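(Illustrative: the major/minor pair passed to mknod below can also be read
directly from the device node, in hex, with stat:)

# stat -c '%t %T' /dev/rbd0
fc 0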

#  mkdir /home/ceph-block-disk-1

#  mknod /home/ceph-block-disk-1/disk1 b 252 0

# ls -l /home/ceph-block-disk-1/disk1
brw-r--r--. 1 root root 252, 0 Aug 12 21:48 /home/ceph-block-disk-1/disk1

1.2 Convert a guest from disk by v2v 

# virt-v2v -i disk xen-hvm-rhel6.9-x86_64.img -o json -os /home/ceph-block-disk-1 -oo 'json-disks-pattern=disk%{DiskNo}'
[   0.0] Opening the source -i disk xen-hvm-rhel6.9-x86_64.img
[   0.0] Creating an overlay to protect the source from being modified
[   0.2] Opening the overlay
[   6.0] Inspecting the overlay
[  22.9] Checking for sufficient free disk space in the guest
[  22.9] Estimating space required on target for each disk
[  22.9] Converting Red Hat Enterprise Linux Server release 6.9 (Santiago) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 144.2] Mapping filesystem data to avoid copying unused and blank areas
[ 145.2] Closing the overlay
[ 145.6] Assigning disks to buses
[ 145.6] Checking if the guest needs BIOS or UEFI to boot
[ 145.6] Initializing the target -o json -os /home/ceph-block-disk-1
[ 145.6] Copying disk 1/1 to /home/ceph-block-disk-1/disk1 (raw)
    (100.00/100%)
[ 662.2] Creating output metadata
[ 662.2] Finishing off


# ls -l /home/ceph-block-disk-1/disk1
brw-r--r--. 1 root root 252, 0 Aug 12 21:53 /home/ceph-block-disk-1/disk1

# file /home/xen-hvm-rhel6.9-x86_64.img 
/home/xen-hvm-rhel6.9-x86_64.img: DOS/MBR boot sector; GRand Unified Bootloader, stage1 version 0x3, boot drive 0x80, 1st sector stage2 0x849fe, GRUB version 0.94

# file -bsL /home/ceph-block-disk-1/disk1 
DOS/MBR boot sector; GRand Unified Bootloader, stage1 version 0x3, boot drive 0x80, 1st sector stage2 0x849fe, GRUB version 0.94


Scenario 2: v2v can convert a guest to a directory mounted on ceph storage

# mount -t ceph 10.66.xx.xx:6789:/ /mnt/cephfs/ -o name=admin,secret=Axxxxxxxxxxxxxxxxxxxxxxxxx==

# virt-v2v -i disk xen-hvm-rhel6.9-x86_64.img -o json -os /mnt/cephfs
[   0.0] Opening the source -i disk xen-hvm-rhel6.9-x86_64.img
[   0.0] Creating an overlay to protect the source from being modified
[   0.2] Opening the overlay
[   5.8] Inspecting the overlay
[  21.7] Checking for sufficient free disk space in the guest
[  21.7] Estimating space required on target for each disk
[  21.7] Converting Red Hat Enterprise Linux Server release 6.9 (Santiago) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 136.9] Mapping filesystem data to avoid copying unused and blank areas
[ 137.8] Closing the overlay
[ 138.1] Assigning disks to buses
[ 138.1] Checking if the guest needs BIOS or UEFI to boot
[ 138.1] Initializing the target -o json -os /mnt/cephfs
[ 138.1] Copying disk 1/1 to /mnt/cephfs/xen-hvm-rhel6.9-x86_64-sda (raw)
    (100.00/100%)
[ 574.0] Creating output metadata
[ 574.0] Finishing off

Comment 18 mxie@redhat.com 2020-08-13 09:08:22 UTC
Converting a guest from VMware with v2v, using a ceph block device as the target: because the ceph block device is about 10G and the VMware guest's disk is larger than 10G, the v2v conversion fails with the error "qemu-img: output file is smaller than input file".

#  mknod /home/ceph-block-disk-1/disk1 b 252 0

#  ls -l /home/ceph-block-disk-1/disk1
brw-r--r--. 1 root root 252, 0 Aug 13 16:58 /home/ceph-block-disk-1/disk1

# virt-v2v  -ic esx://root.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=DE:EA:A1:09:27:38:48:89:48:CB:95:A6:2B:5C:00:F3:53:13:25:44  esx6.7-rhel8.2-x86_64 -ip /home/esxpw -o json -os /home/ceph-block-disk-1 -oo 'json-disks-pattern=disk%{DiskNo}'
[   0.0] Opening the source -i libvirt -ic esx://root.75.219/?no_verify=1 esx6.7-rhel8.2-x86_64 -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=DE:EA:A1:09:27:38:48:89:48:CB:95:A6:2B:5C:00:F3:53:13:25:44
[   1.4] Creating an overlay to protect the source from being modified
[   2.2] Opening the overlay
[   8.7] Inspecting the overlay
[  34.1] Checking for sufficient free disk space in the guest
[  34.1] Estimating space required on target for each disk
[  34.1] Converting Red Hat Enterprise Linux 8.2 (Ootpa) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 185.9] Mapping filesystem data to avoid copying unused and blank areas
[ 187.5] Closing the overlay
[ 187.8] Assigning disks to buses
[ 187.8] Checking if the guest needs BIOS or UEFI to boot
[ 187.8] Initializing the target -o json -os /home/ceph-block-disk-1
[ 187.8] Copying disk 1/1 to /home/ceph-block-disk-1/disk1 (raw)
qemu-img: output file is smaller than input file

virt-v2v: error: qemu-img command failed, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]
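(Illustrative: for a local source image this mismatch can be caught before
converting by comparing the source's virtual size against the target block
device's size; commands not from the original run:)

# qemu-img info --output=json xen-hvm-rhel6.9-x86_64.img | grep virtual-size
# blockdev --getsize64 /dev/rbd0

The target block device must be at least as large as the source's virtual size.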

Comment 19 Richard W.M. Jones 2020-08-13 09:17:36 UTC
We don't have enough storage on the ceph server to create a block device
bigger than 10G, and we can't find any guests smaller than this on the
VMware server, so it's difficult to do a full test from VMware through
to Ceph block device.

Nevertheless it shouldn't make a difference what the input side is,
and the original bug report also has nothing to do with any of this.

Outputting to Ceph is possible, but you have to set things up in advance
correctly as outlined in comment 8, and the reason for the error is
because you didn't do this.

Comment 20 Richard W.M. Jones 2020-08-13 14:01:30 UTC
While testing this mxie found a bug where it sometimes overwrites
the block device node (ie. replaces it with a local file).  Since
that bug isn't related to this one it's got a new number:
https://bugzilla.redhat.com/show_bug.cgi?id=1868690

Comment 21 Tomáš Golembiovský 2020-09-01 16:18:19 UTC
Please attach the complete pod log. This one is truncated and I don't see the initial lines from the entrypoint.

Comment 24 Ilanit Stein 2020-09-03 13:30:36 UTC
Created attachment 1713624 [details]
v2v-conversion-full.log

Comment 25 Tomáš Golembiovský 2020-09-03 13:48:27 UTC
As I expected the block devices were not detected and hence so symlinks were created.

Tomas, could one of your guys please figure out what has changed in the POD definition and why the block devices are no longer presented as `/dev/v2v-disk<number>`?
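(A minimal sketch of the presentation being discussed, assuming the entrypoint
symlinks each detected block device to a well-known name; the loop below is
hypothetical, only the /dev/v2v-disk<number> convention comes from this bug:)

idx=1
for disk in /data/vm/disk*; do
    if [ -b "$disk" ]; then
        # present block-mode disks under the expected name
        ln -s "$disk" "/dev/v2v-disk${idx}"
    fi
    idx=$((idx + 1))
done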

Comment 26 Tomáš Golembiovský 2020-09-03 13:48:52 UTC
(In reply to Tomáš Golembiovský from comment #25)
> As I expected the block devices were not detected and hence so symlinks were
> created.

"no symlinks"

Comment 27 Ilanit Stein 2020-09-07 06:28:48 UTC
Created attachment 1713918 [details]
v2v-pod-oc-describe

Comment 28 Ilanit Stein 2020-09-07 06:29:27 UTC
Comment on attachment 1713918 [details]
v2v-pod-oc-describe

$ oc describe pod kubevirt-v2v-conversion-mini-rhel7-cloudinit-rk22c

Comment 29 Ilanit Stein 2020-09-07 06:30:39 UTC
Created attachment 1713919 [details]
v2v-pod-oc-json

$ oc get -o json pod kubevirt-v2v-conversion-mini-rhel7-cloudinit-rk22c

Comment 30 Tomáš Golembiovský 2020-09-07 10:26:48 UTC
In the POD definition there is:

                "volumeDevices": [
                    {
                        "devicePath": "/data/vm/disk1",
                        "name": "harddisk1"
                    }

This is wrong. The path should be "/data/vm/disk1/disk.img".
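In other words, the stanza should read (same fragment, with the corrected path):

                "volumeDevices": [
                    {
                        "devicePath": "/data/vm/disk1/disk.img",
                        "name": "harddisk1"
                    }
                ]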

Comment 31 Fabien Dupont 2020-09-07 14:24:42 UTC
What I find interesting here is that the only occurrence where "devicePath" is modified is at prefillVmStateUpdate.js#L122 where it is set to "/dev/v2v-disk${idx + 1}".
So, I'm wondering how it becomes "/data/vm/disk${idx + 1}". In the same file, the "mountPath" is set to "/data/vm/disk${idx + 1}". Would it be possible that if both mountPath and devicePath are set for a block Volume Device, the mountPath has precedence?
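(One way to check what the pod actually received for both fields; illustrative,
using the pod name from comment 27:)

$ oc get pod kubevirt-v2v-conversion-mini-rhel7-cloudinit-rk22c \
    -o jsonpath='{.spec.containers[*].volumeDevices}{"\n"}{.spec.containers[*].volumeMounts}{"\n"}'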

Comment 32 Ilanit Stein 2020-09-07 17:55:16 UTC
Clarifying that the tested flow is as described in this bug's description:
VM import from VMware to CNV
For 
VM disk: Ceph-rbd / __Block__
v2v-conversion-template disk: Ceph-rbd / __Filesystem__

Do we want to test this flow, or should the
v2v-conversion-template disk be Ceph-rbd / Block as well?

Comment 33 Filip Krepinsky 2020-09-07 18:55:05 UTC
> This is wrong. The path should be "/data/vm/disk1/disk.img".

@Tomas nice, that fixed it. I managed to do the import with ceph/block after applying this fix. 

Also, can something be done about support for temp storage in block mode?

Comment 34 Filip Krepinsky 2020-09-08 14:26:12 UTC
Disregard my question: temp storage should be resolved by https://bugzilla.redhat.com/show_bug.cgi?id=1814611

Comment 37 Fabien Dupont 2020-09-09 08:36:58 UTC
When testing, please make sure that you're using a build of kubevirt-v2v-conversion container image that includes https://github.com/ManageIQ/manageiq-v2v-conversion_host-build/pull/23.

Comment 43 Ilanit Stein 2020-10-01 10:25:11 UTC
Verified on CNV-2.5 (osbs deployment from Sep 30 2020).

RHEL7 VM import from VMware to CNV, using Ceph-RBD / Block, is successful.
The v2v conversion template disk storage class selection is no longer exposed.

Comment 45 errata-xmlrpc 2020-10-27 16:22:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

