Bug 1608299 - In -o rhv-upload, -os option doesn't work if two storage domains with same name in different datacenters
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libguestfs
Version: 7.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Richard W.M. Jones
QA Contact: Virtualization Bugs
URL:
Whiteboard: V2V
Depends On:
Blocks:
Reported: 2018-07-25 09:30 UTC by Richard W.M. Jones
Modified: 2018-10-30 07:47 UTC
CC List: 7 users

Fixed In Version: libguestfs-1.38.2-10.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-30 07:47:00 UTC
Target Upstream Version:
Embargoed:


Attachments
virt-v2v-1.38.2-10.log (1.83 MB, text/plain), 2018-08-02 10:11 UTC, mxie@redhat.com
guest-data2.png (66.89 KB, image/png), 2018-08-05 12:36 UTC, mxie@redhat.com
rhv-upload-data2.log (1.49 MB, text/plain), 2018-08-05 12:39 UTC, mxie@redhat.com
host2-data-v2v_6-ndkit_4-4.log (2.03 MB, text/plain), 2018-08-07 04:14 UTC, mxie@redhat.com
host2-data-v2v_10-ndkit_6-1.log (1.87 MB, text/plain), 2018-08-07 09:28 UTC, mxie@redhat.com


Links
Red Hat Product Errata RHEA-2018:3021 (last updated 2018-10-30 07:47:28 UTC)

Description Richard W.M. Jones 2018-07-25 09:30:22 UTC
Description of problem:

(Reported by Nir Soffer & Daniel Erez)

If you have two storage domains with the same name, but in different
data centers, then the ‘-os storage_domain’ option may not find the
right host, resulting in unspecified problems.

Version-Release number of selected component (if applicable):

virt-v2v-1.36.10-6.16.rhvpreview.el7ev.x86_64

Steps to Reproduce:
1. Set up a RHV instance with two data centers.
2. Add two storage domains with the same name, one in each DC.
3. Use virt-v2v ... -o rhv-upload -os storage_domain

Actual results:

Unclear.  I don't think there's an actual error, but you should
see the following in the debug output:

  cannot find a running host with hw_id=.., that belongs to datacenter .., using any host

Additional info:

Should be fixed by:

https://github.com/libguestfs/libguestfs/commit/2547df8a0de46bb1447396e07ee0989bc3f8f31e
https://github.com/libguestfs/libguestfs/commit/23b62f391b098b74e2de6c2d2a911b8ef91543a2

Comment 2 Nir Soffer 2018-07-26 18:03:34 UTC
(In reply to Richard W.M. Jones from comment #0)
> Description of problem:

The problem is that the transfer does not use the local host, and does not use the
unix socket.

We have 2 options:
1. the transfer starts on the local host, using HTTPS instead of the unix socket
2. the transfer starts on another host, using HTTPS

In both cases the upload will be slower and will use more CPU time on both the
virt-v2v side and the imageio side.

In the second case, we also spam the network with the image data, which is more
problematic.

In both cases the import should succeed, so this is a performance issue, not
a functional issue.

> Steps to Reproduce:
> 1. Set up a RHV instance with two data centers.
> 2. Add two storage domains with the same name, one in each DC.

The issue is not storage domains with the same name (RHV prevents this), but with
the same prefix. For example, you have this setup:

-dc1
   sd
-dc2 
   sd2

virt-v2v was using this search:

    search="storage=sd"

This search uses a regular expression instead of an exact match, so both "sd"
and "sd2" are matched.

virt-v2v got back a list with both "dc1" and "dc2", and selected the first item
in the list.

If you are lucky and the host belongs to "dc1", everything works fine and the bug
is hidden. Otherwise virt-v2v will not use the best host.
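
For reference, here is a minimal sketch of how the lookup could avoid the prefix
matching (assuming the ovirtsdk4 Python API; this is not the actual virt-v2v fix):
run the same search, then keep only a data center that really contains a storage
domain named exactly like the requested one:

    def find_data_center(connection, storage_name):
        # "connection" is an ovirtsdk4 Connection to the engine API.
        system_service = connection.system_service()
        dcs_service = system_service.data_centers_service()
        # Same search as above; it matches by prefix, so it can return
        # both "dc1" (with "sd") and "dc2" (with "sd2").
        candidates = dcs_service.list(
            search='storage=%s' % storage_name,
            case_sensitive=False,
        )
        # Keep only a data center whose attached storage domains include
        # one named exactly storage_name.
        for dc in candidates:
            dc_service = dcs_service.data_center_service(dc.id)
            sds = dc_service.storage_domains_service().list()
            if any(sd.name == storage_name for sd in sds):
                return dc
        raise RuntimeError("storage domain %r not found" % storage_name)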

> 3. Use virt-v2v ... -o rhv-upload -os storage_domain
> 
> Actual results:
> 
> you should see the following in the debug output:
> 
>   cannot find a running host with hw_id=.., that belongs to datacenter ..,
> using any host

And the transfer is started on another host, or on the same host but without
using the unix socket.

I hope this helps to make this more clear :-)

Comment 3 Nir Soffer 2018-07-26 18:09:21 UTC
Oh, and there is an easy workaround - don't use storage domain names with the
same prefix.

This issue will not happen in this setup:

- dc1
    - sd1
    - sd2
- dc2
    - sd3
    - sd4

Comment 5 mxie@redhat.com 2018-08-02 10:11:23 UTC
Try to reproduce the bug with builds:
virt-v2v-1.38.2-8.el7.x86_64
libguestfs-1.38.2-8.el7.x86_64
nbdkit-plugin-python2-1.2.4-4.el7.x86_64
nbdkit-1.2.4-4.el7.x86_64
qemu-kvm-rhev-2.12.0-9.el7.x86_64
libvirt-4.5.0-6.el7.x86_64
ovirt-imageio-daemon-1.4.2-0.el7ev.noarch
rhv:4.2.5-0.1.el7ev


Steps to reproduce:
1.Set up environment as below on rhv4.2:

DC1
  Host1 
    data
DC2 
  Host2 
    data2

2.Convert the guest from VMware to "data" by v2v on Host2; the conversion finishes without error

#virt-v2v -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1  esx6.7-rhel7.5-x86_64 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct=true -of raw --password-file /tmp/passwd -v -x


3.The info below can be found in the v2v conversion log, but the info "optimizing connection using unix socket '\x00/org/ovirt/imageio'" cannot be found:
....
hw_id = 'c59eef52-5ee5-11e6-8661-6c0b84a45d4a'
datacenter = DC1
cannot find a running host with hw_id='c59eef52-5ee5-11e6-8661-6c0b84a45d4a', that belongs to datacenter 'DC1', using any host
transfer.id = 'b7651803-b714-4930-bb42-65a83736ecae'
imageio features: flush=True trim=False zero=True unix_socket='\x00/org/ovirt/imageio'
....

4.Result:
  Can reproduce the bug with virt-v2v-1.38.2-8.el7.x86_64


Verify the bug with builds:
virt-v2v-1.38.2-10.el7.x86_64
libguestfs-1.38.2-10.el7.x86_64
nbdkit-1.2.6-1.el7.x86_64
nbdkit-plugin-python2-1.2.6-1.el7.x86_64
libvirt-4.5.0-6.el7.x86_64
qemu-kvm-rhev-2.12.0-9.el7.x86_64
ovirt-imageio-daemon-1.4.2-0.el7ev.noarch
rhv:4.2.5-0.1.el7ev


Steps:
1.Update virt-v2v on Host2 and convert the guest from VMware to "data" by v2v again
#  virt-v2v -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1  esx6.7-rhel7.5-x86_64 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct=true -of raw --password-file /tmp/passwd -v -x

2.After finishing the conversion, check the v2v conversion log: the info below can still be found, and the info "optimizing connection using unix socket '\x00/org/ovirt/imageio'" cannot be found in the log
....
hw_id = 'c59eef52-5ee5-11e6-8661-6c0b84a45d4a'
datacenter = DC1
cannot find a running host with hw_id='c59eef52-5ee5-11e6-8661-6c0b84a45d4a', that belongs to datacenter 'DC1', using any host
transfer.id = '166d70c5-0f14-46d4-a6e6-c176e635e009'
imageio features: flush=True trim=False zero=True unix_socket='\x00/org/ovirt/imageio'
....

Hi rjones,

   Please help check the log "virt-v2v-1.38.2-10"; it seems the bug is not fixed. Thanks

Comment 6 mxie@redhat.com 2018-08-02 10:11:43 UTC
Created attachment 1472646 [details]
virt-v2v-1.38.2-10.log

Comment 7 Nir Soffer 2018-08-02 10:35:00 UTC
(In reply to mxie from comment #5)
Thanks for the detailed report, I wish every verification had this kind of detail.

> 2.Convert guest from vmware to data by v2v on Host2, the conversion can be
> finished without error

On your setup:

DC1
  Host1 
    data
DC2 
  Host2 
    data2

Host1 can access only storage "data"
Host2 can access only storage "data2".

When virt-v2v detects the host, it will find that Host2 cannot be used for the
upload, and will let engine start the upload on any host. Engine will start the upload
on Host1, since it is the only host that can be used in this setup. Then virt-v2v
will upload the image to Host1 via HTTPS.

So you did not reproduce the issue in the first run, and the error message in the 
second run is expected.

The correct way to test this is to run on Host1. With the broken virt-v2v, we expect
to get a wrong message about not finding the host. With the fixed version, we expect
to find the host and upload to the local imageio daemon using the unix socket.

Comment 8 mxie@redhat.com 2018-08-03 07:49:25 UTC
> So you did not reproduce the issue in the first run, and the error message
> in the second run is expected.

  I'm very confused now. In my understanding, the bug is about fixing the info "cannot find a running host with hw_id=.." in the v2v debug log according to comment 0, and v2v should use the unix socket during conversion after the fix according to comment 2; am I right? If yes, I think I have reproduced this bug with virt-v2v-1.38.2-8.el7.x86_64, but the bug is not fixed with virt-v2v-1.38.2-10.el7.x86_64 because the problem still exists


> With broken virt-v2v, we expect to to a wrong message about not finding the host.

   If a wrong message in the v2v debug log about not finding the host is the expected result, what is the real problem this bug was filed for? Could you please describe detailed steps to reproduce it?


> The correct way to test is is to run on Host1. 

I also don't understand this sentence; why must the test be run on Host1?

(1) If I convert a guest on Host1 to "data" with virt-v2v-1.38.2-7.el7, there is no wrong message about not finding the host and v2v uses the unix socket during conversion, so converting a guest on Host1 to "data" has no problem; please refer to the info below:
....
hw_id = '4C4C4544-0030-4D10-804A-CAC04F485931'
datacenter = DC1
host.id = 'fc9d5f9a-b2ff-4a70-9801-33d495bdbadc'
transfer.id = '36884752-1933-444d-90b8-83afe91099e6'
imageio features: flush=True trim=False zero=True unix_socket='\x00/org/ovirt/imageio'
optimizing connection using unix socket '\x00/org/ovirt/imageio'
...


(2) If I convert a guest on Host1 to "data2" with virt-v2v-1.38.2-7.el7, this scenario has the same result (please refer to the info below) as converting a guest on Host2 to "data", which is the scenario used in comment 5 to reproduce the bug
....
hw_id = '4C4C4544-0030-4D10-804A-CAC04F485931'
datacenter = DC2
cannot find a running host with hw_id='4C4C4544-0030-4D10-804A-CAC04F485931', that belongs to datacenter 'DC2', using any host
transfer.id = '3deb7d42-a6cc-4ec5-a2c9-b9fd4ee28081'
....

Comment 9 mxie@redhat.com 2018-08-05 12:35:29 UTC
I found another problem related to this bug

Packages:
virt-v2v-1.38.2-10.el7.x86_64
libguestfs-1.38.2-10.el7.x86_64
libvirt-4.5.0-6.el7.x86_64
qemu-kvm-rhev-2.12.0-9.el7.x86_64
nbdkit-1.2.6-1.el7.x86_64


Steps:
1.Set up environment as below on rhv4.2:

DC1
  Host1 
    data
DC2 
  Host2 
    data2

2.Convert a guest to "data" by v2v using rhv-upload; no matter whether the v2v conversion server is an ovirt node or not, the conversion finishes without error and the guest checkpoints pass


3.Convert a guest to "data2" by v2v using rhv-upload; no matter whether the v2v conversion server is an ovirt node or not, the conversion finishes without error, but the guest has no disk after the conversion; please refer to the screenshot "guest-data2" and the v2v log "rhv-upload-data2"

# virt-v2v rhel7.6  -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os data2 -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct -of raw -oa preallocated -b ovirtmgmt
[   0.8] Opening the source -i libvirt rhel7.6
[   0.9] Creating an overlay to protect the source from being modified
[   1.4] Initializing the target -o rhv-upload -oa preallocated -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os data2
[   3.6] Opening the overlay
[  36.7] Inspecting the overlay
[  68.3] Checking for sufficient free disk space in the guest
[  68.3] Estimating space required on target for each disk
[  68.3] Converting Red Hat Enterprise Linux Server 7.6 Beta (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 168.5] Mapping filesystem data to avoid copying unused and blank areas
[ 169.4] Closing the overlay
[ 170.9] Checking if the guest needs BIOS or UEFI to boot
[ 170.9] Assigning disks to buses
[ 170.9] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.4BneBB/nbdkit1.sock", "file.export": "/" } (raw)
    (100.00/100%)
[ 756.1] Creating output metadata
[ 781.0] Finishing off

Comment 10 mxie@redhat.com 2018-08-05 12:36:08 UTC
Created attachment 1473452 [details]
guest-data2.png

Comment 11 mxie@redhat.com 2018-08-05 12:39:13 UTC
Created attachment 1473453 [details]
rhv-upload-data2.log

Comment 12 Nir Soffer 2018-08-05 13:04:18 UTC
(In reply to mxie from comment #8)
> > So you did not reproduce the issue in the first run, and the error message
> > in the second run is expected.
> 
>   I'm very confused now, in my understood, the bug want to fix the info
> "cannot find a running host with hw_id=.." in v2v debug log according to
> comment0 and v2v should use unix socket during converting after fixing
> according to comment2,am I right? 

No, this message is not a bug, it is expected when you run on a host that cannot
access the selected storage domain.

Again in your setup:

DC1
  Host1 
    data
DC2 
  Host2 
    data2

Host1 can access only storage domain "data", since it belongs to data center "DC1".
Host2 can access only storage domain "data2", since it belongs to data center "DC2".

This is not a bug, but the design of the system.  All hosts in a DC can access
only storage that belongs to this DC.

So when you run virt-v2v on Host2, trying to upload a disk to storage "data", it
will check whether the current host "Host2" belongs to the data center "DC1" and fail
with the message "cannot find a running host with hw_id=...". Then it will start
the transfer on any host, and engine will start the transfer on Host1. virt-v2v
will perform the upload using HTTPS to Host1.

When you run virt-v2v on Host1 it will check if Host1 belongs to DC1. The check
will succeed and then it will start the transfer on Host1, and perform the upload
via unix socket. You should see the message "optimizing connection using unix 
socket".

> (1) If convert guest on host1 to data with virt-v2v-1.38.2-7.el7, there is
> no wrong message about not finding the host and v2v will use unix socket
> during converting, so converting guest on Host1 to data has no problem, pls
> refer to below info:
> ....
> hw_id = '4C4C4544-0030-4D10-804A-CAC04F485931'
> datacenter = DC1
> host.id = 'fc9d5f9a-b2ff-4a70-9801-33d495bdbadc'
> transfer.id = '36884752-1933-444d-90b8-83afe91099e6'
> imageio features: flush=True trim=False zero=True
> unix_socket='\x00/org/ovirt/imageio'
> optimizing connection using unix socket '\x00/org/ovirt/imageio'
> ...

It looks like the broken virt-v2v version found DC1 by accident. It tries this
search:

    data_centers = system_service.data_centers_service().list(
        search='storage=%s' % storage_name,
        case_sensitive=False,
    )   

This search uses a regex and matches both "data" and "data2". Engine returns
both DC1 and DC2 in the results, and virt-v2v picks the first result. It looks like
DC1 was returned first, so it worked.

I'm not sure if there is any guarantee on the order of the results. Maybe it
depends on some internal order in the database. You can try to rename the storage
domains to:

DC1
  Host1
  data2
DC2
  Host2
  data

And in this case you need to use "data" as the storage domain, and run this on
Host2.

But if you cannot reproduce it even after the rename, maybe this is just hard
to reproduce, and we should not waste more time on reproducing.

> (2) If convert guest on host1 to data2 with virt-v2v-1.38.2-7.el7,this
> scenario has same result (pls refer to below info) with converting guest on
> host2 to data which is the scenario in comment5 to reproduce the bug
> ....
> hw_id = '4C4C4544-0030-4D10-804A-CAC04F485931'
> datacenter = DC2
> cannot find a running host with
> hw_id='4C4C4544-0030-4D10-804A-CAC04F485931', that belongs to datacenter
> 'DC2', using any host
> transfer.id = '3deb7d42-a6cc-4ec5-a2c9-b9fd4ee28081'
> ....

This cannot reproduce the bug, since searching for "data2" does not match "data",
so you got the correct data center DC2 and the expected warning about the host.

Comment 13 Nir Soffer 2018-08-05 13:06:52 UTC
(In reply to mxie from comment #9)
> I found another problem about this bug
This problem is not about this bug. Please file another bug.

Comment 14 mxie@redhat.com 2018-08-06 09:05:37 UTC
Try to reproduce the bug with below builds:
virt-v2v-1.36.10-6.15.rhvpreview.el7ev.x86_64
libguestfs-1.36.10-6.15.rhvpreview.el7ev.x86_64
libvirt-4.5.0-6.el7.x86_64
qemu-kvm-rhev-2.12.0-9.el7.x86_64
nbdkit-1.2.4-6.el7.x86_64


Steps to reproduce:
1.Set up environment as below on rhv4.2:

DC1
  Host1 
    data
DC2 
  Host2 
    data2

2.Convert guest from vmware to data by v2v on Host2

#virt-v2v -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1  esx6.7-rhel7.5-x86_64 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct=true -of raw --password-file /tmp/passwd -v -x


3.The conversion fails with the error below in the debug log
....
disk.id = '92edd658-2e51-4a49-a5eb-8fb2ab2bb92c'
hw_id = 'c59eef52-5ee5-11e6-8661-6c0b84a45d4a'
host.id = '6d79c0ea-4747-4187-b748-db33ab11bce7'
transfer.id = 'f68352c9-07ac-4b15-a894-69bf71d1160f'
nbdkit: error: /var/tmp/rhvupload.SWh5eD/rhv-upload-plugin.py: open: error: direct upload to host not supported, requires ovirt-engine >= 4.2 and only works when virt-v2v is run within the oVirt/RHV environment, eg. on an oVirt node.
nbdkit: debug: connection cleanup with final status -1
qemu-img: Could not open 'json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.SWh5eD/nbdkit1.sock", "file.export": "/" }': Failed to read data: Unexpected end-of-file before all bytes were read

virt-v2v: error: qemu-img command failed, see earlier errors
....

Reproduce result:
   Cannot start the transfer on Host2



Verify the bug with builds:
virt-v2v-1.38.2-10.el7.x86_64
libguestfs-1.38.2-10.el7.x86_64
libvirt-4.5.0-6.el7.x86_64
qemu-kvm-rhev-2.12.0-9.el7.x86_64
nbdkit-1.2.4-6.el7.x86_64


Steps:
1.Update virt-v2v to the latest version on Host2 and convert the guest from VMware to "data" by v2v again

#virt-v2v -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1  esx6.7-rhel7.5-x86_64 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct=true -of raw --password-file /tmp/passwd -v -x

2.Check v2v debug log
....
disk.id = '96c000c5-695b-4a80-96b9-ec04d6c175c0'
hw_id = 'c59eef52-5ee5-11e6-8661-6c0b84a45d4a'
datacenter = DC1
cannot find a running host with hw_id='c59eef52-5ee5-11e6-8661-6c0b84a45d4a', that belongs to datacenter 'DC1', using any host
transfer.id = '2c2c2aef-4d3f-487e-adb4-228f41501288'
imageio features: flush=True trim=False zero=True unix_socket='\x00/org/ovirt/imageio'
nbdkit: python[1]: debug: newstyle negotiation: flags: global 0x3
....


Verify result:
    v2v finds that Host2 doesn't belong to the data center "DC1" and gives the message "cannot find a running host with hw_id=...". Then it starts the transfer on any host, and engine starts the transfer on Host1. virt-v2v performs the upload using HTTPS to Host1.



Hi Nir,

  Could you please check whether the error in reproduce step 3 is what this bug was filed for?

  For the verify result, according to what you said in comment 12, v2v has fixed the problem in virt-v2v-1.38.2-10.el7.x86_64; can I move the bug to verified?

Thanks

Comment 15 mxie@redhat.com 2018-08-07 04:13:02 UTC
Try to reproduce the bug with builds:
virt-v2v-1.38.2-6.el7.x86_64
libguestfs-1.38.2-6.el7.x86_64
nbdkit-1.2.4-4.el7.x86_64

Steps to reproduce:
1.Set up environment as below on rhv4.2:

DC1
  Host1 
    data2
DC2 
  Host2 
    data

2.Convert the guest from VMware to "data" by v2v on Host2; the conversion finishes successfully

# virt-v2v -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1  esx6.7-win2008-x86_64 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct=true -of raw --password-file /tmp/passwd -v -x |& tee >  host2-data-v2v_6-ndkit_4-4.log

3.Check v2v log 
# cat host2-data-v2v_6-ndkit_4-4.log |grep hw_id -A 6
hw_id = 'c59eef52-5ee5-11e6-8661-6c0b84a45d4a'
host.id = '6d79c0ea-4747-4187-b748-db33ab11bce7'
transfer.id = '59917946-3e06-4a80-9304-869f7ef84c77'
imageio features: flush=True trim=False zero=True unix_socket='\x00/org/ovirt/imageio'
optimizing connection using unix socket '\x00/org/ovirt/imageio'
nbdkit: python[1]: debug: newstyle negotiation: flags: global 0x3
nbdkit: python[1]: debug: newstyle negotiation: client flags: 0x3

4.The guest will be listed in DC1 but the guest doesn't have a disk because of bug 1612653

Result:

  There is no info "cannot find a running host with hw_id=..., that belongs to datacenter 'DC1', using any host"


Hi Nir,
     
    Could you please help check whether I reproduced the bug this time?
    And why is the guest converted to DC1 rather than DC2? Is that normal?

Thanks

Comment 16 mxie@redhat.com 2018-08-07 04:14:05 UTC
Created attachment 1473830 [details]
host2-data-v2v_6-ndkit_4-4.log

Comment 17 mxie@redhat.com 2018-08-07 09:27:10 UTC
Verify the bug with builds:
virt-v2v-1.38.2-10.el7.x86_64
libguestfs-1.38.2-10.el7.x86_64
nbdkit-1.2.6-1.el7.x86_64

Steps:
1.Set up environment as below on rhv4.2:

DC1
  Host1 
    data2
DC2 
  Host2 
    data

2.Convert the guest from VMware to "data" by v2v on Host2; the conversion finishes successfully

# virt-v2v -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1  esx6.7-win2008r2-x86_64 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct=true -of raw --password-file /tmp/passwd -v -x |& tee >  host2-data-v2v_10-ndkit_6-1.log

3.Check v2v log 
# cat host2-data-v2v_10-ndkit_6-1.log |grep hw_id -A 6
hw_id = 'c59eef52-5ee5-11e6-8661-6c0b84a45d4a'
datacenter = DC2
host.id = '6d79c0ea-4747-4187-b748-db33ab11bce7'
transfer.id = 'e937c881-06e3-4c6d-bea4-d572ecd01775'
imageio features: flush=True trim=False zero=True unix_socket='\x00/org/ovirt/imageio'
optimizing connection using unix socket '\x00/org/ovirt/imageio'
nbdkit: python[1]: debug: newstyle negotiation: flags: global 0x3

4.The guest will be listed in DC1 but the guest doesn't have a disk because of bug 1612653


Result:
    There is no info "cannot find a running host with hw_id=..." in the v2v debug log, and v2v uses the unix socket during the guest conversion

Comment 18 mxie@redhat.com 2018-08-07 09:28:03 UTC
Created attachment 1473912 [details]
host2-data-v2v_10-ndkit_6-1.log

Comment 19 Nir Soffer 2018-08-07 12:52:46 UTC
(In reply to mxie from comment #14)
> Try to reproduce the bug with below builds:
> virt-v2v-1.36.10-6.15.rhvpreview.el7ev.x86_64

This is a very old version, not relevant to this test.

> 3.The conversion is failed with below error in debug log
> ....
> disk.id = '92edd658-2e51-4a49-a5eb-8fb2ab2bb92c'
> hw_id = 'c59eef52-5ee5-11e6-8661-6c0b84a45d4a'
> host.id = '6d79c0ea-4747-4187-b748-db33ab11bce7'
> transfer.id = 'f68352c9-07ac-4b15-a894-69bf71d1160f'
> nbdkit: error: /var/tmp/rhvupload.SWh5eD/rhv-upload-plugin.py: open: error:
> direct upload to host not supported, requires ovirt-engine >= 4.2 and only
> works when virt-v2v is run within the oVirt/RHV environment, eg. on an oVirt
> node.

This error happens because engine could not find a host for the transfer, and
the plugin does not handle errors correctly.

> Verify the bug with builds:
> virt-v2v-1.38.2-10.el7.x86_64
> libguestfs-1.38.2-10.el7.x86_64
> libvirt-4.5.0-6.el7.x86_64
> qemu-kvm-rhev-2.12.0-9.el7.x86_64
> nbdkit-1.2.4-6.el7.x86_64
> 
> 
> Steps:
> 1.Update virt-v2v to latest version on Host2 and convert guest from vmware
> to data by v2v again
> 
> #virt-v2v -ic
> vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/
> ?no_verify=1  esx6.7-rhel7.5-x86_64 -o rhv-upload -oc
> https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os data
> -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct=true -of raw
> --password-file /tmp/passwd -v -x
> 
> 2.Check v2v debug log
> ....
> disk.id = '96c000c5-695b-4a80-96b9-ec04d6c175c0'
> hw_id = 'c59eef52-5ee5-11e6-8661-6c0b84a45d4a'
> datacenter = DC1
> cannot find a running host with
> hw_id='c59eef52-5ee5-11e6-8661-6c0b84a45d4a', that belongs to datacenter
> 'DC1', using any host
> transfer.id = '2c2c2aef-4d3f-487e-adb4-228f41501288'
> imageio features: flush=True trim=False zero=True
> unix_socket='\x00/org/ovirt/imageio'
> nbdkit: python[1]: debug: newstyle negotiation: flags: global 0x3
> ....
> 
> 
> Verify result:
>     v2v will find Host2 doesn't belong to the data center "DC1" and give 
> message "cannot find a running host with hw_id=...". Then it will start the
> transfer on any host, and engine will start the transfer on Host1. virt-v2v
> will perform the upload using HTTPS to Host1. 

This looks right.

Comment 20 Nir Soffer 2018-08-07 12:57:05 UTC
(In reply to mxie from comment #15)
> Try to reproduce the bug with builds:
> virt-v2v-1.38.2-6.el7.x86_64
...
> optimizing connection using unix socket '\x00/org/ovirt/imageio'

Looks right - we uploaded to storage "data", belonging to DC2, so Host2 can
access this disk, and we optimized the upload using the unix socket.

...
>   There is no info "cannot find a running host with hw_id=..., that belongs
> to datacenter 'DC1', using any host"

We don't expect this error, because storage "data" now belongs to "DC2".

Comment 21 Nir Soffer 2018-08-07 15:26:26 UTC
(In reply to mxie from comment #17)
> # virt-v2v -ic
> vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/
> ?no_verify=1  esx6.7-win2008r2-x86_64 -o rhv-upload -oc
> https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os data
> -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct=true -of raw
> --password-file /tmp/passwd -v -x |& tee >  host2-data-v2v_10-ndkit_6-1.log
...
> optimizing connection using unix socket '\x00/org/ovirt/imageio'
Looks right

...
>     There is no info "cannot find a running host with hw_id=..." in v2v
> debug log but v2v uses unix socket during converting guest

Looks verified to me, but we could not reproduce the issue. I think this is good
enough.

Comment 22 mxie@redhat.com 2018-08-08 11:18:32 UTC
Thanks Nir, but the comments above are too messy and not clear, so here is a summary to reproduce and verify the bug again


Try to reproduce the bug with builds:
virt-v2v-1.38.2-6.el7.x86_64
libguestfs-1.38.2-6.el7.x86_64
nbdkit-1.2.4-4.el7.x86_64

Steps to reproduce:
1.Set up environment as below on rhv4.2:

DC1
  Cluster:Default
    Host1 
      data2
DC2 
  Cluster:P2V
    Host2 
      data

Scenario1:
1.Convert guest from Host2 to data by virt-v2v
# virt-v2v -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1  esx6.7-win8.1-x86_64 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct=true -oo rhv-cluster=P2V -of raw --password-file /tmp/passwd -v -x |& tee >  host2-data-cluster-p2v-reproduce.log

2.Check v2v debug log
# cat host2-data-cluster-p2v-reproduce.log |grep hw_id -A 6
hw_id = 'c59eef52-5ee5-11e6-8661-6c0b84a45d4a'
host.id = '6d79c0ea-4747-4187-b748-db33ab11bce7'
transfer.id = 'd2eabe1a-9e82-4783-87f2-7a6a6271ee2b'
imageio features: flush=True trim=False zero=True unix_socket='\x00/org/ovirt/imageio'
optimizing connection using unix socket '\x00/org/ovirt/imageio'
nbdkit: python[1]: debug: newstyle negotiation: flags: global 0x3
nbdkit: python[1]: debug: newstyle negotiation: client flags: 0x3

3.The guest will be converted to DC2-data successfully and the guest checkpoints pass, except for bug 1609618

Reproduce result1:
    Virt-v2v didn't search which datacenter "data" belongs to, so there is no datacenter name in the v2v debug log


Scenario2:
1.Convert guest from Host2 to data2 by v2v
# virt-v2v -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1  esx6.7-win7-i386 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os data2 -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct=true -of raw --password-file /tmp/passwd -v -x |& tee >  host2-data2-cluster-default-reproduce.log

2.Check v2v debug log
# cat host2-data2-cluster-default-reproduce.log |grep hw_id -A 7
hw_id = 'c59eef52-5ee5-11e6-8661-6c0b84a45d4a'
host.id = '6d79c0ea-4747-4187-b748-db33ab11bce7'
transfer.id = 'd0f7ba68-20ae-4340-8d3f-ec9b526b82a3'
nbdkit: error: /var/tmp/rhvupload.KDLi78/rhv-upload-plugin.py: open: error: direct upload to host not supported, requires ovirt-engine >= 4.2 and only works when virt-v2v is run within the oVirt/RHV environment, eg. on an oVirt node.
nbdkit: debug: connection cleanup with final status -1
qemu-img: Could not open 'json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.KDLi78/nbdkit1.sock", "file.export": "/" }': Failed to read data: Unexpected end-of-file before all bytes were read

virt-v2v: error: qemu-img command failed, see earlier errors


Reproduce result2:
    Virt-v2v didn't search which datacenter "data2" belongs to, so there is no datacenter name in the v2v debug log



Verify the bug with builds:
virt-v2v-1.38.2-10.el7.x86_64
libvirt-4.5.0-6.el7.x86_64
qemu-kvm-rhev-2.12.0-9.el7.x86_64
nbdkit-1.2.6-1.el7.x86_64

Steps:
1.Set up environment as below on rhv4.2:

DC1
  Cluster:Default
    Host1 
      data2
DC2 
  Cluster:P2V
    Host2 
      data

Scenario1:
1.Convert guest from Host2 to data
# virt-v2v -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1  esx6.7-win2008r2-x86_64 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct=true -oo rhv-cluster=P2V -of raw --password-file /tmp/passwd -v -x |& tee >  host2-data-cluster-p2v.log

2.Check v2v debug log
# cat host2-data-cluster-p2v.log |grep hw_id -A 6
hw_id = 'c59eef52-5ee5-11e6-8661-6c0b84a45d4a'
datacenter = DC2
host.id = '6d79c0ea-4747-4187-b748-db33ab11bce7'
transfer.id = '726aa9e1-1233-429b-b073-e479e802ba58'
imageio features: flush=True trim=False zero=True unix_socket='\x00/org/ovirt/imageio'
optimizing connection using unix socket '\x00/org/ovirt/imageio'
nbdkit: python[1]: debug: newstyle negotiation: flags: global 0x3

3.The guest will be converted to DC2/data successfully and the guest checkpoints pass, except for bug 1609618

Verify result1:
     Virt-v2v can find that "data" belongs to datacenter "DC2" and shows the datacenter name correctly in the debug log


Scenario2:
1.Convert guest from Host2 to data2 by v2v
# virt-v2v -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1  esx6.7-win2008r2-x86_64 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os data2 -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct=true -of raw --password-file /tmp/passwd -v -x |& tee >  host2-data2-cluster-default.log

2.Check v2v debug log
# cat host2-data2-cluster-default.log |grep hw_id -A 6
hw_id = 'c59eef52-5ee5-11e6-8661-6c0b84a45d4a'
datacenter = DC1
cannot find a running host with hw_id='c59eef52-5ee5-11e6-8661-6c0b84a45d4a', that belongs to datacenter 'DC1', using any host
transfer.id = '5a9dd568-8baf-41f9-8f5b-8e03559af200'
imageio features: flush=True trim=False zero=True unix_socket='\x00/org/ovirt/imageio'
nbdkit: python[1]: debug: newstyle negotiation: flags: global 0x3
nbdkit: python[1]: debug: newstyle negotiation: client flags: 0x3
nbdkit: python[1]: debug: newstyle negotiation: advertising export '/'
nbdkit: python[1]: debug: newstyle negotiation: client requested export '/' (ignored)

3.The guest will be converted to DC1/data2 successfully and the guest checkpoints pass

Verify result2:
      v2v can find that "data2" belongs to datacenter "DC1" and shows the datacenter name correctly in the debug log




Hi rjones, 

    According to the results above, v2v now searches which datacenter the data domain belongs to and shows the datacenter name correctly in the debug log during rhv-upload conversion on an ovirt node

    But if the rhv-upload conversion is started on a server which is not an ovirt node, v2v will not show the datacenter name in the log; is that normal?

  1.# virt-v2v rhel7.6 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engi/api -os data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct -of raw -oa preallocated -oo rhv-cluster=P2V -b ovirtmgmt -v -x |& tee > data-p2v.log
  2.# cat data-p2v.log |grep hw_id -A 6
     nothing
  3.# cat data-p2v.log |grep datacenter
     nothing

Comment 23 Richard W.M. Jones 2018-08-09 10:15:25 UTC
Yes this is normal.

The find_host function
https://github.com/libguestfs/libguestfs/blob/71b588bab0f52d9701ba97aa7bc76217796fe556/v2v/rhv-upload-plugin.py#L60
tries to find if we are running on an ovirt node, and if so
it tries to see if we can use an optimization to access imageio
over a Unix domain socket.

If this fails (which is completely normal) then it falls back to
accessing imageio over a remote TCP socket.  It's a bit slower
but should still work.

As a side effect of looking for the ovirt node, host, etc. it
prints some debugging information about the hw_id, datacenter
etc.  If we're not running on an ovirt node then the debugging
information does not get printed.
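
As a rough illustration of that fallback (a sketch only, using Python's standard
http.client; the helper and parameter names here are hypothetical, not the
plugin's actual code): if the transfer runs on this host and imageio advertises
a unix socket, talk to it directly, otherwise upload over HTTPS:

    import socket
    from http import client

    class UnixHTTPConnection(client.HTTPConnection):
        # HTTP over a unix domain socket (hypothetical helper for this sketch).
        def __init__(self, path):
            super().__init__("localhost")
            self.path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.path)

    def open_imageio_connection(features, on_transfer_host, remote_host):
        unix_socket = features.get("unix_socket")
        if on_transfer_host and unix_socket:
            # The "optimizing connection using unix socket ..." case in the log.
            return UnixHTTPConnection(unix_socket)
        # Fallback: upload over HTTPS to the remote imageio daemon.
        # Slower, but still works from any machine.
        return client.HTTPSConnection(remote_host)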

Comment 24 mxie@redhat.com 2018-08-09 10:49:29 UTC
Thanks rjones. According to comment 22 and comment 23, moving the bug from ON_QA to VERIFIED

Comment 26 errata-xmlrpc 2018-10-30 07:47:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:3021

