Bug 1426573 - VM import failed from oVirt (SSH connection issue)
Summary: VM import failed from oVirt (SSH connection issue)
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: vdsm
Classification: oVirt
Component: Core
Version: ---
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Tomas Jelinek
QA Contact: Pavel Stehlik
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-02-24 11:08 UTC by Udayendu Kar
Modified: 2019-04-04 12:44 UTC (History)
6 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-04-05 10:37:37 UTC
oVirt Team: Virt
Embargoed:


Attachments
vdsm log when vm import failed. (5.11 MB, application/x-bzip)
2017-03-09 11:37 UTC, Udayendu Kar


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1303548 0 unspecified CLOSED [RFE] Add ability to import RHEL Xen guest images directly into oVirt 2021-02-22 00:41:40 UTC

Description Udayendu Kar 2017-02-24 11:08:47 UTC
Description of problem:
VM import from the oVirt GUI fails, with the error message below on the KVM host that holds the VM to be imported:

---
journal: vdsm root ERROR error connection to hypervisor: 'Cannot recv data: Host key verification failed.: Connection reset by peer'
---

And the following error message appears in the oVirt engine log:

---
HostName = XXXXXX, GetVmsFromExternalProviderParameters:{runAsync='true', hostId='3652c313-b3f1-4856-8440-0b1365be249a', url='qemu+ssh://root.xxx.xxx/system', username='root', originType='KVM'})' execution failed: VDSGenericException: VDSErrorException: Failed to GetVmsFromExternalProviderVDS, error = Cannot recv data: Host key verification failed.: Connection reset by peer, code = 65
---

Version-Release number of selected component (if applicable):
From the oVirt Node based host:
# rpm -qa | egrep 'qemu|libvirt|vdsm'
vdsm-cli-4.18.11-1.el7.centos.noarch
libvirt-daemon-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.5.x86_64
qemu-img-ev-2.3.0-31.el7.16.1.x86_64
vdsm-api-4.18.11-1.el7.centos.noarch
libvirt-daemon-kvm-1.2.17-13.el7_2.5.x86_64
vdsm-hook-ethtool-options-4.18.11-1.el7.centos.noarch
libvirt-python-1.2.17-2.el7.x86_64
vdsm-hook-openstacknet-4.18.11-1.el7.centos.noarch
vdsm-yajsonrpc-4.18.11-1.el7.centos.noarch
vdsm-xmlrpc-4.18.11-1.el7.centos.noarch
vdsm-jsonrpc-4.18.11-1.el7.centos.noarch
libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-interface-1.2.17-13.el7_2.5.x86_64
libvirt-lock-sanlock-1.2.17-13.el7_2.5.x86_64
qemu-kvm-common-ev-2.3.0-31.el7.16.1.x86_64
libvirt-daemon-driver-network-1.2.17-13.el7_2.5.x86_64
vdsm-infra-4.18.11-1.el7.centos.noarch
libvirt-daemon-driver-storage-1.2.17-13.el7_2.5.x86_64
qemu-kvm-ev-2.3.0-31.el7.16.1.x86_64
vdsm-hook-vmfex-dev-4.18.11-1.el7.centos.noarch
vdsm-gluster-4.18.11-1.el7.centos.noarch
qemu-kvm-tools-ev-2.3.0-31.el7.16.1.x86_64
ipxe-roms-qemu-20130517-8.gitc4bce43.el7_2.1.noarch
libvirt-client-1.2.17-13.el7_2.5.x86_64
vdsm-python-4.18.11-1.el7.centos.noarch
libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-secret-1.2.17-13.el7_2.5.x86_64
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.5.x86_64
vdsm-4.18.11-1.el7.centos.x86_64
vdsm-hook-fcoe-4.18.11-1.el7.centos.noarch

From the CentOS-based standalone KVM host (the VM is running here, managed with the virt-manager tool):
# rpm -qa | egrep 'libvirt|qemu|kvm'
libvirt-daemon-driver-network-2.0.0-10.el7_3.4.x86_64
libvirt-daemon-config-nwfilter-2.0.0-10.el7_3.4.x86_64
libvirt-gconfig-0.1.9-1.el7.x86_64
libvirt-daemon-driver-nodedev-2.0.0-10.el7_3.4.x86_64
libvirt-glib-0.1.9-1.el7.x86_64
libvirt-client-2.0.0-10.el7_3.4.x86_64
qemu-img-1.5.3-126.el7_3.3.x86_64
libvirt-daemon-driver-interface-2.0.0-10.el7_3.4.x86_64
libvirt-daemon-driver-lxc-2.0.0-10.el7_3.4.x86_64
qemu-kvm-1.5.3-126.el7_3.3.x86_64
libvirt-2.0.0-10.el7_3.4.x86_64
libvirt-daemon-2.0.0-10.el7_3.4.x86_64
libvirt-daemon-driver-storage-2.0.0-10.el7_3.4.x86_64
libvirt-daemon-driver-secret-2.0.0-10.el7_3.4.x86_64
libvirt-python-2.0.0-2.el7.x86_64
libvirt-daemon-kvm-2.0.0-10.el7_3.4.x86_64
qemu-guest-agent-2.3.0-4.el7.x86_64
libvirt-daemon-driver-qemu-2.0.0-10.el7_3.4.x86_64
ipxe-roms-qemu-20130517-7.gitc4bce43.el7.noarch
libvirt-gobject-0.1.9-1.el7.x86_64
libvirt-daemon-driver-nwfilter-2.0.0-10.el7_3.4.x86_64
libvirt-daemon-config-network-2.0.0-10.el7_3.4.x86_64
qemu-kvm-common-1.5.3-126.el7_3.3.x86_64

From oVirt Manager:
# rpm -qa | egrep 'ovirt'
ovirt-engine-setup-4.0.5.5-1.el7.centos.noarch
ovirt-engine-userportal-4.0.5.5-1.el7.centos.noarch
ovirt-imageio-common-0.4.0-1.el7.noarch
ovirt-engine-extension-aaa-jdbc-1.1.1-1.el7.noarch
ovirt-engine-wildfly-overlay-10.0.0-1.el7.noarch
ovirt-engine-setup-base-4.0.5.5-1.el7.centos.noarch
ovirt-vmconsole-1.0.4-1.el7.centos.noarch
ovirt-imageio-proxy-setup-0.4.0-0.201608310602.gita9b573b.el7.centos.noarch
ovirt-engine-websocket-proxy-4.0.5.5-1.el7.centos.noarch
ovirt-engine-dashboard-1.0.5-1.el7.centos.noarch
ovirt-engine-tools-4.0.5.5-1.el7.centos.noarch
ovirt-release40-4.0.5-2.noarch
ovirt-setup-lib-1.0.2-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch
ovirt-engine-extensions-api-impl-4.0.5.5-1.el7.centos.noarch
ovirt-engine-wildfly-10.1.0-1.el7.x86_64
python-ovirt-engine-sdk4-4.0.2-1.el7.centos.x86_64
ovirt-engine-lib-4.0.5.5-1.el7.centos.noarch
ovirt-engine-cli-3.6.8.1-1.el7.centos.noarch
ovirt-vmconsole-proxy-1.0.4-1.el7.centos.noarch
ovirt-host-deploy-java-1.5.3-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.0.5.5-1.el7.centos.noarch
ovirt-engine-dwh-4.0.5-1.el7.centos.noarch
ovirt-imageio-proxy-0.4.0-0.201608310602.gita9b573b.el7.centos.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.0.5.5-1.el7.centos.noarch
ovirt-iso-uploader-4.0.2-1.el7.centos.noarch
ovirt-engine-dbscripts-4.0.5.5-1.el7.centos.noarch
ovirt-engine-webadmin-portal-4.0.5.5-1.el7.centos.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.0.5.5-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.0.5.5-1.el7.centos.noarch
ovirt-engine-backend-4.0.5.5-1.el7.centos.noarch
ovirt-engine-4.0.5.5-1.el7.centos.noarch
ovirt-engine-vmconsole-proxy-helper-4.0.5.5-1.el7.centos.noarch
ovirt-engine-restapi-4.0.5.5-1.el7.centos.noarch
ovirt-host-deploy-1.5.3-1.el7.centos.noarch
ovirt-engine-dwh-setup-4.0.5-1.el7.centos.noarch
ovirt-engine-tools-backup-4.0.5.5-1.el7.centos.noarch
ovirt-image-uploader-4.0.1-1.el7.centos.noarch

oVirt Node Version: oVirt Node 4.0.3


How reproducible:
100%

Steps to Reproduce:
1. Deploy an oVirt manager server.
2. Add a few oVirt nodes to the oVirt manager to form a cluster.
3. Try to import a VM running on another CentOS 7.2-based KVM host, using the import option in the oVirt GUI.
4. The import fails with the error mentioned in the description above.

Actual results:
The import from the KVM host into the oVirt environment fails.

Expected results:
The import should work without any error.

Additional info:
To fix it, I set up passwordless SSH from the 'vdsm' user on the oVirt host to the 'root' user on the KVM host using the steps below:

  1. Generate an SSH key pair for the vdsm user:

     # sudo -u vdsm ssh-keygen

  2. Log in once and confirm the host key fingerprint by answering "yes":

     # sudo -u vdsm ssh root.xxx.xxx

  3. Copy the key to the KVM host running the VM:

     # sudo -u vdsm ssh-copy-id root.xxx.xxx

  4. Verify that the vdsm user can log in without a password:

     # sudo -u vdsm ssh root.xxx.xxx


Now log in to the oVirt admin portal and try to import the VM from the source host, using "qemu+ssh://root.xxx.xxx/system" as the URI.

NOTE: Import from an ESXi host also failed, but I didn't get a chance to do more analysis on that.
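
For reference, a minimal libvirt-python sketch (not part of the original report) that verifies, as the vdsm user, the same qemu+ssh connection the import path opens. The URI keeps the masked placeholder host used throughout this report:

    # Sketch: check that the vdsm user can open the libvirt connection the
    # import wizard uses. Run as the vdsm user, e.g.:
    #   sudo -u vdsm python check_qemu_ssh.py
    # The URI below keeps the masked placeholder host from this report.
    import sys
    import libvirt

    URI = 'qemu+ssh://root.xxx.xxx/system'

    try:
        conn = libvirt.open(URI)  # fails if key-based SSH auth is not set up
    except libvirt.libvirtError as e:
        print('connection failed: %s' % e)
        sys.exit(1)

    # listAllDomains(0) returns both running and shut-off domains
    for dom in conn.listAllDomains(0):
        state = 'running' if dom.isActive() else 'shut off'
        print('%s (%s)' % (dom.name(), state))
    conn.close()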

Comment 1 Udayendu Kar 2017-02-24 11:10:02 UTC
Found a similar bug with Xen: https://bugzilla.redhat.com/show_bug.cgi?id=1303548

Comment 2 Michal Skrivanek 2017-02-25 06:16:18 UTC
Can you provide the ssh -vv output from step 4, please?

Comment 3 Udayendu Kar 2017-02-25 09:39:15 UTC
Hi,

Please find the requested output:

# sudo -u vdsm ssh -vv root.xxx.xxx
OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 56: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to xxx.xxx.xxx.xxx0 [xxx.xxx.xxx.xxx] port 22.
debug1: Connection established.
debug1: identity file /var/lib/vdsm/.ssh/id_rsa type 1
debug1: identity file /var/lib/vdsm/.ssh/id_rsa-cert type -1
debug1: identity file /var/lib/vdsm/.ssh/id_dsa type -1
debug1: identity file /var/lib/vdsm/.ssh/id_dsa-cert type -1
debug1: identity file /var/lib/vdsm/.ssh/id_ecdsa type -1
debug1: identity file /var/lib/vdsm/.ssh/id_ecdsa-cert type -1
debug1: identity file /var/lib/vdsm/.ssh/id_ed25519 type -1
debug1: identity file /var/lib/vdsm/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.6.1
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1
debug1: match: OpenSSH_6.6.1 pat OpenSSH_6.6.1* compat 0x04000000
debug2: fd 3 setting O_NONBLOCK
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: curve25519-sha256,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ecdsa-sha2-nistp256-cert-v01,ecdsa-sha2-nistp384-cert-v01,ecdsa-sha2-nistp521-cert-v01,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519-cert-v01,ssh-rsa-cert-v01,ssh-dss-cert-v01,ssh-rsa-cert-v00,ssh-dss-cert-v00,ssh-ed25519,ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm,aes256-gcm,chacha20-poly1305,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm,aes256-gcm,chacha20-poly1305,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc.se
debug2: kex_parse_kexinit: hmac-md5-etm,hmac-sha1-etm,umac-64-etm,umac-128-etm,hmac-sha2-256-etm,hmac-sha2-512-etm,hmac-ripemd160-etm,hmac-sha1-96-etm,hmac-md5-96-etm,hmac-md5,hmac-sha1,umac-64,umac-128,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5-etm,hmac-sha1-etm,umac-64-etm,umac-128-etm,hmac-sha2-256-etm,hmac-sha2-512-etm,hmac-ripemd160-etm,hmac-sha1-96-etm,hmac-md5-96-etm,hmac-md5,hmac-sha1,umac-64,umac-128,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,zlib,zlib
debug2: kex_parse_kexinit: none,zlib,zlib
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: kex_parse_kexinit: curve25519-sha256,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa,ecdsa-sha2-nistp256,ssh-ed25519
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm,aes256-gcm,chacha20-poly1305,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm,aes256-gcm,chacha20-poly1305,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc.se
debug2: kex_parse_kexinit: hmac-md5-etm,hmac-sha1-etm,umac-64-etm,umac-128-etm,hmac-sha2-256-etm,hmac-sha2-512-etm,hmac-ripemd160-etm,hmac-sha1-96-etm,hmac-md5-96-etm,hmac-md5,hmac-sha1,umac-64,umac-128,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5-etm,hmac-sha1-etm,umac-64-etm,umac-128-etm,hmac-sha2-256-etm,hmac-sha2-512-etm,hmac-ripemd160-etm,hmac-sha1-96-etm,hmac-md5-96-etm,hmac-md5,hmac-sha1,umac-64,umac-128,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,zlib
debug2: kex_parse_kexinit: none,zlib
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: mac_setup: setup hmac-md5-etm
debug1: kex: server->client aes128-ctr hmac-md5-etm none
debug2: mac_setup: setup hmac-md5-etm
debug1: kex: client->server aes128-ctr hmac-md5-etm none
debug1: kex: curve25519-sha256 need=16 dh_need=16
debug1: kex: curve25519-sha256 need=16 dh_need=16
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ECDSA 86:80:49:84:13:42:5a:46:a5:7b:70:50:bb:c5:23:02
debug1: Host 'xxx.xxx.xxx.xxx' is known and matches the ECDSA host key.
debug1: Found key in /var/lib/vdsm/.ssh/known_hosts:1
debug1: ssh_ecdsa_verify: signature correct
debug2: kex_derive_keys
debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug2: key: /var/lib/vdsm/.ssh/id_rsa (0x7f01528fd740),
debug2: key: /var/lib/vdsm/.ssh/id_dsa ((nil)),
debug2: key: /var/lib/vdsm/.ssh/id_ecdsa ((nil)),
debug2: key: /var/lib/vdsm/.ssh/id_ed25519 ((nil)),
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Next authentication method: gssapi-keyex
debug1: No valid Key exchange context
debug2: we did not send a packet, disable method
debug1: Next authentication method: gssapi-with-mic
debug1: Unspecified GSS failure.  Minor code may provide more information
No Kerberos credentials available

debug1: Unspecified GSS failure.  Minor code may provide more information
No Kerberos credentials available

debug1: Unspecified GSS failure.  Minor code may provide more information


debug1: Unspecified GSS failure.  Minor code may provide more information
No Kerberos credentials available

debug2: we did not send a packet, disable method
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /var/lib/vdsm/.ssh/id_rsa
debug2: we sent a publickey packet, wait for reply
debug1: Server accepts key: pkalg ssh-rsa blen 279
debug2: input_userauth_pk_ok: fp a7:54:51:d7:44:8c:f1:3c:17:3e:16:18:d5:76:65:17
debug1: key_parse_private2: missing begin marker
debug1: read PEM private key done: type RSA
debug1: Authentication succeeded (publickey).
Authenticated to xxx.xxx.xxx.xxx ([xxx.xxx.xxx.xxx]:22).
debug1: channel 0: new [client-session]
debug2: channel 0: send open
debug1: Requesting no-more-sessions
debug1: Entering interactive session.
debug2: callback start
debug2: fd 3 setting TCP_NODELAY
debug2: client_session2_setup: id 0
debug2: channel 0: request pty-req confirm 1
debug1: Sending environment.
debug1: Sending env LANG = en_US.UTF-8
debug2: channel 0: request env confirm 0
debug2: channel 0: request shell confirm 1
debug2: callback done
debug2: channel 0: open confirm rwindow 0 rmax 32768
debug2: channel_input_status_confirm: type 99 id 0
debug2: PTY allocation request accepted on channel 0
debug2: channel 0: rcvd adjust 2097152
debug2: channel_input_status_confirm: type 99 id 0
debug2: shell request accepted on channel 0
Last login: Fri Feb 24 22:18:09 2017 from yyy.yyy.yyy.yyy


Here I have masked the KVM host IP with xxx.xxx.xxx.xxx and the oVirt node IP with yyy.yyy.yyy.yyy. These are the only changes.

Let me know if you need further info from my side.

Thanks,
--Uday

Comment 4 Tomáš Golembiovský 2017-02-27 14:43:08 UTC
Could you also try if this works or not:

    sudo -u vdsm virsh -c 'qemu+ssh://root.xxx.xxx/system' list

Comment 5 Michal Skrivanek 2017-02-28 07:54:14 UTC
(In reply to Tomáš Golembiovský from comment #4)
> Could you also try if this works or not:
> 
>     sudo -u vdsm virsh -c 'qemu+ssh://root.xxx.xxx/system' list

This should probably be added to the documentation as an additional troubleshooting step. There might be weird libvirt configurations out there...

Comment 6 Udayendu Kar 2017-03-05 10:14:08 UTC
# sudo -u vdsm virsh -c 'qemu+ssh://root.xxx.xxx/system' list
 Id    Name                           State
----------------------------------------------------

The above command executed successfully, but it did not list the VM, even though one VM exists on that host.

# virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     xxx-xxxxx-xxx                shut off


Let me know if you need more info from my side.

Thanks,
Uday
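
A likely explanation for the empty listing above: `virsh list` without `--all` shows only running domains, and the only VM on that host is shut off. A short libvirt-python sketch (not from the original report, same masked placeholder URI) that lists the inactive domains as well:

    # Sketch: list shut-off (inactive) domains over the same connection.
    # `virsh list` without --all only shows running domains, which explains
    # the empty listing above. The URI is the masked placeholder host again.
    import libvirt

    conn = libvirt.open('qemu+ssh://root.xxx.xxx/system')
    inactive = conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_INACTIVE)
    print([dom.name() for dom in inactive])
    conn.close()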

Comment 7 Tomáš Golembiovský 2017-03-08 12:21:11 UTC
Could you also attach the vdsm.log from the period around VM import?

Comment 8 Udayendu Kar 2017-03-09 11:37:23 UTC
Created attachment 1261515 [details]
vdsm log when vm import failed.

Comment 9 Tomáš Golembiovský 2017-03-10 13:22:53 UTC
I tried to reproduce your error with the combination of oVirt Node 4.0.3 and CentOS. Using the same versions of the RPMs you mentioned in the description of the bug, I cannot reproduce the issue and everything works fine for me. It seems there is some configuration issue in your environment.

Let's enable libvirt debug logging and see what 'ssh' command libvirt is actually trying to invoke. On the oVirt Node, issue the command:

    # systemctl edit vdsmd

Enter the following into the text file:

    [Service]
    Environment="LIBVIRT_DEBUG=1"

Save and close the file.
Then open /etc/vdsm/vdsm.conf and in the "[vars]" section add the following line:

    libvirt_env_variable_log_outputs = 1:file:/tmp/libvirt_client.log

Save and close the file. Then restart vdsm with the following command:

    systemctl restart vdsmd

Wait for vdsm to come back up and perform the VM import as usual. Once the listing of VMs fails, revert all the changes to vdsm.conf and the vdsmd service so that your system is not slowed down by debug logging.

Look into the file /tmp/libvirt_client.log that has been created and send us the lines from around the import (search for the "ssh" string), notably the line that invokes the ssh client, starting with "About to run ...".
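
A small helper sketch (hypothetical, based only on the log path and the "About to run" marker mentioned above) for pulling the relevant lines out of the debug log:

    # Hypothetical helper: print the ssh-related lines from the libvirt
    # client debug log produced by the steps above. The log path and the
    # "About to run" marker are taken from this comment.
    LOG = '/tmp/libvirt_client.log'

    with open(LOG) as f:
        for line in f:
            if 'ssh' in line or 'About to run' in line:
                print(line.rstrip())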

Comment 10 Yaniv Kaul 2017-03-27 07:57:22 UTC
Please reopen when you can provide the relevant log.

Comment 11 Udayendu Kar 2017-03-27 14:38:01 UTC
Hi Yaniv,

I have started building a new setup to test it again, so I should be able to provide an update by tomorrow. Reopening it; sorry for the delay.

Comment 12 Tomas Jelinek 2017-04-05 10:37:37 UTC
I'm assuming it did not happen after the reinstall. If it did happen and you have all the logs, please reopen and attach them.

Comment 13 Udayendu Kar 2017-04-09 01:57:29 UTC
Hi,

Sorry for the delay. I tried to reproduce it with a new setup but was unable to. This indicates that the issue was specific to the old setup.

So closing it now.

Thanks,
Uday

Comment 14 Nicolas Ecarnot 2019-04-01 15:02:44 UTC
Hello,

I'd like to re-open this bug.
On a 4.3.1 DC, I'm trying to import VMs from a libvirt setup.
Manually, from the SPM, I can successfully access the remote libvirt:

root@mvm04:~# LANG=C virsh -c 'qemu+ssh://toor.sdis38.fr/system' list
 Id    Name                           State
----------------------------------------------------
 6     uc-7-vm04.sdis38.fr            running
 7     uc-7-vm05.sdis38.fr            running
 8     uc-7-vm03.sdis38.fr            running
 10    uc-7-vm02.sdis38.fr            running

But when trying the same through the web GUI, it fails.

The vdsm.log shows:

2019-04-01 16:52:19,070+0200 ERROR (jsonrpc/2) [root] error connecting to hypervisor (v2v:194)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 192, in get_external_vm_names
    passwd=password)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 107, in open_connection
    return function.retry(libvirtOpen, timeout=10, sleep=0.2)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 58, in retry
    return func()
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 104, in openAuth
    if ret is None:raise libvirtError('virConnectOpenAuth() failed')
libvirtError: Cannot recv data: Permission denied, please try again.^M
Permission denied, please try again.^M
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).: Connection reset by peer
2019-04-01 16:52:19,071+0200 INFO  (jsonrpc/2) [api.host] FINISH getExternalVMNames return={'status': {'message': 'Cannot recv data: Permission denied, please try again.\r\nPermission denied, please try again.\r\nPermission denied (publickey,gssapi-keyex,gssapi-with-mic,password).: Connection reset by peer', 'code': 65}} from=::ffff:192.168.39.60,58876, flow_id=c456bc93-9bb0-4d70-a0b9-cb65a53cbc66 (api:54)
2019-04-01 16:52:19,071+0200 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getExternalVMNames failed (error 65) in 10.01 seconds (__init__:312)


As I've set up SSH keys, I don't need a password.
I've also tried with and without a login+password, and nothing works.
The target's /var/log/secure seems to show that the connection is attempted WITHOUT using any key.
How can I check / debug whether the SSH call is using the key?
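
For reference, a minimal sketch of the call shown in the traceback above: vdsm's v2v path opens the remote connection with libvirt.openAuth() and a credential callback. Running something like this with 'sudo -u vdsm' shows whether the vdsm user's key is picked up (the username/password values and the URI handling are placeholders, not taken from the report):

    # Sketch of the openAuth() call from the traceback above. The credential
    # callback supplies the username/password entered in the GUI; with
    # working key-based SSH auth for the vdsm user, ssh should not fall back
    # to a password prompt at all. Username/password below are placeholders.
    import libvirt

    def request_cred(credentials, user_data):
        for cred in credentials:
            if cred[0] == libvirt.VIR_CRED_AUTHNAME:
                cred[4] = 'root'        # placeholder username
            elif cred[0] == libvirt.VIR_CRED_PASSPHRASE:
                cred[4] = 'secret'      # placeholder password
        return 0

    auth = [[libvirt.VIR_CRED_AUTHNAME, libvirt.VIR_CRED_PASSPHRASE],
            request_cred, None]
    conn = libvirt.openAuth('qemu+ssh://toor.sdis38.fr/system', auth, 0)
    print([dom.name() for dom in conn.listAllDomains(0)])
    conn.close()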

Comment 15 Tomáš Golembiovský 2019-04-02 08:28:58 UTC
(In reply to Nicolas Ecarnot from comment #14)
> Hello,
> 
> I'd like to re-open this bug.
> On a 4.3.1 DC, I'm trying to import VMs from a libvirt setup.
> Manually, from the SPM, I can successfully access the remote libvirt :
> 
> root@mvm04:~# LANG=C virsh -c 'qemu+ssh://toor.sdis38.fr/system'

We run the conversion process as 'vdsm' user. You have to use:
  # sudo -u vdsm virsh ...
  # sudo -u vdsm ssh ...

Comment 16 Nicolas Ecarnot 2019-04-04 09:39:00 UTC
Hello,

As I see people cannot reproduce it *with oVirt Nodes*, and as I'm still facing this issue on CentOS hosts, I guess some specific setup has been done on the nodes.

The issue is that when running it manually, either as vdsm or as root, it works:

root@hv06:~# sudo -u vdsm virsh -c 'qemu+ssh://toor.sdis38.fr/system' list
toor.sdis38.fr's password: 
 Id    Name                           State
----------------------------------------------------
 6     uc-7-vm04.sdis38.fr            running
 7     uc-7-vm05.sdis38.fr            running
 8     uc-7-vm03.sdis38.fr            running
 10    uc-7-vm02.sdis38.fr            running

root@hv06:~# virsh -c 'qemu+ssh://toor.sdis38.fr/system' list
 Id    Name                           State
----------------------------------------------------
 6     uc-7-vm04.sdis38.fr            running
 7     uc-7-vm05.sdis38.fr            running
 8     uc-7-vm03.sdis38.fr            running
 10    uc-7-vm02.sdis38.fr            running

BUT when running it through the GUI, it fails. See my logs above in comment #14.

Comment 17 Nicolas Ecarnot 2019-04-04 11:58:15 UTC
Hello,

After a while, I read the fine manual and found out that (as with Uday's workaround) I had to create and add an SSH key:

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/html-single/virtual_machine_management_guide/index#Importing_a_Virtual_Machine_from_KVM

As a side note, I did that only for the current SPM, but this role will float around my hosts. I'll either have to do that for every host, or move the SPM role whenever I want to import again.

Comment 18 Tomáš Golembiovský 2019-04-04 12:44:06 UTC
(In reply to Nicolas Ecarnot from comment #17)

> As a side note, I did that only for the present SPM, but this role will
> float around my hosts. I'll have either to do that for every host, or move
> the SPM role whenever I'll want to import again.

You should be able to pick a host in the Import dialog. You can always pick the same host, no matter which one is currently the SPM.

