Description of problem:
Adding RHEV-H to the engine fails on RHEV-H 7.1 because the SSH host key has the wrong permissions.

Version-Release number of selected component (if applicable):
7.1

How reproducible:

Steps to Reproduce:
1. Install, configure network with DHCP
2. Go to the RHEV-M page and set a root password (to prepare for from-engine-side registration)
3. Add the host from the engine side

Actual results:
The engine fails to access the host because sshd could not be started (look at the journal).

Expected results:
The host can be added to the engine.

Additional info:
Created attachment 991415 [details] Screenshot showing the error and fix
The build was rhev-hypervisor7-7.1-20150213.0.iso
Hi Fabian,

I couldn't reproduce your report with DHCP or a static address. Is it a clean system? Any additional info from the /var/log dir? audit2allow?

RHEV-M: 3.5.0-0.32.el6ev

# cat /etc/redhat-release
Red Hat Enterprise Virtualization Hypervisor 7.1 (20150213.0.el7ev)

# cat /etc/sysconfig/network-scripts/ifcfg-ens3
# Generated by VDSM version 4.16.8.1-6.el7ev
DEVICE=ens3
HWADDR=52:54:00:10:0f:83
BRIDGE=rhevm
ONBOOT=yes
MTU=1500
NM_CONTROLLED=no

# cat /etc/sysconfig/network-scripts/ifcfg-rhevm
# Generated by VDSM version 4.16.8.1-6.el7ev
DEVICE=rhevm
TYPE=Bridge
DELAY=0
STP=off
ONBOOT=yes
BOOTPROTO=dhcp  <---------- dhcp
MTU=1500
DEFROUTE=yes
NM_CONTROLLED=no
HOTPLUG=no

# ls -Zla /etc/ssh/ssh_host_rsa_key
-rw-r-----. 1 system_u:object_r:sshd_key_t:s0 root ssh_keys 1675 Feb 17 02:12 /etc/ssh/ssh_host_rsa_key

# getenforce
Enforcing

# tail -f /var/log/secure
Feb 17 02:17:40 localhost sshd[1842]: Received signal 15; terminating.
Feb 17 02:17:40 localhost sshd[16492]: Server listening on 0.0.0.0 port 22.
Feb 17 02:17:40 localhost sshd[16492]: Server listening on :: port 22.
Feb 17 02:17:40 localhost sshd[16492]: Received signal 15; terminating.
Feb 17 02:17:40 localhost sshd[16505]: Server listening on 0.0.0.0 port 22.
Feb 17 02:17:40 localhost sshd[16505]: Server listening on :: port 22.
Feb 17 02:18:02 localhost sshd[16553]: Accepted password for root from 192.168.122.79 port 45568 ssh2
Feb 17 02:18:02 localhost sshd[16553]: pam_unix(sshd:session): session opened for user root by (uid=0)
Feb 17 02:18:04 localhost sshd[16553]: pam_unix(sshd:session): session closed for user root
Feb 17 02:18:04 localhost sshd[16579]: Accepted password for root from 192.168.122.79 port 45569 ssh2
Feb 17 02:18:04 localhost sshd[16579]: pam_unix(sshd:session): session opened for user root by (uid=0)
Feb 17 02:18:35 localhost sshd[16579]: pam_unix(sshd:session): session closed for user root
Ah, it could have been an upgrade from RHEV-H 7.0 to RHEV-H 7.1.

Steps:
1. Install the latest RHEV-H 7.0, configure networking, and enable SSH by setting a password in the Engine page
2. Upgrade RHEV-H to the RHEV-H 7.1 image
3. Register RHEV-H to RHEV-M from the Engine side
(In reply to Fabian Deutsch from comment #4)
> Ah, it could have been an upgrade from a RHEV-H 7.0 to RHEV-H 7.1.
>
> Steps:
>
> 1. Install latest RHEV-H 7.0 and configure networking and enable SSH by
> setting a password in the Engine page

Hi Fabian,

Do you remember the RHEV-H 7.0 iso version?
It was the GA one: rhev-hypervisor7-7.0-20150127.0.el7ev
Hi,

I have reproduced the report using these steps:

1) Installed rhev-hypervisor7-7.0-20150127.0.el7ev
2) Set up the network via the Node TUI
3) Set a password via the TUI (RHEV-M tab) so the Node can be added via RHEV-M
4) Copied the 7.1 iso to the Node for the upgrade (or upgrade the node via CDROM):
   # scp rhev-hypervisor7-7.1-20150213.0.iso root@X.X:/data/updates/ovirt-node-image.iso
5) ssh to the node and execute the upgrade on the Node:
   # /usr/share/vdsm-reg/vdsm-reg-upgrade
6) Try to add the node via RHEV-M (it complains)
Hi Fabian,

Here are my findings:

Before upgrade
==============
# rpm -qa | grep -i ssh
openssh-server-6.4p1-8.el7
libssh2-1.4.3-9.el7
openssh-clients-6.4p1-8.el7
openssh-6.4p1-8.el7

-rw-r----- 1 root ssh_keys 1679 Feb 18 17:38 /etc/ssh/ssh_host_rsa_key
                  ^^^^^^^^ keep in mind this group
-rw-r--r-- 1 root root 382 Feb 18 17:38 /etc/ssh/ssh_host_rsa_key.pub

# cat /etc/group
<snip>
ssh_keys:x:998: <------------------- keep in mind these values
unbound:x:996:  <-------------------
</snip>

After upgrade
=============
# rpm -qa | grep -i ssh
openssh-server-6.6.1p1-11.el7.x86_64
openssh-6.6.1p1-11.el7.x86_64
openssh-clients-6.6.1p1-11.el7.x86_64
libssh2-1.4.3-8.el7.x86_64

-rw-r----- 1 root unbound 1679 Feb 18 17:38 /etc/ssh/ssh_host_rsa_key
                  ^^^^^^^ group owner changed from ssh_keys
-rw-r--r-- 1 root root 382 Feb 18 17:38 /etc/ssh/ssh_host_rsa_key.pub

# cat /etc/group
<snip>
ssh_keys:x:999: <------------------------ It was 998 before upgrade
unbound:x:998:  <------------------------ It was 996 before upgrade
</snip>
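The divergence above can be checked mechanically. A minimal sketch (the `gid_matches` helper is hypothetical, not part of RHEV-H): it compares the numeric GID stored on a file against the current GID of the named group, which is exactly what drifts here — the preserved key file keeps its old numeric GID (998), which the new image reassigned from ssh_keys to unbound.

```shell
#!/bin/sh
# Hypothetical helper: does the file's on-disk GID still match the named
# group's current GID? On an upgraded node this fails for ssh_keys on
# /etc/ssh/ssh_host_rsa_key, because the file kept its old numeric GID.
gid_matches() {
    want=$(getent group "$2" | cut -d: -f3)   # GID the group has now
    have=$(stat -c %g "$1")                   # GID stored on the file
    [ "$have" = "$want" ]
}

gid_matches /etc/ssh/ssh_host_rsa_key ssh_keys \
    && echo "key group OK" \
    || echo "key group stale"
```

Because `gid_matches` resolves the group by name each time it runs, the same check works before and after an upgrade regardless of how the numeric GIDs were reshuffled.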
I see. We had a similar issue with CIM, where the UIDs changed between upgrades.

The easiest fix for this bug is probably to change the ownership of the file. For the future we should find out how to handle UIDs properly; Atomic has an approach for this.
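A minimal sketch of that ownership fix (an assumption based on this comment, not the shipped patch; the `fix_host_key_perms` function and its arguments are made up for illustration): chgrp by group *name* so the preserved keys pick up whatever numeric GID ssh_keys was given in the new image, then restore the 640 mode sshd expects.

```shell
#!/bin/sh
# Hypothetical post-upgrade repair: re-point the preserved host keys at
# the ssh_keys group by name (not by number) and restore their mode.
fix_host_key_perms() {
    dir=${1:-/etc/ssh}       # key directory (default: the real one)
    group=${2:-ssh_keys}     # group the private keys should belong to
    for key in "$dir"/ssh_host_*_key; do
        [ -e "$key" ] || continue
        chgrp "$group" "$key"   # resolves the group's *current* GID
        chmod 640 "$key"        # root-readable, group-readable, no more
    done
}
```

Run with no arguments it would target /etc/ssh itself (root required); the parameters exist only so the idea can be exercised on a scratch directory.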
I could reproduce it with the following steps:

1. Install rhev-hypervisor7-7.0-20150127.0.el7ev, configure networking, and enable SSH by setting a password in the Engine page
2. Upgrade RHEV-H to rhev-hypervisor7-7.1-20150213.0.iso
3. Register RHEV-H to RHEV-M (Red Hat Enterprise Virtualization Manager Version: 3.5.0-0.32.el6ev) from the Engine side
Version-Release number of selected component (if applicable):
rhev-hypervisor7-7.1-20151015.0.el7ev
ovirt-node-3.2.3-23.el7.noarch
rhev-hypervisor7-7.2-20151112.1.el7ev
ovirt-node-3.6.0-0.20.20151103git3d3779a.el7ev.noarch

Test steps:
1. Install rhev-hypervisor7-7.1-20151015.0.el7ev
2. Log in to RHEV-H, configure the network via DHCP, and set the root password in the RHEV-M page
3. Upgrade RHEV-H to rhev-hypervisor7-7.2-20151112.1.el7ev
4. Register RHEV-H to RHEV-M (Red Hat Enterprise Virtualization Manager Version: 3.6.0.3-0.1.el6) from the Engine side

Test results:
4. Registering RHEV-H to RHEV-M from the Engine side succeeded.

This bug is fixed in rhev-hypervisor7-7.2-20151112.1.el7ev, so I will change the status to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0378.html