Bug 1434816

Summary: SELinux issue blocks running VMs on 4.1 ngn after update from 4.0 ngn as virtlogd fails to start
Product: [oVirt] ovirt-node Reporter: Jiri Belka <jbelka>
Component: Installation & Update    Assignee: Yuval Turgeman <yturgema>
Status: CLOSED CURRENTRELEASE QA Contact: Huijuan Zhao <huzhao>
Severity: urgent Docs Contact:
Priority: urgent    
Version: 4.1    CC: bugs, cshao, dguo, dougsland, hejeroaz, huzhao, jbelka, jiawu, mgoldboi, pbrilla, qiyuan, sbonazzo, weiwang, yaniwang, ycui, yzhao
Target Milestone: ovirt-4.1.1-1    Flags: rule-engine: ovirt-4.1+
rule-engine: blocker+
mgoldboi: planning_ack+
sbonazzo: devel_ack+
cshao: testing_ack+
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: imgbased-0.9.22-0.1.el7ev Doc Type: Bug Fix
Doc Text:
Cause: SELinux blocks access to virtlogd.conf.
Consequence: virtlogd fails to start.
Fix: Run setfiles with the new layer's file_contexts on the new /etc and /var, fixing the SELinux labels.
Result: virtlogd starts successfully.
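
A minimal sketch of that relabeling step (illustrative only: the /mnt/new-layer mount point and the targeted-policy path are assumptions, not the exact imgbased code):

# setfiles -v -r /mnt/new-layer /mnt/new-layer/etc/selinux/targeted/contexts/files/file_contexts /mnt/new-layer/etc /mnt/new-layer/var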
Story Points: ---
Clone Of: Environment:
Last Closed: 2017-04-21 09:33:59 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: Node    RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Attachments:
comment 16: All logs in /tmp and /var/log, sosreport (flags: none)

Description Jiri Belka 2017-03-22 12:18:29 UTC
Description of problem:

Can't run VMs on a 4.1 ngn after an update from a 4.0 ngn because virtlogd is not running. Surprisingly, vdsmd is happily running and the host is Up even though virtlogd is down.

(Again, we have no good check when allowing hosts to go Up... See BZ1434538.)

Anyway, see below:

[root@dell-r210ii-04 ~]# systemctl list-dependencies vdsmd --plain --no-pager | grep virt
  libvirtd.service
  ovirt-imageio-daemon.service
  ^^ where is virtlogd ?

[root@dell-r210ii-04 ~]# systemctl status virtlogd
● virtlogd.service - Virtual machine log manager
   Loaded: loaded (/usr/lib/systemd/system/virtlogd.service; indirect; vendor preset: disabled)
   Active: failed (Result: start-limit) since Wed 2017-03-22 12:53:34 CET; 16min ago
     Docs: man:virtlogd(8)
           http://libvirt.org
  Process: 24273 ExecStart=/usr/sbin/virtlogd $VIRTLOGD_ARGS (code=exited, status=1/FAILURE)
 Main PID: 24273 (code=exited, status=1/FAILURE)

Mar 22 12:53:34 dell-r210ii-04.example.com systemd[1]: Starting Virtual machine log manager...
Mar 22 12:53:34 dell-r210ii-04.example.com virtlogd[24273]: 2017-03-22 11:53:34.759+0000: 24273: info : libvirt version: 2.0.0, package: 10.el7_3.5 (Red Hat, Inc. <http://bugzilla.re...redhat.com)
Mar 22 12:53:34 dell-r210ii-04.example.com virtlogd[24273]: 2017-03-22 11:53:34.759+0000: 24273: info : hostname: dell-r210ii-04.example.com
Mar 22 12:53:34 dell-r210ii-04.example.com virtlogd[24273]: 2017-03-22 11:53:34.759+0000: 24273: error : main:1033 : Can't load config file: Failed to open file '/etc/libvirt/virtlog...rtlogd.conf
Mar 22 12:53:34 dell-r210ii-04.example.com systemd[1]: virtlogd.service: main process exited, code=exited, status=1/FAILURE
Mar 22 12:53:34 dell-r210ii-04.example.com systemd[1]: Unit virtlogd.service entered failed state.
Mar 22 12:53:34 dell-r210ii-04.example.com systemd[1]: virtlogd.service failed.
Mar 22 12:53:34 dell-r210ii-04.example.com systemd[1]: start request repeated too quickly for virtlogd.service
Mar 22 12:53:34 dell-r210ii-04.example.com systemd[1]: Failed to start Virtual machine log manager.
Mar 22 12:53:34 dell-r210ii-04.example.com systemd[1]: virtlogd.service failed.
Hint: Some lines were ellipsized, use -l to show in full.

Practically speaking, it seems we need virtlogd, so the vdsmd daemon should fail instead of giving a false positive that everything is great and running.
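
For illustration, one way to express such a hard dependency would be a systemd drop-in for vdsmd (a hypothetical sketch, not the actual vdsm fix):

# mkdir -p /etc/systemd/system/vdsmd.service.d
# cat > /etc/systemd/system/vdsmd.service.d/virtlogd.conf <<'EOF'
[Unit]
Requires=virtlogd.socket
After=virtlogd.socket
EOF
# systemctl daemon-reload

With something like this in place, vdsmd would refuse to start when virtlogd.socket cannot be activated, instead of coming up without it.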

[root@dell-r210ii-04 ~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2017-03-22 12:42:42 CET; 27min ago
  Process: 2020 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
 Main PID: 2771 (vdsm)
   CGroup: /system.slice/vdsmd.service
           ├─ 2771 /usr/bin/python2 /usr/share/vdsm/vdsm
           ├─23727 /usr/libexec/ioprocess --read-pipe-fd 75 --write-pipe-fd 74 --max-threads 10 --max-queued-requests 10
           ├─23757 /usr/libexec/ioprocess --read-pipe-fd 95 --write-pipe-fd 94 --max-threads 10 --max-queued-requests 10
           ├─23767 /usr/libexec/ioprocess --read-pipe-fd 103 --write-pipe-fd 102 --max-threads 10 --max-queued-requests 10
           └─23773 /usr/libexec/ioprocess --read-pipe-fd 110 --write-pipe-fd 108 --max-threads 10 --max-queued-requests 10

Mar 22 12:53:34 dell-r210ii-04.example.com vdsm[2771]: vdsm root WARN File: /var/lib/libvirt/qemu/channels/50e7f96b-d6b4-4e67-9622-95496fc8bac5.com.redhat.rhevm.vdsm already removed
Mar 22 12:53:34 dell-r210ii-04.example.com vdsm[2771]: vdsm root WARN File: /var/lib/libvirt/qemu/channels/50e7f96b-d6b4-4e67-9622-95496fc8bac5.org.qemu.guest_agent.0 already removed
Mar 22 12:58:34 dell-r210ii-04.example.com vdsm[2771]: vdsm root WARN File: /var/lib/libvirt/qemu/channels/50e7f96b-d6b4-4e67-9622-95496fc8bac5.com.redhat.rhevm.vdsm already removed
Mar 22 12:58:34 dell-r210ii-04.example.com vdsm[2771]: vdsm root WARN File: /var/lib/libvirt/qemu/channels/50e7f96b-d6b4-4e67-9622-95496fc8bac5.org.qemu.guest_agent.0 already removed
Mar 22 13:02:58 dell-r210ii-04.example.com vdsm[2771]: vdsm root WARN File: /var/lib/libvirt/qemu/channels/d03b6856-7000-44a0-9630-fa587addd141.com.redhat.rhevm.vdsm already removed
Mar 22 13:02:58 dell-r210ii-04.example.com vdsm[2771]: vdsm root WARN File: /var/lib/libvirt/qemu/channels/d03b6856-7000-44a0-9630-fa587addd141.org.qemu.guest_agent.0 already removed
Mar 22 13:03:38 dell-r210ii-04.example.com vdsm[2771]: vdsm root WARN File: /var/lib/libvirt/qemu/channels/50e7f96b-d6b4-4e67-9622-95496fc8bac5.com.redhat.rhevm.vdsm already removed
Mar 22 13:03:38 dell-r210ii-04.example.com vdsm[2771]: vdsm root WARN File: /var/lib/libvirt/qemu/channels/50e7f96b-d6b4-4e67-9622-95496fc8bac5.org.qemu.guest_agent.0 already removed
Mar 22 13:08:45 dell-r210ii-04.example.com vdsm[2771]: vdsm root WARN File: /var/lib/libvirt/qemu/channels/50e7f96b-d6b4-4e67-9622-95496fc8bac5.com.redhat.rhevm.vdsm already removed
Mar 22 13:08:45 dell-r210ii-04.example.com vdsm[2771]: vdsm root WARN File: /var/lib/libvirt/qemu/channels/50e7f96b-d6b4-4e67-9622-95496fc8bac5.org.qemu.guest_agent.0 already removed


Version-Release number of selected component (if applicable):
vdsm-4.19.7-1.el7ev.x86_64
redhat-virtualization-host-image-update-4.1-20170308.1.el7_3.noarch

How reproducible:
Consistently (later confirmed in comment 10).

Steps to Reproduce:
1. 4.0 ngn in a 4.0 engine (in fact this ngn was originally added to a 3.6
   engine with 3.5 clusters, which was then upgraded to a 4.0 engine and is now 4.1)
2. upgrade 4.0 ngn to 4.1 ngn
3. try to run a VM on this 4.1 ngn

Actual results:
VM <myvmname> is down with error. Exit message: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused.

Expected results:
The VM should start and run.

Additional info:
Mar 22 12:38:42 Installed: redhat-virtualization-host-image-update-4.1-20170308.1.el7_3.noarch
Mar 22 12:38:42 Erased: redhat-virtualization-host-image-update-placeholder-4.0-7.1.el7.noarch

Comment 3 Jiri Belka 2017-03-22 12:23:22 UTC
This impacts the upgrade from 4.0 to 4.1, as I cannot migrate running VMs to a 4.1 ngn host.

Comment 4 Jiri Belka 2017-03-22 12:27:24 UTC
(In reply to Jiri Belka from comment #3)
> This impacts the upgrade from 4.0 to 4.1, as I cannot migrate running VMs to
> a 4.1 ngn host.

This was still on a 4.0 cluster in a 4.1 engine.

Comment 5 Jiri Belka 2017-03-22 14:18:39 UTC
It's an SELinux issue...

# grep 'virtlogd.conf' /var/log/audit/audit.log | tail -n2 
type=AVC msg=audit(1490192124.105:195): avc:  denied  { read } for  pid=23995 comm="virtlogd" name="virtlogd.conf" dev="dm-6" ino=6947658 scontext=system_u:system_r:virtlogd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:virt_etc_t:s0 tclass=file
type=AVC msg=audit(1490192124.105:196): avc:  denied  { open } for  pid=23995 comm="virtlogd" path="/etc/libvirt/virtlogd.conf" dev="dm-6" ino=6947658 scontext=system_u:system_r:virtlogd_t:s0-s0:c0.c1023 tcontext=system_u:object_r:virt_etc_t:s0 tclass=file

and with SELinux permissive the VM runs OK:

[root@dell-r210ii-04 ~]# getenforce ; systemctl status virtlogd | grep Active
Permissive
   Active: active (running) since Wed 2017-03-22 15:15:24 CET; 2min 34s ago
[root@dell-r210ii-04 ~]# pgrep qemu-kvm
24141
[root@dell-r210ii-04 ~]# pgrep 'qemu-kvm.*ttt'
[root@dell-r210ii-04 ~]# pgrep -f 'qemu-kvm.*ttt'
24141
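
For reference, the stale label shown in the AVC denials above can be confirmed and fixed in place with standard SELinux triage (this is not the fix that went into imgbased, which relabels the new layer during the upgrade):

# ls -Z /etc/libvirt/virtlogd.conf
# restorecon -v /etc/libvirt/virtlogd.conf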

Comment 6 Jiri Belka 2017-03-22 14:35:02 UTC
I checked whether virtlogd is running on plain EL7 with 4.1 vdsm after an upgrade from 4.0 vdsm. It is not running, but... VMs start there without issue.

[root@dell-r210ii-13 ~]# systemctl list-unit-files | grep virtlogd
virtlogd.service                              indirect
virtlogd.socket                               enabled 
[root@dell-r210ii-13 ~]# systemctl status virtlogd
● virtlogd.service - Virtual machine log manager
   Loaded: loaded (/usr/lib/systemd/system/virtlogd.service; indirect; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:virtlogd(8)
           http://libvirt.org
[root@dell-r210ii-13 ~]# systemctl start virtlogd
[root@dell-r210ii-13 ~]# systemctl status virtlogd
● virtlogd.service - Virtual machine log manager
   Loaded: loaded (/usr/lib/systemd/system/virtlogd.service; indirect; vendor preset: disabled)
   Active: active (running) since Wed 2017-03-22 15:32:40 CET; 1s ago
     Docs: man:virtlogd(8)
           http://libvirt.org
 Main PID: 19605 (virtlogd)
   CGroup: /system.slice/virtlogd.service
           └─19605 /usr/sbin/virtlogd

Mar 22 15:32:40 dell-r210ii-13.example.com systemd[1]: Started Virtual machine log manager.
Mar 22 15:32:40 dell-r210ii-13.example.com systemd[1]: Starting Virtual machine log manager...

and no SELinux issues there...
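
(Note: virtlogd.service is listed as "indirect" because it is normally started on demand through virtlogd.socket, which is enabled. That socket activation is presumably why VMs start fine on plain EL7 even though the service shows inactive until first use; the socket can be checked with:

# systemctl status virtlogd.socket
)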

Comment 7 Sandro Bonazzola 2017-03-27 08:06:43 UTC
Ryan, can you please investigate?
Jiri, can you please attach a full sos report from the upgraded host?
I'm missing several pieces of information, including the audit logs.

Comment 8 Sandro Bonazzola 2017-03-27 08:08:06 UTC
Jiri, never mind, the audit entries you provided should be enough.

Comment 9 Sandro Bonazzola 2017-03-28 09:22:43 UTC
Jiri, is this reproducible consistently, or did it happen only once?
Can you also have Michal take a look at your install to see why the VM goes down?

Comment 10 Jiri Belka 2017-03-29 11:24:25 UTC
(In reply to Sandro Bonazzola from comment #9)
> Jiri, is this reproducible consistently or happened once only?
> Can you have also Michal have a look at your install to see why the VM gets
> down?

Yes, it reproduces again, between rhvh-4.0-0.20170307.0 and rhvh-4.1-0.20170328.0.

Comment 12 Huijuan Zhao 2017-04-01 07:19:11 UTC
I can reproduce this issue.

My detailed test steps are below:

Test version:
1. RHVH
From: RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso
To: redhat-virtualization-host-4.1-20170308.1
redhat-virtualization-host-image-update-4.1-20170308.1.el7_3.noarch
vdsm-4.19.7-1.el7ev.x86_64

2. Engine
From: 3.6.11-0.1.el6
To:   4.0.7.4-0.1.el7ev

Test steps:
1. Install RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso
2. Add RHVH-4.0-20170308.0 to engine 3.6 (3.5 cluster)
   2.1 Check management network in rhvh-4.0 with "# ip address show", its name is "rhevm"
   2.2 Check the vdsmd and virtlogd status
3. Remove RHVH-4.0 from engine 3.6, then add RHVH-4.0 to engine 4.0
   3.0 Check the vdsmd and virtlogd status
   Below are the detailed steps in the engine 4.0 UI (4.0 cluster):
   3.1 Create Datacenter
   3.2 Create new network named "rhevm" in Networks tab in engine4.0 UI
   3.3 Create cluster using "rhevm" as Management Network in Cluster tab
   3.4 Add RHVH-4.0 to cluster created in step 3.3
        Note: When using the default management network "ovirtmgmt", adding RHVH-4.0 to engine 4.0 failed, because the management network on RHVH-4.0 is "rhevm", created in step 2. Error info like below:
       “Host dell-per730-34 does not comply with the cluster huzhao_intel networks, the following networks are missing on host: 'ovirtmgmt'”
   3.5 Add data Domain to RHVH-4.0(NFS storage)
   3.6 Add iso Domain to RHVH-4.0(NFS storage)
   3.7 Create a VM on RHVH-4.0 in the engine UI; the VM runs successfully
4. Log in to RHVH-4.0, download redhat-virtualization-host-image-update-4.1-20170308.1.el7_3.noarch.rpm, and update RHVH-4.0 to RHVH-4.1:
   # yum install redhat-virtualization-host-image-update-4.1-20170308.1.el7_3.noarch.rpm
5. Reboot and log in to the new RHVH-4.1 build (see the note after this list)
6. Run the VM created in step 3.7 in the engine 4.0 UI
7. Check the vdsmd and virtlogd status
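
(Note for step 5: the currently booted imgbased layer can be confirmed with the imgbase CLI, assuming imgbased 0.9.x behavior:

# imgbase w
)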

Actual results:
1. After step 2.2, vdsmd is active, virtlogd is inactive
[root@dell-per730-34 network-scripts]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2017-04-01 03:27:14 GMT; 9min ago
  Process: 21545 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
 Main PID: 21629 (vdsm)
   CGroup: /system.slice/vdsmd.service
           └─21629 /usr/bin/python /usr/share/vdsm/vdsm

Apr 01 03:34:11 dell-per730-34.lab.eng.pek2.redhat.com vdsm[21629]: vdsm ProtocolDetector.SSLHandshakeDispatcher ERROR Error during handshake: unexpected eof
Apr 01 03:34:20 dell-per730-34.lab.eng.pek2.redhat.com vdsm[21629]: vdsm ProtocolDetector.SSLHandshakeDispatcher ERROR Error during handshake: unexpected eof
Apr 01 03:34:34 dell-per730-34.lab.eng.pek2.redhat.com vdsm[21629]: vdsm ProtocolDetector.SSLHandshakeDispatcher ERROR Error during handshake: unexpected eof
Apr 01 03:34:55 dell-per730-34.lab.eng.pek2.redhat.com vdsm[21629]: vdsm ProtocolDetector.SSLHandshakeDispatcher ERROR Error during handshake: unexpected eof
Apr 01 03:34:57 dell-per730-34.lab.eng.pek2.redhat.com vdsm[21629]: vdsm ProtocolDetector.SSLHandshakeDispatcher ERROR Error during handshake: unexpected eof
Apr 01 03:35:20 dell-per730-34.lab.eng.pek2.redhat.com vdsm[21629]: vdsm ProtocolDetector.SSLHandshakeDispatcher ERROR Error during handshake: unexpected eof
Apr 01 03:35:30 dell-per730-34.lab.eng.pek2.redhat.com vdsm[21629]: vdsm ProtocolDetector.SSLHandshakeDispatcher ERROR Error during handshake: unexpected eof
Apr 01 03:35:43 dell-per730-34.lab.eng.pek2.redhat.com vdsm[21629]: vdsm ProtocolDetector.SSLHandshakeDispatcher ERROR Error during handshake: unexpected eof
Apr 01 03:36:05 dell-per730-34.lab.eng.pek2.redhat.com vdsm[21629]: vdsm ProtocolDetector.SSLHandshakeDispatcher ERROR Error during handshake: unexpected eof
Apr 01 03:36:06 dell-per730-34.lab.eng.pek2.redhat.com vdsm[21629]: vdsm ProtocolDetector.SSLHandshakeDispatcher ERROR Error during handshake: unexpected eof

[root@dell-per730-34 network-scripts]# systemctl status virtlogd
● virtlogd.service - Virtual machine log manager
   Loaded: loaded (/usr/lib/systemd/system/virtlogd.service; indirect; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:virtlogd(8)
           http://libvirt.org

2. After step 3.0, same as in step 2.2.

3. After step 6, running the VM failed. It reports:
VM huzhao_vm1 is down with error. Exit message: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': Connection refused.

4. After step 7, vdsmd is active, but virtlogd failed to start.
[root@dell-per730-34 ~]# systemctl status virtlogd
● virtlogd.service - Virtual machine log manager
   Loaded: loaded (/usr/lib/systemd/system/virtlogd.service; indirect; vendor preset: disabled)
   Active: failed (Result: start-limit) since Sat 2017-04-01 06:39:17 GMT; 54s ago
     Docs: man:virtlogd(8)
           http://libvirt.org
  Process: 24445 ExecStart=/usr/sbin/virtlogd $VIRTLOGD_ARGS (code=exited, status=1/FAILURE)
 Main PID: 24445 (code=exited, status=1/FAILURE)

Apr 01 06:39:17 dell-per730-34.lab.eng.pek2.redhat.com systemd[1]: Starting Virtual machine log manager...
Apr 01 06:39:17 dell-per730-34.lab.eng.pek2.redhat.com virtlogd[24445]: 2017-04-01 06:39:17.845+0000: 24445: info : libvirt version: 2.0.0, package: 10.el7_...at.com)
Apr 01 06:39:17 dell-per730-34.lab.eng.pek2.redhat.com virtlogd[24445]: 2017-04-01 06:39:17.845+0000: 24445: info : hostname: dell-per730-34.lab.eng.pek2.redhat.com
Apr 01 06:39:17 dell-per730-34.lab.eng.pek2.redhat.com virtlogd[24445]: 2017-04-01 06:39:17.845+0000: 24445: error : main:1033 : Can't load config file: Fai...gd.conf
Apr 01 06:39:17 dell-per730-34.lab.eng.pek2.redhat.com systemd[1]: virtlogd.service: main process exited, code=exited, status=1/FAILURE
Apr 01 06:39:17 dell-per730-34.lab.eng.pek2.redhat.com systemd[1]: Unit virtlogd.service entered failed state.
Apr 01 06:39:17 dell-per730-34.lab.eng.pek2.redhat.com systemd[1]: virtlogd.service failed.
Apr 01 06:39:17 dell-per730-34.lab.eng.pek2.redhat.com systemd[1]: start request repeated too quickly for virtlogd.service
Apr 01 06:39:17 dell-per730-34.lab.eng.pek2.redhat.com systemd[1]: Failed to start Virtual machine log manager.
Apr 01 06:39:17 dell-per730-34.lab.eng.pek2.redhat.com systemd[1]: virtlogd.service failed.
Hint: Some lines were ellipsized, use -l to show in full.

[root@dell-per730-34 ~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2017-04-01 06:31:39 GMT; 8min ago
  Process: 2282 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
 Main PID: 4008 (vdsm)
   CGroup: /system.slice/vdsmd.service
           ├─ 4008 /usr/bin/python2 /usr/share/vdsm/vdsm
           ├─10940 /usr/libexec/ioprocess --read-pipe-fd 76 --write-pipe-fd 74 --max-threads 10 --max-queued-requests 10
           └─24173 /usr/libexec/ioprocess --read-pipe-fd 60 --write-pipe-fd 59 --max-threads 10 --max-queued-requests 10

Apr 01 06:39:00 dell-per730-34.lab.eng.pek2.redhat.com vdsm[4008]: vdsm ProtocolDetector.SSLHandshakeDispatcher ERROR Error during handshake: unexpected eof
Apr 01 06:39:19 dell-per730-34.lab.eng.pek2.redhat.com vdsm[4008]: vdsm root WARN File: /var/lib/libvirt/qemu/channels/bbc41099-7ad7-477e-84af-68496f907780....removed
Apr 01 06:39:19 dell-per730-34.lab.eng.pek2.redhat.com vdsm[4008]: vdsm root WARN File: /var/lib/libvirt/qemu/channels/bbc41099-7ad7-477e-84af-68496f907780....removed
Apr 01 06:39:23 dell-per730-34.lab.eng.pek2.redhat.com vdsm[4008]: vdsm ProtocolDetector.SSLHandshakeDispatcher ERROR Error during handshake: unexpected eof
Apr 01 06:39:31 dell-per730-34.lab.eng.pek2.redhat.com vdsm[4008]: vdsm ProtocolDetector.SSLHandshakeDispatcher ERROR Error during handshake: unexpected eof
Apr 01 06:39:46 dell-per730-34.lab.eng.pek2.redhat.com vdsm[4008]: vdsm ProtocolDetector.SSLHandshakeDispatcher ERROR Error during handshake: unexpected eof
Apr 01 06:39:55 dell-per730-34.lab.eng.pek2.redhat.com vdsm[4008]: vdsm root WARN File: /var/lib/libvirt/qemu/channels/bbc41099-7ad7-477e-84af-68496f907780....removed
Apr 01 06:39:55 dell-per730-34.lab.eng.pek2.redhat.com vdsm[4008]: vdsm root WARN File: /var/lib/libvirt/qemu/channels/bbc41099-7ad7-477e-84af-68496f907780....removed
Apr 01 06:40:06 dell-per730-34.lab.eng.pek2.redhat.com vdsm[4008]: vdsm ProtocolDetector.SSLHandshakeDispatcher ERROR Error during handshake: unexpected eof
Apr 01 06:40:09 dell-per730-34.lab.eng.pek2.redhat.com vdsm[4008]: vdsm ProtocolDetector.SSLHandshakeDispatcher ERROR Error during handshake: unexpected eof
Hint: Some lines were ellipsized, use -l to show in full.

Expected results:
1. After step 6, the VM should run successfully.
2. After step 7, virtlogd should be active.

Comment 13 Huijuan Zhao 2017-04-05 10:53:16 UTC
Test version:
1. RHVH
From: RHVH-4.0-20170307.0
To: redhat-virtualization-host-4.1-20170403.0
redhat-virtualization-host-image-update-4.1-20170403.0.el7_3.noarch.rpm
vdsm-4.19.10.1-1.el7ev.x86_64
imgbased-0.9.20-0.1.el7ev.noarch

2. Engine
From: 3.6.11-0.1.el6
To:   4.0.7.4-0.1.el7ev

Test steps:
1. Install RHVH-4.0-20170307.0
2. Add RHVH-4.0-20170307.0 to engine 3.6 (3.5 cluster)
3. Remove RHVH-4.0 from engine 3.6, then add RHVH-4.0 to engine 4.0; create a VM on RHVH-4.0 in the engine UI; the VM runs successfully
4. Log in to RHVH-4.0, download redhat-virtualization-host-image-update-4.1-20170403.0.el7_3.noarch.rpm, and update RHVH-4.0 to RHVH-4.1:
   # yum install redhat-virtualization-host-image-update-4.1-20170403.0.el7_3.noarch.rpm
5. Reboot and log in to the new RHVH-4.1 build
6. Run the VM in the engine 4.0 UI
7. Check the vdsmd and virtlogd status

Test results:
1. After step 6, the VM ran successfully.
2. After step 7, vdsmd and virtlogd are both active.

[root@dhcp-10-16 ~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2017-04-05 10:40:10 GMT; 6min ago
  Process: 2062 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
 Main PID: 3698 (vdsm)
   CGroup: /system.slice/vdsmd.service
           ├─ 3698 /usr/bin/python2 /usr/share/vdsm/vdsm
           ├─ 8787 /usr/libexec/ioprocess --read-pipe-fd 79 --write-pipe-fd 78 --max-threads 10 --max-queued-requests 10
           └─23891 /usr/libexec/ioprocess --read-pipe-fd 61 --write-pipe-fd 60 --max-threads 10 --max-queued-requests 10

Apr 05 10:40:12 dhcp-10-16.nay.redhat.com python2[3698]: DIGEST-MD5 client step 1
Apr 05 10:40:12 dhcp-10-16.nay.redhat.com python2[3698]: DIGEST-MD5 ask_user_info()
Apr 05 10:40:12 dhcp-10-16.nay.redhat.com python2[3698]: DIGEST-MD5 make_client_response()
Apr 05 10:40:12 dhcp-10-16.nay.redhat.com python2[3698]: DIGEST-MD5 client step 2
Apr 05 10:40:12 dhcp-10-16.nay.redhat.com python2[3698]: DIGEST-MD5 parse_server_challenge()
Apr 05 10:40:12 dhcp-10-16.nay.redhat.com python2[3698]: DIGEST-MD5 ask_user_info()
Apr 05 10:40:12 dhcp-10-16.nay.redhat.com python2[3698]: DIGEST-MD5 make_client_response()
Apr 05 10:40:12 dhcp-10-16.nay.redhat.com python2[3698]: DIGEST-MD5 client step 3
Apr 05 10:40:12 dhcp-10-16.nay.redhat.com vdsm[3698]: vdsm MOM WARN MOM not available.
Apr 05 10:40:12 dhcp-10-16.nay.redhat.com vdsm[3698]: vdsm MOM WARN MOM not available, KSM stats will be missing.

[root@dhcp-10-16 ~]# systemctl status virtlogd
● virtlogd.service - Virtual machine log manager
   Loaded: loaded (/usr/lib/systemd/system/virtlogd.service; indirect; vendor preset: disabled)
   Active: active (running) since Wed 2017-04-05 10:43:35 GMT; 3min 12s ago
     Docs: man:virtlogd(8)
           http://libvirt.org
 Main PID: 23938 (virtlogd)
   CGroup: /system.slice/virtlogd.service
           └─23938 /usr/sbin/virtlogd

Apr 05 10:43:35 dhcp-10-16.nay.redhat.com systemd[1]: Started Virtual machine log manager.
Apr 05 10:43:35 dhcp-10-16.nay.redhat.com systemd[1]: Starting Virtual machine log manager...
Apr 05 10:43:35 dhcp-10-16.nay.redhat.com virtlogd[23938]: libvirt version: 2.0.0, package: 10.el7_3.5 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>...hat.com)
Apr 05 10:43:35 dhcp-10-16.nay.redhat.com virtlogd[23938]: hostname: dhcp-10-16.nay.redhat.com
Apr 05 10:43:35 dhcp-10-16.nay.redhat.com virtlogd[23938]: End of file while reading data: Input/output error
Hint: Some lines were ellipsized, use -l to show in full.


So this bug is fixed in imgbased-0.9.20-0.1.el7ev.noarch; changing the status to VERIFIED.

Comment 14 Red Hat Bugzilla Rules Engine 2017-04-06 07:35:09 UTC
Target release should be set once a package build is known to fix an issue. Since this bug is not in MODIFIED, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Comment 15 Yuval Turgeman 2017-04-06 07:36:48 UTC
This needs some more work, moving back to ASSIGNED.

Comment 16 Huijuan Zhao 2017-04-14 07:02:22 UTC
Still encountered an issue while verifying this bug with imgbased-0.9.22-0.1.el7ev.noarch, so I have to move this bug back to ASSIGNED.

Test version:
1. RHVH
From: RHVH-4.0-20170308.0-RHVH-x86_64-dvd1.iso
To: redhat-virtualization-host-4.1-20170413.0
    imgbased-0.9.22-0.1.el7ev.noarch

2. Engine
From: 3.6.11-0.1.el6
To:   4.0.7.4-0.1.el7ev


Test steps:
Same as in comment 12.


Actual results:
After step 5, the rhvh-4.1-20170413.0 boot entry is missing, so the new build cannot be accessed.


Expected results:
After step 5, there should be an rhvh-4.1-20170413.0 boot entry, and the new build should be accessible.
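
(For reference, the available imgbased layers and the grub boot entries can be inspected with something like the following; the grub.cfg path assumes a BIOS host:

# imgbase layout
# grep ^menuentry /boot/grub2/grub.cfg
)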

Comment 17 Huijuan Zhao 2017-04-14 07:06:15 UTC
Created attachment 1271629 [details]
comment 16: All logs in /tmp and /var/log, sosreport

Comment 18 Huijuan Zhao 2017-04-18 16:02:54 UTC
Test version:
1. RHVH
From: RHVH-4.0-20170307.0
To: redhat-virtualization-host-4.1-20170417.0
redhat-virtualization-host-image-update-4.1-20170417.0.el7_3.noarch.rpm
vdsm-4.19.10.1-1.el7ev.x86_64
imgbased-0.9.23-0.1.el7ev.noarch

2. Engine
From: 3.6.11-0.1.el6
To:   4.0.7.4-0.1.el7ev


Test steps:
Same as in comment 12.


Test results:
1. After step 6, the VM ran successfully.
2. After step 7, virtlogd is active.

# systemctl status virtlogd
● virtlogd.service - Virtual machine log manager
   Loaded: loaded (/usr/lib/systemd/system/virtlogd.service; indirect; vendor preset: disabled)
   Active: active (running) since Tue 2017-04-18 15:52:51 GMT; 5min ago
     Docs: man:virtlogd(8)
           http://libvirt.org
 Main PID: 33591 (virtlogd)
   CGroup: /system.slice/virtlogd.service
           └─33591 /usr/sbin/virtlogd

Apr 18 15:52:51 dell-per730-35.lab.eng.pek2.redhat.com systemd[1]: Started Virtual machine log manager.
Apr 18 15:52:51 dell-per730-35.lab.eng.pek2.redhat.com systemd[1]: Starting Virtual machine log manager...
Apr 18 15:52:52 dell-per730-35.lab.eng.pek2.redhat.com virtlogd[33591]: libvirt version: 2.0.0, package: 10.el7_3.5 (Red Hat, Inc. <http://bugzilla.redhat.com/bug...hat.com)
Apr 18 15:52:52 dell-per730-35.lab.eng.pek2.redhat.com virtlogd[33591]: hostname: dell-per730-35.lab.eng.pek2.redhat.com
Apr 18 15:52:52 dell-per730-35.lab.eng.pek2.redhat.com virtlogd[33591]: End of file while reading data: Input/output error
Hint: Some lines were ellipsized, use -l to show in full.


So this bug is fixed in imgbased-0.9.23-0.1.el7ev.noarch; I will change the status to VERIFIED once it is moved to ON_QA.

Comment 19 Huijuan Zhao 2017-04-19 01:57:52 UTC
Changing the status to VERIFIED per comment 18.

Comment 20 Yuval Turgeman 2017-04-23 05:33:28 UTC
*** Bug 1443752 has been marked as a duplicate of this bug. ***