Bug 1209836 - vdsm service becomes inactive (dead) after setting up the RHEV-H 7.1 network via the TUI
Summary: vdsm service becomes inactive (dead) after setting up the RHEV-H 7.1 network via the TUI
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-node
Version: 3.5.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.6.0
Assignee: Douglas Schilling Landgraf
QA Contact: Virtualization Bugs
URL:
Whiteboard: node
Depends On:
Blocks:
 
Reported: 2015-04-08 10:04 UTC by Ying Cui
Modified: 2016-02-10 20:10 UTC
CC List: 13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-06-16 14:32:16 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:


Attachments
varlog.tar.gz (102.67 KB, application/x-gzip)
2015-04-08 10:24 UTC, Ying Cui


Links
System: oVirt gerrit 42239 (master)
Status: MERGED
Summary: defaults: Add vdsmd to stop/start in network conf
Last Updated: 2020-10-05 22:09:55 UTC

Description Ying Cui 2015-04-08 10:04:07 UTC
Description of problem:
vdsmd is running by default after RHEV-H 7.1 installation. However, after setting up the RHEV-H network via the TUI, checking vdsmd shows it is inactive (dead).

---- after RHEV-H installation and boot, before network setup ----
# systemctl status vdsmd.service
vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
   Active: active (running) since Wed 2015-04-08 09:15:43 UTC; 14min ago
  Process: 15869 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
 Main PID: 16002 (vdsm)
   CGroup: /system.slice/vdsmd.service
           └─16002 /usr/bin/python /usr/share/vdsm/vdsm

Apr 08 09:15:43 localhost vdsmd_init_common.sh[15869]: vdsm: Running test_space
Apr 08 09:15:43 localhost vdsmd_init_common.sh[15869]: vdsm: Running test_lo
Apr 08 09:15:43 localhost systemd[1]: Started Virtual Desktop Server Manager.
Apr 08 09:15:45 localhost python[16002]: DIGEST-MD5 client step 2
Apr 08 09:15:45 localhost python[16002]: DIGEST-MD5 parse_server_challenge()
Apr 08 09:15:45 localhost python[16002]: DIGEST-MD5 ask_user_info()
Apr 08 09:15:45 localhost python[16002]: DIGEST-MD5 client step 2
Apr 08 09:15:45 localhost python[16002]: DIGEST-MD5 ask_user_info()
Apr 08 09:15:45 localhost python[16002]: DIGEST-MD5 make_client_response()
Apr 08 09:15:45 localhost python[16002]: DIGEST-MD5 client step 3
--------------------------------------------------------

---- after network setup via the RHEV-H TUI -------------------
# systemctl status vdsmd.service
vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
   Active: inactive (dead) since Wed 2015-04-08 09:30:22 UTC; 11min ago
  Process: 16894 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
  Process: 16002 ExecStart=/usr/share/vdsm/daemonAdapter -0 /dev/null -1 /dev/null -2 /dev/null /usr/share/vdsm/vdsm (code=exited, status=0/SUCCESS)
  Process: 15869 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
 Main PID: 16002 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/vdsmd.service

Apr 08 09:15:45 localhost python[16002]: DIGEST-MD5 parse_server_challenge()
Apr 08 09:15:45 localhost python[16002]: DIGEST-MD5 ask_user_info()
Apr 08 09:15:45 localhost python[16002]: DIGEST-MD5 client step 2
Apr 08 09:15:45 localhost python[16002]: DIGEST-MD5 ask_user_info()
Apr 08 09:15:45 localhost python[16002]: DIGEST-MD5 make_client_response()
Apr 08 09:15:45 localhost python[16002]: DIGEST-MD5 client step 3
Apr 08 09:30:17 localhost systemd[1]: Stopping Virtual Desktop Server Manager...
Apr 08 09:30:22 localhost python[16002]: DIGEST-MD5 client mech dispose
Apr 08 09:30:22 localhost python[16002]: DIGEST-MD5 common mech dispose
Apr 08 09:30:22 localhost vdsmd_init_common.sh[16894]: vdsm: Running run_final_hooks
Apr 08 09:30:22 localhost systemd[1]: Stopped Virtual Desktop Server Manager.
--------------------------------------------------------

Version-Release number of selected component (if applicable):
# rpm -q ovirt-node ovirt-node-plugin-vdsm vdsm 
ovirt-node-3.2.2-3.el7.noarch
ovirt-node-plugin-vdsm-0.2.0-20.el7ev.noarch
vdsm-4.16.13-1.el7ev.x86_64
# cat /etc/system-release
Red Hat Enterprise Virtualization Hypervisor 7.1 (20150402.0.el7ev)

How reproducible:
100%

Steps to Reproduce:
1. Install RHEV-H 7.1.
2. Boot RHEV-H 7.1.
3. Log in to RHEV-H 7.1.
4. Check vdsmd.service status; it is running.
5. Set up the network via the RHEV-H TUI.
6. Check vdsmd.service status; it is inactive (dead). (A minimal shell check for steps 4 and 6 is sketched after this list.)
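
A minimal shell sketch of the checks in steps 4 and 6, assuming systemctl's is-active verb (which prints the unit's current state); the TUI step itself is interactive and is only indicated by the placeholder line:

# systemctl is-active vdsmd.service
active
  ... step 5: set up the network via the RHEV-H TUI ...
# systemctl is-active vdsmd.service
inactive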

Actual results:
vdsmd is inactive (dead) after setting up the RHEV-H 7.1 network via the TUI.

Expected results:
If RHEV-H 7.1 runs vdsmd by default, then network setup should leave vdsmd running afterwards.
OR: vdsmd should not run by default after RHEV-H 7.1 installation, while the node is not yet managed by RHEV-M.

Comment 1 Ying Cui 2015-04-08 10:24:13 UTC
Feel free to change to the correct component once we find the solution.

Comment 2 Ying Cui 2015-04-08 10:24:50 UTC
Created attachment 1012162 [details]
varlog.tar.gz

Comment 3 Fabian Deutsch 2015-04-23 14:55:29 UTC
We've seen this before.
vdsmd is not coming up because libvirtd is not running.

Ying, can you please confirm that this is the case? If so, this is (yet again) a symptom of bug 1213730 or similar.

Comment 4 Ying Cui 2015-04-24 04:49:17 UTC
(In reply to Fabian Deutsch from comment #3)
> We've seen this before.
> vdsmd is not coming up because libvirtd is not running.
> 
> Ying, can you please confirm that this is the case? If so, this is (yet
> again) a symptom of bug 1213730 or similar.

libvirtd is running, but vdsmd is not coming up. From the symptoms, this does not look like bug 1213730.

Test version:
# cat /etc/rhev-hypervisor-release 
Red Hat Enterprise Virtualization Hypervisor 7.1 (20150420.0.el7ev)
# rpm -q ovirt-node vdsm
ovirt-node-3.2.2-3.el7.noarch
vdsm-4.16.13.1-1.el7ev.x86_64

--------
# systemctl status libvirtd.service
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Fri 2015-04-24 03:56:28 UTC; 41min ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 12140 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           └─12140 /usr/sbin/libvirtd --listen

Apr 24 03:56:28 localhost systemd[1]: Started Virtualization daemon.
--------

--------
# systemctl status vdsmd.service
vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
   Active: inactive (dead) since Fri 2015-04-24 04:35:28 UTC; 2min 26s ago
  Process: 17454 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
  Process: 16005 ExecStart=/usr/share/vdsm/daemonAdapter -0 /dev/null -1 /dev/null -2 /dev/null /usr/share/vdsm/vdsm (code=exited, status=0/SUCCESS)
  Process: 15873 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
 Main PID: 16005 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/vdsmd.service

Apr 24 03:56:41 localhost python[16005]: DIGEST-MD5 parse_server_challenge()
Apr 24 03:56:41 localhost python[16005]: DIGEST-MD5 ask_user_info()
Apr 24 03:56:41 localhost python[16005]: DIGEST-MD5 client step 2
Apr 24 03:56:41 localhost python[16005]: DIGEST-MD5 ask_user_info()
Apr 24 03:56:41 localhost python[16005]: DIGEST-MD5 make_client_response()
Apr 24 03:56:41 localhost python[16005]: DIGEST-MD5 client step 3
Apr 24 04:35:25 localhost systemd[1]: Stopping Virtual Desktop Server Manager...
Apr 24 04:35:28 localhost python[16005]: DIGEST-MD5 client mech dispose
Apr 24 04:35:28 localhost python[16005]: DIGEST-MD5 common mech dispose
Apr 24 04:35:28 localhost vdsmd_init_common.sh[17454]: vdsm: Running run_final_hooks
Apr 24 04:35:28 localhost systemd[1]: Stopped Virtual Desktop Server Manager.

--------

Comment 5 Fabian Deutsch 2015-04-24 05:10:12 UTC
Thanks for the clarification.

Looking at the description, it is clearly something other than what was assumed in comment 3.

However, I do not see a heavy impact on functionality.

Comment 6 Douglas Schilling Landgraf 2015-06-11 17:52:59 UTC
Hi Ying,

Below is my investigation:

During network configuration we stop and start the following services:
network, ntpd, ntpdate, rpcbind, nfslock, rpcidmapd, nfs-idmapd, rpcgssd

The VDSM systemd unit file contains:
Requires=multipathd.service libvirtd.service time-sync.target \
         iscsid.service rpcbind.service supervdsmd.service sanlock.service

The key service in this case is rpcbind: it is stopped and started, and during the stop action vdsm receives a terminate signal because a required service went down [1]. This behavior is not exclusive to RHEV-H; on EL7 or Fedora platforms, stopping any required service will likewise take the vdsm service down, as demonstrated below.
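
For example, on any systemd-based host where both units are running, the mechanism can be shown directly. A sketch (note: it deliberately stops rpcbind, so do not run this on a production host):

# systemctl is-active vdsmd.service rpcbind.service
active
active
# systemctl stop rpcbind.service
# systemctl is-active vdsmd.service
inactive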

Anyway, in the RHEV-H scenario this report shouldn't be a problem in most cases, because host-deploy starts VDSM during host deployment. However, I see one case where this can be triggered and take the host down: if the host is deployed and UP and the user changes a network setting via the TUI, vdsm will be down (and the host down) afterwards.

[1] Requires=
    Configures requirement dependencies on other units. If this unit gets activated, the units listed here will be activated as well. If one of the other units gets deactivated or its activation fails, this unit will be deactivated.
http://www.freedesktop.org/software/systemd/man/systemd.unit.html

Comment 7 Fabian Deutsch 2015-06-16 13:44:11 UTC
Lowering the priority because there is no functional impact.

Comment 8 Douglas Schilling Landgraf 2015-06-16 14:32:16 UTC
The scenario I described in comment #6 won't happen because, as soon as the node gets registered and approved, network changes via the TUI are no longer possible. Additionally, host-deploy will start VDSM as soon as the node is approved, so this report has no impact.

