Bug 1209836
| Field | Value | Field | Value |
|---|---|---|---|
| Summary | vdsm service is changed to inactive (dead) after setup rhevh 7.1 network via TUI | | |
| Product | Red Hat Enterprise Virtualization Manager | Reporter | Ying Cui <ycui> |
| Component | ovirt-node | Assignee | Douglas Schilling Landgraf <dougsland> |
| Status | CLOSED WONTFIX | QA Contact | Virtualization Bugs <virt-bugs> |
| Severity | medium | Docs Contact | |
| Priority | medium | | |
| Version | 3.5.1 | CC | cshao, ecohen, fdeutsch, gklein, hadong, huiwa, leiwang, lsurette, pnovotny, pstehlik, rbarry, yaniwang, ycui |
| Target Milestone | --- | | |
| Target Release | 3.6.0 | | |
| Hardware | Unspecified | | |
| OS | Unspecified | | |
| Whiteboard | node | | |
| Fixed In Version | | Doc Type | Bug Fix |
| Doc Text | | Story Points | --- |
| Clone Of | | Environment | |
| Last Closed | 2015-06-16 14:32:16 UTC | Type | Bug |
| Regression | --- | Mount Type | --- |
| Documentation | --- | CRM | |
| Verified Versions | | Category | --- |
| oVirt Team | Node | RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- | Target Upstream Version | |
| Embargoed | | | |
| Attachments | varlog.tar.gz (attachment 1012162) | | |
Description
Ying Cui
2015-04-08 10:04:07 UTC
Created attachment 1012162 [details]: varlog.tar.gz

Feel free to change to the correct component when we find the solution.
We've seen this before. vdsmd is not coming up because libvirtd is not running.

Ying, can you please confirm that this is the case? Then this is a symptom of (yet again) bug 1213730 or alike.

(In reply to Fabian Deutsch from comment #3)
> We've seen this before.
> vdsmd is not coming up because libvirtd is not running.
>
> Ying, can you please confirm that this is the case? Then this is a symptom
> of (yet again) bug 1213730 or alike.

libvirtd is running, but vdsmd is not coming up. From the appearance, it does not look like bug 1213730.

Test version:

# cat /etc/rhev-hypervisor-release
Red Hat Enterprise Virtualization Hypervisor 7.1 (20150420.0.el7ev)
# rpm -q ovirt-node vdsm
ovirt-node-3.2.2-3.el7.noarch
vdsm-4.16.13.1-1.el7ev.x86_64

--------
# systemctl status libvirtd.service
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Fri 2015-04-24 03:56:28 UTC; 41min ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 12140 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           └─12140 /usr/sbin/libvirtd --listen

Apr 24 03:56:28 localhost systemd[1]: Started Virtualization daemon.
--------

--------
# systemctl status vdsmd.service
vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
   Active: inactive (dead) since Fri 2015-04-24 04:35:28 UTC; 2min 26s ago
  Process: 17454 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
  Process: 16005 ExecStart=/usr/share/vdsm/daemonAdapter -0 /dev/null -1 /dev/null -2 /dev/null /usr/share/vdsm/vdsm (code=exited, status=0/SUCCESS)
  Process: 15873 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
 Main PID: 16005 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/vdsmd.service

Apr 24 03:56:41 localhost python[16005]: DIGEST-MD5 parse_server_challenge()
Apr 24 03:56:41 localhost python[16005]: DIGEST-MD5 ask_user_info()
Apr 24 03:56:41 localhost python[16005]: DIGEST-MD5 client step 2
Apr 24 03:56:41 localhost python[16005]: DIGEST-MD5 ask_user_info()
Apr 24 03:56:41 localhost python[16005]: DIGEST-MD5 make_client_response()
Apr 24 03:56:41 localhost python[16005]: DIGEST-MD5 client step 3
Apr 24 04:35:25 localhost systemd[1]: Stopping Virtual Desktop Server Manager...
Apr 24 04:35:28 localhost python[16005]: DIGEST-MD5 client mech dispose
Apr 24 04:35:28 localhost python[16005]: DIGEST-MD5 common mech dispose
Apr 24 04:35:28 localhost vdsmd_init_common.sh[17454]: vdsm: Running run_final_hooks
Apr 24 04:35:28 localhost systemd[1]: Stopped Virtual Desktop Server Manager.
--------

Thanks for the clarification. Looking at the description, this is obviously something other than what was assumed in comment 3. But I do not see a heavy impact on functionality.

Hi Ying,

Below is my investigation. While configuring the network, the TUI stops and starts the following services: network, ntpd, ntpdate, rpcbind, nfslock, rpcidmapd, nfs-idmapd, rpcgssd.

The VDSM systemd unit file contains:

Requires=multipathd.service libvirtd.service time-sync.target \
         iscsid.service rpcbind.service supervdsmd.service sanlock.service

The key service in this case is rpcbind: it is stopped and started, and during the stop action vdsm is terminated because a required service goes down [1]. This is not behavior exclusive to RHEV-H; if you stop any required service on an EL7 or Fedora platform, you will see the vdsm service go down as well.
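To make that dependency chain concrete, here is a minimal reproduction sketch. It is not taken from the original report; it assumes an EL7 host with vdsm installed and uses only the services named above:

```sh
# Hypothetical reproduction of the failure mode described above.
# vdsmd.service lists rpcbind.service in Requires=, so stopping
# rpcbind propagates a stop job to vdsmd.

systemctl start vdsmd.service
systemctl is-active vdsmd.service   # expected: active

# The TUI network setup stops/starts rpcbind (among other services).
systemctl stop rpcbind.service
systemctl is-active vdsmd.service   # expected: inactive

# Starting rpcbind again does not bring vdsmd back by itself; vdsmd has
# to be started explicitly (host-deploy normally does this on approval).
systemctl start rpcbind.service
systemctl is-active vdsmd.service   # still inactive until started again
```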
Anyway, in the RHEV-H scenario this report shouldn't be a problem in most cases, because host-deploy starts VDSM during host deployment. However, I see one case where this report can be triggered and put the host down: if the host is already deployed and UP and the user changes a network setting via the TUI, vdsm will be down (and the host down) after that.

[1] Requires=
    Configures requirement dependencies on other units. If this unit gets activated, the units listed here will be activated as well. If one of the other units gets deactivated or its activation fails, this unit will be deactivated.
    http://www.freedesktop.org/software/systemd/man/systemd.unit.html

Lowering the priority because there is no functional impact.

The scenario that I have described in comment#6 won't happen, because as soon as the node gets registered and approved, network changes via the TUI should no longer be possible. Additionally, host-deploy will start VDSM as soon as the node is approved, so this report won't have any impact.
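For reference, the Requires= stop propagation quoted in [1] can be demonstrated with two throwaway units. This is only an illustrative sketch; the unit names demo-dep.service and demo-main.service are made up for it:

```sh
# Create a simple "dependency" unit and a unit that hard-requires it,
# mirroring the vdsmd -> rpcbind relationship described above.
cat > /etc/systemd/system/demo-dep.service <<'EOF'
[Service]
Type=simple
ExecStart=/usr/bin/sleep infinity
EOF

cat > /etc/systemd/system/demo-main.service <<'EOF'
[Unit]
# Like vdsmd.service, hard-require another unit.
Requires=demo-dep.service
After=demo-dep.service

[Service]
Type=simple
ExecStart=/usr/bin/sleep infinity
EOF

systemctl daemon-reload
systemctl start demo-main.service        # also activates demo-dep
systemctl is-active demo-main.service    # expected: active

# Deactivating the required unit deactivates demo-main as well,
# exactly as vdsmd stops when rpcbind is stopped by the TUI.
systemctl stop demo-dep.service
systemctl is-active demo-main.service    # expected: inactive
```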