Bug 1264269 - [7.2_3.5.z][node]vdsmd daemon status should be same on FC, ISCSI and local disk after rhevh install
Status: CLOSED CANTFIX
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-node
3.5.4
Unspecified Unspecified
low Severity high
Assigned To: Fabian Deutsch
Virtualization Bugs
node
Depends On:
Blocks: 1264358
 
Reported: 2015-09-18 01:28 EDT by Huijuan Zhao
Modified: 2016-02-10 15:07 EST
13 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-09-21 03:43:35 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Node
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
screenshot vdsmd status on FC machine (130.14 KB, image/png)
2015-09-18 01:28 EDT, Huijuan Zhao
screenshot vdsmd status on local disk machine (3.02 MB, image/jpeg)
2015-09-18 01:29 EDT, Huijuan Zhao
log on FC machine (6.33 MB, application/x-gzip)
2015-09-18 01:31 EDT, Huijuan Zhao
log on local machine (5.59 MB, application/x-gzip)
2015-09-18 01:32 EDT, Huijuan Zhao

Description Huijuan Zhao 2015-09-18 01:28:03 EDT
Created attachment 1074678 [details]
screenshot vdsmd status on FC machine

Description of problem:
After a TUI clean install of RHEV-H, the vdsmd daemon status differs across FC, iSCSI, and local disk machines.
vdsmd is running by default on the FC machine, but is not running by default on the iSCSI and local disk machines.

Version-Release number of selected component (if applicable):
rhev-hypervisor-7-7.2-20150913.0
ovirt-node-3.2.3-20.el7.noarch
vdsm-hook-vhostmd-4.16.26-1.el7ev.noarch
vdsm-xmlrpc-4.16.26-1.el7ev.noarch
vdsm-hook-ethtool-options-4.16.26-1.el7ev.noarch
vdsm-yajsonrpc-4.16.26-1.el7ev.noarch
vdsm-python-4.16.26-1.el7ev.noarch
vdsm-cli-4.16.26-1.el7ev.noarch
vdsm-4.16.26-1.el7ev.x86_64
vdsm-jsonrpc-4.16.26-1.el7ev.noarch
vdsm-reg-4.16.26-1.el7ev.noarch
vdsm-python-zombiereaper-4.16.26-1.el7ev.noarch

How reproducible:
100%
QA Whiteboard: node


Steps to Reproduce:
1. TUI clean install RHEV-H
2. Log in to RHEV-H; do not set up the network
3. Press F2 to get a shell
4. Check the vdsmd daemon status
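For reference, step 4 can be done from the shell reached via F2. A minimal sketch (guarded with `command -v` so the snippet degrades gracefully when run off-host, where systemctl may be unavailable):

```shell
# Query the vdsmd unit state via systemctl; fall back to "unknown"
# on systems where systemctl is missing or cannot answer.
if command -v systemctl >/dev/null 2>&1; then
    active=$(systemctl is-active vdsmd 2>/dev/null)
    enabled=$(systemctl is-enabled vdsmd 2>/dev/null)
fi
[ -n "$active" ] || active=unknown
[ -n "$enabled" ] || enabled=unknown
echo "vdsmd active=${active} enabled=${enabled}"
```

On the FC machine this would be expected to report an active unit, while on the iSCSI and local disk machines it would not.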

Actual results:
After step 4, vdsmd is running by default on the FC machine,
but is not running by default on the iSCSI and local disk machines.


Expected results:
After step 4, vdsmd should be running by default on FC, iSCSI, and local disk machines.

Additional info:
Comment 1 Huijuan Zhao 2015-09-18 01:29:04 EDT
Created attachment 1074679 [details]
screenshot vdsmd status on local disk machine
Comment 2 Huijuan Zhao 2015-09-18 01:31:15 EDT
Created attachment 1074680 [details]
log on FC machine
Comment 3 Huijuan Zhao 2015-09-18 01:32:07 EDT
Created attachment 1074684 [details]
log on local machine
Comment 4 Fabian Deutsch 2015-09-18 05:50:42 EDT
I hesitate to fix this as long as there is no issue caused by this behavior.

What is important is that vdsmd is running after registration.
Comment 5 Ying Cui 2015-09-18 06:50:10 EDT
(In reply to Fabian Deutsch from comment #4)
> I hesitate to fix this as long as there is no issue caused by this behavior.
> 
> Important is that vdsmd is running after registration.

We have to be curious about such inconsistent behavior in RHEV-H; could it have any other impact that we cannot easily see?
Comment 6 Nir Soffer 2015-09-20 10:14:05 EDT
(In reply to Fabian Deutsch from comment #4)
> I hesitate to fix this as long as there is no issue caused by this behavior.

Fabian, can you explain why vdsm is running based on storage type?
Comment 7 Fabian Deutsch 2015-09-21 02:01:28 EDT
(In reply to Nir Soffer from comment #6)
> (In reply to Fabian Deutsch from comment #4)
> > I hesitate to fix this as long as there is no issue caused by this behavior.
> 
> Fabian, can you explain why vdsm is running based on storage type?

To clarify the environment: vdsmd is sometimes running right after the installation of RHEV-H, without a network configuration. The host was not yet added to RHEV-M.

No, I can't explain right away why vdsm is sometimes running (which seems to be correlated with the storage method). But the point at which it is sometimes running (described above) is not functionally relevant, because the host is not yet configured (from the vdsm point of view).

To me it is most important that vdsmd is running after the registration/addition to RHEV-M.
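The "configured from the vdsm point of view" state mentioned above can be probed directly. A hedged sketch using the `vdsm-tool is-configured` verb (assumed available in this vdsm version), again guarded so it can be tried off-host:

```shell
# Ask vdsm-tool whether vdsm's modules are configured; report "n/a"
# on machines that do not have vdsm-tool installed.
if command -v vdsm-tool >/dev/null 2>&1; then
    if vdsm-tool is-configured >/dev/null 2>&1; then
        configured=yes
    else
        configured=no
    fi
else
    configured=n/a
fi
echo "vdsm configured: ${configured}"
```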
Comment 8 Nir Soffer 2015-09-21 02:40:18 EDT
Checking the logs we see:

$ grep -a 'vdsmd_init_common.sh: One of the modules is not configured to work with VDSM' var-fc/log/messages | wc -l
177

[nsoffer@thin 1264269 (master)]$ grep -a 'vdsmd_init_common.sh: One of the modules is not configured to work with VDSM' var-local/log/messages | wc -l
5

So in both cases vdsm is started before "vdsm-tool configure" is called, and it
is not really "running" in any meaningful sense.

This is wrong - there is no reason to start vdsm before it is configured;
it cannot run.

I think the best way to fix this is to disable vdsm service by default,
and enable it only after running "vdsm-tool configure".
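The proposed ordering could be sketched as a post-registration hook (hypothetical helper name; the real fix would live in the node build/registration flow, not in an ad-hoc script):

```shell
# Hypothetical helper illustrating the proposed fix: configure vdsm
# first, and only then enable and start the service. This is a sketch
# of the suggested ordering, not the shipped implementation.
enable_vdsmd_after_configure() {
    vdsm-tool configure --force && \
    systemctl enable vdsmd && \
    systemctl start vdsmd
}
# On a real host, registration would invoke: enable_vdsmd_after_configure
```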

Anyway, there is no storage issue here, just an incorrect installation.
vdsm does not touch storage before it is connected to engine and engine
invokes connectStorageServer.
Comment 9 Fabian Deutsch 2015-09-21 03:43:35 EDT
(In reply to Nir Soffer from comment #8)
> Checking the logs we see: ...
> I think the best way to fix this is to disable vdsm service by default,
> and enable it only after running "vdsm-tool configure".

I agree this would be the best solution. But on node it is not possible to disable a service at runtime; a service can only be enabled or disabled at build time. That is why vdsm is enabled by default.

> Anyway there is no storage issue here, just incorrect installation.
> vdsm does not touch storage before it is connected to engine and engine 
> invokes connectStorageServer.

That is good to know. Thus I'm now closing this as CANTFIX.

Please note that this kind of bug can be fixed in the future node.
