Bug 1264269
| Summary: | [7.2_3.5.z][node] vdsmd daemon status should be same on FC, ISCSI and local disk after rhevh install | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Huijuan Zhao <huzhao> |
| Component: | ovirt-node | Assignee: | Fabian Deutsch <fdeutsch> |
| Status: | CLOSED CANTFIX | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | high | Docs Contact: | |
| Priority: | low | | |
| Version: | 3.5.4 | CC: | amureini, cshao, cwu, ecohen, fdeutsch, gklein, huiwa, leiwang, lsurette, lyi, nsoffer, yaniwang, ycui |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | node | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-09-21 07:43:35 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Node | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1264358 | | |
| Attachments: | | | |
Created attachment 1074679 [details]
screenshot vdsmd status on local disk machine
Created attachment 1074680 [details]
log on FC machine
Created attachment 1074684 [details]
log on local machine
I hesitate to fix this as long as there is no issue caused by this behavior. The important thing is that vdsmd is running after registration.

(In reply to Fabian Deutsch from comment #4)
> I hesitate to fix this as long as there is no issue with this behavior.
>
> Important is that vdsmd is running after registration.

We are still curious about such inconsistent behavior in RHEV-H: will this behavior have any other impact that we cannot easily see?

(In reply to Fabian Deutsch from comment #4)
> I hesitate to fix this as long as there is no issue with this behavior.

Fabian, can you explain why vdsm is running based on storage type?

(In reply to Nir Soffer from comment #6)
> (In reply to Fabian Deutsch from comment #4)
> > I hesitate to fix this as long as there is no issue with this behavior.
>
> Fabian, can you explain why vdsm is running based on storage type?

To clarify the environment: vdsmd is sometimes running right after the installation of RHEV-H, without a network configuration and before the host has been added to RHEV-M.

No, I can't explain right away why vdsm is sometimes running (which seems to be correlated with the storage method). But the point at which it is sometimes running (described above) is not functionally relevant, because the host is not yet configured (from the vdsm point of view).

To me it is most important that vdsmd is running after the registration/addition to RHEV-M.

Checking the logs we see:

$ grep -a 'vdsmd_init_common.sh: One of the modules is not configured to work with VDSM' var-fc/log/messages | wc -l
177
[nsoffer@thin 1264269 (master)]$ grep -a 'vdsmd_init_common.sh: One of the modules is not configured to work with VDSM' var-local/log/messages | wc -l
5

So in both cases vdsm is started before "vdsm-tool configure" is called, and it is not really doing much "running". This is wrong: there is no reason to start vdsm before it is configured, because it cannot run.

I think the best way to fix this is to disable the vdsm service by default, and enable it only after running "vdsm-tool configure".

Anyway, there is no storage issue here, just an incorrect installation. vdsm does not touch storage before it is connected to the engine and the engine invokes connectStorageServer.

(In reply to Nir Soffer from comment #8)
> Checking the logs we see:
…
> I think the best way to fix this is to disable vdsm service by default,
> and enable it only after running "vdsm-tool configure".

I agree this would be the best solution, but on Node it is not possible to disable a service during runtime; a service can only be enabled or disabled at build time. That is why vdsm is enabled by default.

> Anyway there is no storage issue here, just incorrect installation.
> vdsm does not touch storage before it is connected to engine and engine
> invokes connectStorageServer.

That is good to know. Thus I'm now closing this as CANTFIX. Please note that this kind of bug can be fixed in the future Node.
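For illustration only, here is a minimal sketch of the configure-before-start ordering Nir describes, assuming a plain systemd-based RHEL host rather than RHEV-H Node (where, as noted above, the enabled/disabled state of a service is fixed at build time). The unit name vdsmd and the "vdsm-tool configure" command are the real ones; the sequence itself is not a fix that was applied to this bug:

```sh
# Sketch of the proposed ordering on a regular systemd host (assumption:
# vdsmd ships disabled). Not applicable to Node, where services cannot be
# enabled or disabled at runtime.
systemctl stop vdsmd            # make sure the unconfigured daemon is not running
vdsm-tool configure --force     # configure libvirt, sanlock, etc. for VDSM
systemctl enable vdsmd          # only enable the service once it is configured
systemctl start vdsmd           # now vdsmd starts with all modules configured
```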
Created attachment 1074678 [details]
screenshot vdsmd status on FC machine

Description of problem:
After a TUI clean install of RHEV-H, the vdsmd daemon status differs between FC, ISCSI and local disk machines: vdsmd is running by default on the FC machine, but is not running by default on the ISCSI and local disk machines.

Version-Release number of selected component (if applicable):
rhev-hypervisor-7-7.2-20150913.0
ovirt-node-3.2.3-20.el7.noarch
vdsm-hook-vhostmd-4.16.26-1.el7ev.noarch
vdsm-xmlrpc-4.16.26-1.el7ev.noarch
vdsm-hook-ethtool-options-4.16.26-1.el7ev.noarch
vdsm-yajsonrpc-4.16.26-1.el7ev.noarch
vdsm-python-4.16.26-1.el7ev.noarch
vdsm-cli-4.16.26-1.el7ev.noarch
vdsm-4.16.26-1.el7ev.x86_64
vdsm-jsonrpc-4.16.26-1.el7ev.noarch
vdsm-reg-4.16.26-1.el7ev.noarch
vdsm-python-zombiereaper-4.16.26-1.el7ev.noarch

How reproducible:
100%

QA Whiteboard: node

Steps to Reproduce:
1. TUI clean install of RHEV-H
2. Log in to RHEV-H; do not configure the network
3. Press F2 to drop to a shell
4. Check the vdsmd daemon status (one way to do this is sketched under Additional info below)

Actual results:
After step 4, vdsmd is running by default on the FC machine, but is not running by default on the ISCSI and local disk machines.

Expected results:
After step 4, vdsmd should be running by default on the FC, ISCSI and local disk machines.

Additional info:
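As a hedged illustration of step 4 above (not taken from the original report), one way to check the daemon state from the RHEV-H shell, assuming the systemd-based RHEL 7.2 base of this build:

```sh
# Show the current state of the vdsmd unit; the terse "is-active"/"is-enabled"
# output is easy to compare across the FC, ISCSI and local disk hosts.
systemctl status vdsmd
systemctl is-active vdsmd    # prints "active" or "inactive"
systemctl is-enabled vdsmd   # prints "enabled" or "disabled"
```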