Bug 1179148
Summary: | vdsmd, libvirtd and network are not coming up after registration on RHEV-H 6.6 | |
---|---|---|---
Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Fabian Deutsch <fdeutsch> |
Component: | ovirt-node | Assignee: | Fabian Deutsch <fdeutsch> |
Status: | CLOSED DUPLICATE | QA Contact: | Virtualization Bugs <virt-bugs> |
Severity: | urgent | Docs Contact: | |
Priority: | medium | ||
Version: | 3.5.0 | CC: | bazulay, danken, dougsland, ecohen, gklein, iheim, lpeer, lsurette, lvernia, oourfali, ycui, yeylon |
Target Milestone: | --- | ||
Target Release: | 3.5.0 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | node | ||
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2015-01-07 13:09:10 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | Node | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1164308, 1164311 |
Description
Fabian Deutsch
2015-01-06 10:16:37 UTC
Setting this to medium, because others report that registration works fine.

This can be fixed by doing the following after the registration:

1. dhclient
2. service libvirtd start
3. service vdsmd start
4. (optionally) reboot

Now all files are persisted (or unpersisted) correctly. The trick is done by unpersisting ifcfg-eth0.

Tested on the Virt QE side:

Scenario 1: cannot reproduce this issue.

Test steps:
1. Clean TUI installation successful.
2. Log in to RHEV-H.
3. Navigate to the RHEV-M screen in the RHEV-H TUI.
4. Set an optional password for adding the node through RHEV-M.
5. Go to the RHEV-M admin portal -> Hosts tab.
6. Add the RHEV-H host.
7. RHEV-H is added successfully and comes _Up_.

```
# brctl show rhevm
bridge name	bridge id		STP enabled	interfaces
rhevm		8000.18037339a69c	no		em1
```

Scenario 2: cannot reproduce this issue.

Test steps:
1. Clean TUI installation successful.
2. Log in to RHEV-H.
3. Register RHEV-H to RHEV-M via the RHEV-H TUI.
4. Encounter known libvirtd bug 1179068; need to apply the workaround and start libvirtd manually.
5. Re-register RHEV-H to RHEV-M via the RHEV-H TUI.
6. Registration succeeds.
7. Go to the RHEV-M admin portal.
8. Approve the host successfully.

```
# brctl show rhevm
bridge name	bridge id		STP enabled	interfaces
rhevm		8000.0024217fb719	no		eth0
```

Versions tested above:

```
# rpm -qa vdsm ovirt-node ovirt-node-plugin-vdsm kernel
ovirt-node-3.1.0-0.39.20150105gitb784105.el6.noarch
ovirt-node-plugin-vdsm-0.2.0-17.el6ev.noarch
kernel-2.6.32-504.3.3.el6.x86_64
vdsm-4.16.8.1-4.el6ev.x86_64
# cat /etc/system-release
Red Hat Enterprise Virtualization Hypervisor release 6.6 (20150105.0.el6ev)
```

rhevm 3.5.0-0.27.el6ev (vt13.5)

Douglas Schilling Landgraf (comment #4):

Hi, I have reproduced the report. I do believe it's a symptom of bz#1179068.

Oved Ourfali (comment #5):

(In reply to Douglas Schilling Landgraf from comment #4)
> Hi,
>
> I have reproduced the report, I do believe it's a symptom of bz#1179068.

If that's so, should it be closed as a duplicate? Is that a node issue? Please update the whiteboard accordingly.
Douglas Schilling Landgraf:

(In reply to Oved Ourfali from comment #5)
> If that's so, should it be closed as a duplicate?
> Is that a node issue?
> Please update the whiteboard accordingly.

Hi Oved/Fabian, I do believe that libvirtd being down and vdsm being unable to start can affect node behaviour. For now I will close this one as a duplicate.

*** This bug has been marked as a duplicate of bug 1179068 ***
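For reference, the workaround sequence from the description can be sketched as a small shell script. The four commands (plus the ifcfg-eth0 unpersist "trick") are taken from the report; the dry-run wrapper and the exact `/etc/sysconfig/network-scripts/ifcfg-eth0` path are assumptions added here for illustration, not part of the original report.

```shell
# Minimal sketch of the workaround, assuming a RHEV-H 6.x host with SysV
# init scripts ("service") and the ovirt-node "unpersist" tool on PATH,
# and that eth0 is the registered interface. Dry-run by default: commands
# are only printed unless APPLY=1 is set, so nothing runs by accident.
run() {
    if [ "${APPLY:-0}" = "1" ]; then
        "$@"                    # really execute (run as root on the host)
    else
        echo "would run: $*"    # dry-run: just show the command
    fi
}

run dhclient                    # 1. re-acquire a DHCP lease
run service libvirtd start      # 2. start libvirtd manually (bug 1179068)
run service vdsmd start         # 3. start vdsm once libvirtd is up
run unpersist /etc/sysconfig/network-scripts/ifcfg-eth0  # the "trick"
# 4. optionally reboot afterwards; per the report, files are then
#    persisted (or unpersisted) correctly.
```

Invoke with `APPLY=1` (as root, on the hypervisor itself) to actually execute the steps.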