Description of problem:
After registering RHEV-H 6.6 to RHEV-M 3.5, vdsmd does not come up because libvirtd does not come up. libvirtd does not come up because networking is not brought up correctly.
The problem seems to be that the ifcfg-eth0 configuration expects ifcfg-rhevm (the bridge), which is missing because it should be created by vdsm (which is started after networking).

Described differently, the boot order is:
a. ovirt-early (persistence)
b. networking
c. libvirtd
d. vdsm

a restores ifcfg-eth0.
b: ifcfg-eth0 requires ifcfg-rhevm, but ifcfg-rhevm does not exist and is not bind mounted in a.
c fails, because it requires at least one configured interface.
d fails, because libvirtd is not up.

Version-Release number of selected component (if applicable):
vdsm-4.16.8.1-4.el6ev.x86_64

How reproducible:
Sometimes

Steps to Reproduce:
1. Install RHEV-H 0105.
2. Register RHEV-H from the RHEV-M side (add host).

Actual results:
vdsmd does not come up; the host is unresponsive in the web UI.

Expected results:
vdsmd comes up and the host is Up in the web UI.

Additional info:
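For illustration only, a sketch of the two ifcfg files involved in the dependency described above. The exact contents differ per host and are written by vdsm/ovirt-node; the values below are placeholders, not taken from the affected machine:

# /etc/sysconfig/network-scripts/ifcfg-eth0  -- restored by ovirt-early (a)
DEVICE=eth0
ONBOOT=yes
NM_CONTROLLED=no
BRIDGE=rhevm      # this reference is what makes networking (b) depend on the bridge config

# /etc/sysconfig/network-scripts/ifcfg-rhevm -- expected bridge config, missing at boot,
# because vdsm (d), which would create it, only starts after networking (b)
DEVICE=rhevm
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=dhcp
DELAY=0
NM_CONTROLLED=no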
Setting this to medium, because others report that registration works fine.
This can be fixed by doing the following after the registration (a sketch is given below):
1. dhclient
2. service libvirtd start
3. service vdsmd start
(4. reboot)

Afterwards all files are persisted (or unpersisted) correctly. The trick is done by unpersisting ifcfg-eth0.
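A minimal sketch of that workaround, run on the RHEV-H host after the failed registration. It assumes eth0 is the NIC behind the rhevm bridge and uses the sysvinit service names from RHEV-H 6.6:

# dhclient                  # bring networking back up so the host gets an address
# service libvirtd start    # libvirtd can start once an interface is configured
# service vdsmd start       # vdsmd (re)creates the rhevm bridge and fixes the persisted ifcfg files
# reboot                    # optional, to verify the host now boots cleanly

After this, ifcfg-eth0 is no longer persisted on its own, so the next boot does not hit the missing-bridge dependency described in comment #0.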
Tested on Virt QE side:

Scenario 1 -- cannot reproduce this issue.
Test steps:
1. Clean TUI installation successful.
2. Log in to RHEV-H.
3. Navigate to the RHEV-M screen in the RHEV-H TUI.
4. Set the optional password for adding the node through RHEV-M.
5. Go to the RHEV-M admin portal -> Hosts tab.
6. Add the RHEV-H host.
7. RHEV-H is added successfully and comes _Up_.

# brctl show rhevm
bridge name     bridge id               STP enabled     interfaces
rhevm           8000.18037339a69c       no              em1

Scenario 2 -- cannot reproduce this issue.
Test steps:
1. Clean TUI installation successful.
2. Log in to RHEV-H.
3. Register RHEV-H to RHEV-M via the RHEV-H TUI.
4. Encounter known libvirtd bug 1179068; need to apply the workaround and start libvirtd manually.
5. Re-register RHEV-H to RHEV-M via the RHEV-H TUI.
6. Registration successful.
7. Go to the RHEV-M admin portal.
8. Approve this host successfully.

# brctl show rhevm
bridge name     bridge id               STP enabled     interfaces
rhevm           8000.0024217fb719       no              eth0

Versions used for the above tests:
# rpm -qa vdsm ovirt-node ovirt-node-plugin-vdsm kernel
ovirt-node-3.1.0-0.39.20150105gitb784105.el6.noarch
ovirt-node-plugin-vdsm-0.2.0-17.el6ev.noarch
kernel-2.6.32-504.3.3.el6.x86_64
vdsm-4.16.8.1-4.el6ev.x86_64
# cat /etc/system-release
Red Hat Enterprise Virtualization Hypervisor release 6.6 (20150105.0.el6ev)

rhevm 3.5.0-0.27.el6ev (vt13.5)
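As a side note, a hedged way to cross-check for the race from comment #0 would be to confirm that the bridge configuration actually got persisted and bind-mounted after the host shows Up. The paths below assume the standard RHEV-H /config persistence layout and are illustrative only:

# ls /config/etc/sysconfig/network-scripts/   # ifcfg-rhevm should be listed next to ifcfg-eth0
# mount | grep ifcfg                          # both persisted files should appear as bind mounts
# brctl show rhevm                            # the bridge should enslave the physical NIC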
Hi,

I have reproduced the report; I do believe it's a symptom of bz#1179068.
(In reply to Douglas Schilling Landgraf from comment #4) > Hi, > > I have reproduced the report, I do believe it's a symptom of bz#1179068. If that's so, should it be closed as duplicate? Is that a node issue? Please update the whiteboard accordingly.
(In reply to Oved Ourfali from comment #5)
> (In reply to Douglas Schilling Landgraf from comment #4)
> > Hi,
> >
> > I have reproduced the report; I do believe it's a symptom of bz#1179068.
>
> If that's so, should it be closed as a duplicate?
> Is that a node issue?
> Please update the whiteboard accordingly.

Hi Oved/Fabian,

I do believe that libvirtd being down and vdsm being unable to start can affect node behaviour. For now I will close this one as a duplicate.

*** This bug has been marked as a duplicate of bug 1179068 ***