Bug 1052314
| Summary: | [RHEVM] cannot put 3.3 host into maintenance after failed addition to 3.4 cluster |
|---|---|
| Product: | Red Hat Enterprise Virtualization Manager |
| Reporter: | Martin Pavlik <mpavlik> |
| Component: | ovirt-engine |
| Assignee: | Liran Zelkha <lzelkha> |
| Status: | CLOSED CURRENTRELEASE |
| QA Contact: | Tareq Alayan <talayan> |
| Severity: | high |
| Priority: | unspecified |
| Version: | 3.4.0 |
| CC: | acathrow, bazulay, emesika, gklein, iheim, lpeer, lzelkha, mpavlik, Rhev-m-bugs, sbonazzo, talayan, yeylon |
| Target Milestone: | --- |
| Target Release: | 3.4.0 |
| Hardware: | x86_64 |
| OS: | Linux |
| Whiteboard: | infra |
| Fixed In Version: | ovirt-3.4.0-beta3 |
| Doc Type: | Bug Fix |
| Type: | Bug |
| oVirt Team: | Infra |
Description

Martin Pavlik
2014-01-13 15:45:45 UTC

[Liran Zelkha asked in a comment: Can you describe which networks exist for the host? Also, can you send engine and vdsm host information (ssh username/password)?]

I used a clean RHEL 6.5 host with the 3.3 VDSM repository enabled and installed it via RHEV-M 3.4 into a 3.4 Data Center/Cluster. At the end of the installation the host was non-operational. In Setup Networks the host appears as if ovirtmgmt were not assigned to any interface (see screenshot); however, on the host itself the network seems to be configured correctly:
```
[root@localhost ~]# ip a l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:1a:4a:9c:6e:92 brd ff:ff:ff:ff:ff:ff
4: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 4a:7b:d3:28:34:f9 brd ff:ff:ff:ff:ff:ff
5: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
6: bond4: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
7: bond1: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
8: bond2: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
9: bond3: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
10: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:1a:4a:9c:6e:92 brd ff:ff:ff:ff:ff:ff
    inet 10.34.66.2/24 brd 10.34.66.255 scope global ovirtmgmt
```
```
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Generated by VDSM version 4.13.2-0.6.el6ev
DEVICE=eth0
ONBOOT=yes
HWADDR=00:1a:4a:9c:6e:92
BRIDGE=ovirtmgmt
NM_CONTROLLED=no
STP=no

[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
# Generated by VDSM version 4.13.2-0.6.el6ev
DEVICE=ovirtmgmt
ONBOOT=yes
TYPE=Bridge
DELAY=0
BOOTPROTO=dhcp
DEFROUTE=no
NM_CONTROLLED=no
STP=no
```
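The two files above show the standard VDSM-generated layout: the NIC is enslaved to the management bridge via `BRIDGE=ovirtmgmt`, while the bridge device itself (`TYPE=Bridge`) carries the IP configuration. As a minimal sketch (not part of the original report), a consistency check like the following could confirm that every `BRIDGE=` reference points at an ifcfg file that actually declares a bridge; the function name and the idea of pointing it at a directory are illustrative assumptions:

```shell
# Sketch: for each ifcfg file in a network-scripts directory that names a
# BRIDGE= target, verify a matching ifcfg file declaring TYPE=Bridge exists.
# Real ifcfg files may quote values; this sketch assumes unquoted ones, as
# in the VDSM-generated files above.
check_bridges() {
    dir=$1
    for f in "$dir"/ifcfg-*; do
        # Extract the BRIDGE= value, if any; skip files without one.
        bridge=$(sed -n 's/^BRIDGE=//p' "$f")
        [ -z "$bridge" ] && continue
        if grep -q '^TYPE=Bridge' "$dir/ifcfg-$bridge" 2>/dev/null; then
            echo "OK: ${f##*/} -> $bridge"
        else
            echo "MISSING: ${f##*/} points at absent bridge $bridge"
        fi
    done
}

# Usage: check_bridges /etc/sysconfig/network-scripts
```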
Created attachment 850022 [details]
screenshot 1
Liran Zelkha

Hi Martin - is it ok if I restart the engine so I can connect to it with a debugger? I can see the error, but can't understand what's causing it.

Martin Pavlik

(In reply to Liran Zelkha from comment #6)
> Hi Martin - is it ok if I restart the engine so I can connect to it with a debugger? I can see the error, but can't understand what's causing it.

Yes, you can restart, go ahead please.

Liran Zelkha

Thanks. I connected, found the bug, and disconnected. You'll see the patch soon.

Verification:

1. Added a 3.3 host to a 3.4 Data Center/Cluster.
2. The host was stuck on initializing for ~3 minutes, then became non-operational.
3. The host was successfully put into Maintenance.

Verified on ovirt-engine-3.4.0-0.11.beta3.el6.noarch.

Closing as part of 3.4.0.