Bug 1367016

Summary: [RHEL 7.3] - Failed to add rhel7.3 host to rhv-m 4.0 - failed to configure management network on the host
Product: [oVirt] ovirt-engine
Component: BLL.Network
Status: CLOSED WORKSFORME
Severity: urgent
Priority: urgent
Version: 4.0.2.6
Target Milestone: ovirt-4.0.4
Target Release: ---
Hardware: x86_64
OS: Linux
Reporter: Michael Burman <mburman>
Assignee: Edward Haas <edwardh>
QA Contact: Michael Burman <mburman>
Docs Contact:
CC: bugs, cshao, danken, dguo, gklein, mburman, ycui, ykaul, ylavi
Keywords: Regression
Flags: ykaul: needinfo+
       ylavi: ovirt-4.0.z?
       ykaul: blocker+
       ylavi: planning_ack+
       mburman: devel_ack?
       mburman: testing_ack?
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-09-26 13:52:40 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Network
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1252833, 1369887, 1377223
Bug Blocks:
Attachments: Logs (flags: none)

Description Michael Burman 2016-08-15 09:22:30 UTC
Created attachment 1190832 [details]
Logs

Description of problem:
Failed to add a RHEL 7.3 host to RHV-M 4.0: failed to configure the management network on the host.

Version-Release number of selected component (if applicable):
4.0.2.6-0.1.el7ev
vdsm-4.18.11-1.el7ev.x86_64
Red Hat Enterprise Linux Server release 7.3 Beta (Maipo)
kernel-3.10.0-481.el7.x86_64

Steps to Reproduce:
1. Try to add a RHEL 7.3 host to RHV-M 4.0.2


Actual results:
Failed to configure the management network on the host. The host was set to the Non-Operational state and the ovirtmgmt network is missing on the host.

Expected results:
Adding the host should succeed, with the ovirtmgmt management network configured.

Comment 1 Red Hat Bugzilla Rules Engine 2016-08-17 08:46:02 UTC
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Comment 2 Dan Kenigsberg 2016-08-24 09:05:28 UTC
From supervdsm.log I see no "pings" arriving between the setupNetworks call and the timeout:

MainProcess|jsonrpc.Executor/4::DEBUG::2016-08-15 12:08:41,219::api::246::root::(setupNetworks) Setting up network according to configuration: networks:{'ovirtmgmt': {'ipv6autoconf': False, 'bridged': 'true', 'nic': 'enp4s0', 'mtu': 1500, 'switch': 'ovs', 'dhcpv6': False, 'STP': 'no', 'hostQos': {'out': {'ls': {'m2': 50}}}, 'defaultRoute': True, 'bootproto': 'dhcp'}}, bondings:{}, options:{'connectivityCheck': 'true', 'connectivityTimeout': 120}
...
MainProcess|jsonrpc.Executor/4::ERROR::2016-08-15 12:10:43,043::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper) Error in setupNetworks
Traceback (most recent call last):
...
ConfigNetworkError: (10, 'connectivity check failed')
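The failure in the log above follows from the options in the request: `connectivityCheck: 'true'` with `connectivityTimeout: 120` means that after applying the new configuration, vdsm waits up to 120 seconds for confirmation "pings" from Engine, and raises error 10 if none arrive. A minimal sketch of that check (hypothetical names, not vdsm's actual implementation):

```python
import time


class ConfigNetworkError(Exception):
    """Raised when applying the network configuration fails.

    Error code 10 corresponds to 'connectivity check failed' in the
    traceback above.
    """
    def __init__(self, code, message):
        self.code = code
        super().__init__(code, message)


def wait_for_connectivity(ping_seen, timeout=120, interval=1.0,
                          clock=time.monotonic, sleep=time.sleep):
    """Poll ping_seen() until it returns True or `timeout` seconds elapse.

    ping_seen() stands in for "Engine's confirmation ping was received".
    If no ping arrives before the deadline, the new configuration is
    considered to have broken connectivity and the error is raised
    (vdsm then rolls the configuration back).
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if ping_seen():
            return
        sleep(interval)
    raise ConfigNetworkError(10, 'connectivity check failed')
```

In this bug, no pings were seen at all during the 2-minute window (12:08:41 to 12:10:43 in the log), so the check timed out and the rollback left the host without ovirtmgmt.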

Comment 3 Dan Kenigsberg 2016-08-31 08:23:02 UTC
Would you please retry, but this time disable and mask NetworkManager before adding the host to Engine? Let us know when you have a live reproducer.

Comment 4 Michael Burman 2016-08-31 08:52:56 UTC
It is currently not possible; we are blocked by BZ 1369887 and can't install vdsm on RHEL 7.3.

I will retry once it becomes possible.

Comment 5 Yaniv Lavi 2016-09-14 09:06:57 UTC
Are we planning to support 4.0.4 with RHEL 7.3? If so, this is probably a blocker.

Comment 6 Yaniv Kaul 2016-09-14 09:30:41 UTC
(In reply to Yaniv Dary from comment #5)
> Are we planning to support 4.0.4 with RHEL 7.3? If so, this is probably a blocker.

Indeed.

Comment 7 Gil Klein 2016-09-14 09:50:03 UTC
(In reply to Yaniv Kaul from comment #6)
> (In reply to Yaniv Dary from comment #5)
> > Are we planning to support 4.0.4 with RHEL 7.3? If so, this is probably a blocker.
> 
> Indeed.
So I'm retargeting this back to 4.0.4.

Comment 8 Michael Burman 2016-09-19 13:37:44 UTC
I can't reproduce it any more. Adding a host works with the latest RHEL 7.3 beta (without masking and disabling NetworkManager).

Comment 9 Michael Burman 2016-09-26 13:52:40 UTC
This report has not reproduced for a long time now.
Closing as WORKSFORME at this point. Will re-open if the issue pops up again.