Bug 973511

Summary: [RHSC] adding a node in bb3 gives "Failed to configure management network on the host"
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: rhsc
Version: 2.1
Status: CLOSED ERRATA
Severity: urgent
Priority: high
Reporter: RamaKasturi <knarra>
Assignee: Bala.FA <barumuga>
QA Contact: RamaKasturi <knarra>
CC: dpati, dtsang, mmahoney, pprakash, rhs-bugs, sabose, ssampat
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Fixed In Version: vdsm-4.10.2-22.6
Doc Type: Bug Fix
Clones: 989876 (view as bug list)
Type: Bug
Last Closed: 2013-09-23 22:25:47 UTC
Attachments:
  Attaching the sos report from engine (flags: none)
  Super vdsm log (flags: none)

Description RamaKasturi 2013-06-12 05:32:28 UTC
Created attachment 759957 [details]
Attaching the sos report from engine

Description of problem:
Adding a node in bb3 results in a failure to configure the management network.

Version-Release number of selected component (if applicable):
rhsc-2.1.0-0.bb3.el6rhs.noarch
glusterfs-3.4.0.9rhs-1.el6rhs.x86_64
vdsm-4.10.2-22.3.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Install the latest 2.1 ISO.
2. Install glusterfs and update it with the latest RPMs.
3. Install vdsm.
4. Add the node to the cluster through the UI (a programmatic alternative is sketched after the note below).

Note: The node should be built from scratch.
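
For reference, the add-host step can also be driven through the REST API. The sketch below uses the oVirt 3.x Python SDK (ovirtsdk), on which RHSC's management API is based; the URL, credentials, cluster name, and host details are placeholders, not values from this report:

-----
# Hypothetical sketch: add a node via the REST API instead of the UI,
# using the oVirt 3.x Python SDK (ovirtsdk). Every name and credential
# here is a placeholder.
from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://rhsc.example.com/api',  # assumed RHSC endpoint
          username='admin@internal',
          password='secret',
          insecure=True)  # skip CA validation in a lab setup

host = api.hosts.add(params.Host(
    name='server1',
    address='server1.example.com',
    cluster=api.clusters.get(name='Default'),
    root_password='secret'))

# The engine then bootstraps the host; the failure tracked in this bug
# shows up during the subsequent SetupNetworks stage.
print(host.get_status().get_state())
api.disconnect()
-----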

Actual results:
Failed to configure management network on host asdf due to setup networks failure, and the status of the node goes to Non Operational.

Expected results:
The node should be added successfully.

Additional info:

Comment 2 Prasanth 2013-06-21 09:27:45 UTC
This issue is seen in bb4 as well when everything is installed from scratch. The following messages are seen in the Events log:

------
2013-Jun-21, 14:44 Detected new Host server1. Host state was set to NonOperational.
2013-Jun-21, 14:44 Could not get hardware information for host server1
2013-Jun-21, 14:44 Host server1 installation failed. Failed to configure manamgent network on the host.
2013-Jun-21, 14:44 Failed to configure management network on host server1 due to setup networks failure.
2013-Jun-21, 14:44 Installing Host server1. Stage: Termination.
-----

From vdsm.log:

-----------
Thread-15::DEBUG::2013-06-21 09:14:28,405::BindingXMLRPC::913::vds::(wrapper) client [10.70.36.27]::call ping with () {} flowID [46d156b0]
Thread-15::DEBUG::2013-06-21 09:14:28,405::BindingXMLRPC::920::vds::(wrapper) return ping with {'status': {'message': 'Done', 'code': 0}}
Thread-16::DEBUG::2013-06-21 09:14:28,407::BindingXMLRPC::913::vds::(wrapper) client [10.70.36.27]::call setupNetworks with ({'ovirtmgmt': {'nic': 'eth0', 'bootproto': 'dhcp', 'STP': 'no', 'bridged': 'true'}}, {}, {'connectivityCheck': 'true', 'connectivityTimeout': 120}) {} flowID [46d156b0]
Thread-16::ERROR::2013-06-21 09:14:28,408::BindingXMLRPC::932::vds::(wrapper) unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/BindingXMLRPC.py", line 367, in setupNetworks
    return api.setupNetworks(networks, bondings, options)
  File "/usr/share/vdsm/API.py", line 1215, in setupNetworks
    supervdsm.getProxy().setupNetworks(networks, bondings, options)
  File "/usr/share/vdsm/supervdsm.py", line 76, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 66, in <lambda>
    getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
AttributeError: 'ProxyCaller' object has no attribute 'setupNetworks'


Thread-18::DEBUG::2013-06-21 09:14:29,559::BindingXMLRPC::913::vds::(wrapper) client [10.70.36.27]::call getHardwareInfo with () {}
Thread-18::ERROR::2013-06-21 09:14:29,559::API::1132::vds::(getHardwareInfo) failed to retrieve hardware info
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1129, in getHardwareInfo
    hw = supervdsm.getProxy().getHardwareInfo()
  File "/usr/share/vdsm/supervdsm.py", line 76, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 66, in <lambda>
    getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
AttributeError: 'ProxyCaller' object has no attribute 'getHardwareInfo'
Thread-18::DEBUG::2013-06-21 09:14:29,560::BindingXMLRPC::920::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Failed to read hardware information', 'code': 57}}
storageRefresh::DEBUG::2013-06-21 09:14:29,771::supervdsm::179::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm
--------------
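
The AttributeError above comes out of vdsm's supervdsm proxy (supervdsm.py lines 66 and 76 in the traceback), which resolves verb names with getattr() and forwards them to the supervdsm server process. A simplified sketch of that pattern, not vdsm's actual code, shows why a supervdsm server that never started surfaces as an AttributeError at call time rather than as an obvious connection error:

-----
# Simplified sketch of the proxy pattern visible in the traceback; this
# is not vdsm's actual code. Verbs are looked up by name with getattr()
# only when the call is made, so a backend that is missing the verb
# (e.g. because the supervdsm server is not running) raises
# AttributeError instead of a clearer connection failure.

class ProxyCaller(object):
    """Callable returned for every attribute looked up on the proxy."""
    def __init__(self, backend, funcName):
        self._backend = backend
        self._funcName = funcName

    def __call__(self, *args, **kwargs):
        # This getattr() is where the log's AttributeError originates.
        return getattr(self._backend, self._funcName)(*args, **kwargs)


class SuperVdsmProxy(object):
    def __init__(self, backend):
        self._svdsm = backend  # stand-in for the real multiprocessing proxy

    def __getattr__(self, name):
        return ProxyCaller(self._svdsm, name)


proxy = SuperVdsmProxy(object())  # a backend exposing no verbs at all
try:
    proxy.setupNetworks({}, {}, {})
except AttributeError as e:
    print(e)  # mirrors the setupNetworks failure logged above
-----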

Comment 3 Sahina Bose 2013-07-11 10:19:08 UTC
This bug occurred because vdsm in the RHS ISO was not started due to errors.
With the fix, vdsm restarts correctly and the engine is able to bootstrap the host and configure networks.
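
Once vdsm is up, the verbs that failed in the log can be exercised directly against its XML-RPC binding to confirm the fix. A minimal check, assuming vdsm's default port 54321 and an SSL-disabled test configuration (ssl = false in /etc/vdsm/vdsm.conf; production deployments use client certificates):

-----
# Minimal post-fix check; assumes vdsm listens on its default port 54321
# with ssl = false in /etc/vdsm/vdsm.conf (test setups only). Both verbs
# appear in the vdsm.log excerpt above.
import xmlrpclib

vdsm = xmlrpclib.ServerProxy('http://server1.example.com:54321')

print(vdsm.ping())             # expect {'status': {'code': 0, 'message': 'Done'}}
print(vdsm.getHardwareInfo())  # code 57 before the fix; code 0 once supervdsm responds
-----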

Comment 4 RamaKasturi 2013-07-11 12:13:18 UTC
Created attachment 772200 [details]
Super vdsm log

Comment 5 Matt Mahoney 2013-07-17 17:15:20 UTC
Verified fixed in bb6.

Comment 6 Scott Haines 2013-09-23 22:25:47 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html