Bug 973511 - [RHSC] adding a node in bb3 gives "Failed to configure manamgent network on the host"
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: ---
Assigned To: Bala.FA
QA Contact: RamaKasturi
Depends On:
Blocks:
Reported: 2013-06-12 01:32 EDT by RamaKasturi
Modified: 2015-11-22 21:57 EST
CC: 7 users

See Also:
Fixed In Version: vdsm-4.10.2-22.6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 989876
Environment:
Last Closed: 2013-09-23 18:25:47 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
Attaching the sos report from engine (9.70 MB, text/x-log), 2013-06-12 01:32 EDT, RamaKasturi
Super vdsm log (120.98 KB, text/x-log), 2013-07-11 08:13 EDT, RamaKasturi

Description RamaKasturi 2013-06-12 01:32:28 EDT
Created attachment 759957 [details]
Attaching the sos report from engine

Description of problem:
Adding a node in bb3 results in failure to configure management network.

Version-Release number of selected component (if applicable):
rhsc-2.1.0-0.bb3.el6rhs.noarch
glusterfs-3.4.0.9rhs-1.el6rhs.x86_64
vdsm-4.10.2-22.3.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Install the latest 2.1 ISO.
2. Install glusterfs and update it with the latest RPMs.
3. Install vdsm.
4. Add the node to the cluster through the UI.

Note: Node should be built from scratch.

Actual results:
Failed to configure management network on host asdf due to setup networks failure, and the status of the node goes to Non Operational.

Expected results:
Node should get added successfully.

Additional info:
Comment 2 Prasanth 2013-06-21 05:27:45 EDT
This issue is seen in bb4 as well when everything is installed from scratch. The following messages are seen in the Events:

------
2013-Jun-21, 14:44 Detected new Host server1. Host state was set to NonOperational.
2013-Jun-21, 14:44 Could not get hardware information for host server1
2013-Jun-21, 14:44 Host server1 installation failed. Failed to configure manamgent network on the host.
2013-Jun-21, 14:44 Failed to configure management network on host server1 due to setup networks failure.
2013-Jun-21, 14:44 Installing Host server1. Stage: Termination.
-----

From vdsm.log:

-----------
Thread-15::DEBUG::2013-06-21 09:14:28,405::BindingXMLRPC::913::vds::(wrapper) client [10.70.36.27]::call ping with () {} flowID [46d156b0]
Thread-15::DEBUG::2013-06-21 09:14:28,405::BindingXMLRPC::920::vds::(wrapper) return ping with {'status': {'message': 'Done', 'code': 0}}
Thread-16::DEBUG::2013-06-21 09:14:28,407::BindingXMLRPC::913::vds::(wrapper) client [10.70.36.27]::call setupNetworks with ({'ovirtmgmt': {'nic': 'eth0', 'bootproto': 'dhcp', 'STP': 'no', 'bridged': 'true'}}, {}, {'connectivityCheck': 'true', 'connectivityTimeout': 120}) {} flowID [46d156b0]
Thread-16::ERROR::2013-06-21 09:14:28,408::BindingXMLRPC::932::vds::(wrapper) unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/BindingXMLRPC.py", line 367, in setupNetworks
    return api.setupNetworks(networks, bondings, options)
  File "/usr/share/vdsm/API.py", line 1215, in setupNetworks
    supervdsm.getProxy().setupNetworks(networks, bondings, options)
  File "/usr/share/vdsm/supervdsm.py", line 76, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 66, in <lambda>
    getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
AttributeError: 'ProxyCaller' object has no attribute 'setupNetworks'


Thread-18::DEBUG::2013-06-21 09:14:29,559::BindingXMLRPC::913::vds::(wrapper) client [10.70.36.27]::call getHardwareInfo with () {}
Thread-18::ERROR::2013-06-21 09:14:29,559::API::1132::vds::(getHardwareInfo) failed to retrieve hardware info
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1129, in getHardwareInfo
    hw = supervdsm.getProxy().getHardwareInfo()
  File "/usr/share/vdsm/supervdsm.py", line 76, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 66, in <lambda>
    getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
AttributeError: 'ProxyCaller' object has no attribute 'getHardwareInfo'
Thread-18::DEBUG::2013-06-21 09:14:29,560::BindingXMLRPC::920::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Failed to read hardware information', 'code': 57}}
storageRefresh::DEBUG::2013-06-21 09:14:29,771::supervdsm::179::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm
--------------
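The AttributeError in both tracebacks comes from the call-forwarding pattern in vdsm's supervdsm.py: attribute access on the proxy always succeeds, and failure is deferred to call time. Below is a minimal, self-contained sketch of that pattern. The class and attribute names (`ProxyCaller`, `_supervdsmProxy`, `_svdsm`, `_funcName`) are taken from the traceback; the reconstruction of `SuperVdsmProxy`'s internals is an assumption inferred from the log, not the actual vdsm source.

```python
class ProxyCaller:
    """Forwards a method call to the supervdsm server at call time."""
    def __init__(self, supervdsmProxy, funcName):
        self._supervdsmProxy = supervdsmProxy
        self._funcName = funcName

    def __call__(self, *args, **kwargs):
        # Looks up the target method on the remote proxy only when invoked.
        return getattr(self._supervdsmProxy._svdsm,
                       self._funcName)(*args, **kwargs)

class SuperVdsmProxy:
    """Sketch of the client-side proxy. If supervdsmd never comes up,
    self._svdsm is never assigned (assumption based on the log)."""
    def __getattr__(self, name):
        # Fires for ANY missing attribute, including _svdsm itself,
        # so attribute access never fails here.
        return ProxyCaller(self, name)

proxy = SuperVdsmProxy()
try:
    proxy.setupNetworks({}, {}, {})
except AttributeError as exc:
    # Reproduces the message seen in vdsm.log
    print("call failed:", exc)
```

The chain of failure: `proxy.setupNetworks` returns a `ProxyCaller`; inside `__call__`, reading the never-assigned `proxy._svdsm` falls through `__getattr__` and yields another `ProxyCaller`; looking up `setupNetworks` on that plain `ProxyCaller` instance then raises "'ProxyCaller' object has no attribute 'setupNetworks'", exactly as logged. This is why the error surfaces as an AttributeError on `ProxyCaller` rather than as a connection error to supervdsmd.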
Comment 3 Sahina Bose 2013-07-11 06:19:08 EDT
This bug occurred because vdsm in the RHS ISO was not started due to errors.
With the fix, vdsm restarts correctly and the engine is able to bootstrap the host and configure networks.
Comment 4 RamaKasturi 2013-07-11 08:13:18 EDT
Created attachment 772200 [details]
Super vdsm log
Comment 5 Matt Mahoney 2013-07-17 13:15:20 EDT
Verified fixed in bb6.
Comment 6 Scott Haines 2013-09-23 18:25:47 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
