Bug 973511 - [RHSC] adding a node in bb3 gives "Failed to configure manamgent network on the host"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: Bala.FA
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-06-12 05:32 UTC by RamaKasturi
Modified: 2015-11-23 02:57 UTC
CC List: 7 users

Fixed In Version: vdsm-4.10.2-22.6
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 989876
Environment:
Last Closed: 2013-09-23 22:25:47 UTC
Target Upstream Version:


Attachments (Terms of Use)
Attaching the sos report from engine (9.70 MB, text/x-log)
2013-06-12 05:32 UTC, RamaKasturi
Super vdsm log (120.98 KB, text/x-log)
2013-07-11 12:13 UTC, RamaKasturi

Description RamaKasturi 2013-06-12 05:32:28 UTC
Created attachment 759957 [details]
Attaching the sos report from engine

Description of problem:
Adding a node in bb3 results in failure to configure management network.

Version-Release number of selected component (if applicable):
rhsc-2.1.0-0.bb3.el6rhs.noarch
glusterfs-3.4.0.9rhs-1.el6rhs.x86_64
vdsm-4.10.2-22.3.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Install the latest 2.1 ISO
2. Install glusterfs and update it with the latest RPMs.
3. Install vdsm.
4. Add the node to cluster through UI.

Note: Node should be built from scratch.

Actual results:
Failed to configure management network on host asdf due to setup networks failure, and the status of the node goes to Non Operational.

Expected results:
Node should get added successfully.

Additional info:

Comment 2 Prasanth 2013-06-21 09:27:45 UTC
This issue is seen in bb4 as well when everything is installed from scratch. The following messages are seen in the Events tab:

------
2013-Jun-21, 14:44 Detected new Host server1. Host state was set to NonOperational.
2013-Jun-21, 14:44 Could not get hardware information for host server1
2013-Jun-21, 14:44 Host server1 installation failed. Failed to configure manamgent network on the host.
2013-Jun-21, 14:44 Failed to configure management network on host server1 due to setup networks failure.
2013-Jun-21, 14:44 Installing Host server1. Stage: Termination.
-----

From vdsm.log:

-----------
Thread-15::DEBUG::2013-06-21 09:14:28,405::BindingXMLRPC::913::vds::(wrapper) client [10.70.36.27]::call ping with () {} flowID [46d156b0]
Thread-15::DEBUG::2013-06-21 09:14:28,405::BindingXMLRPC::920::vds::(wrapper) return ping with {'status': {'message': 'Done', 'code': 0}}
Thread-16::DEBUG::2013-06-21 09:14:28,407::BindingXMLRPC::913::vds::(wrapper) client [10.70.36.27]::call setupNetworks with ({'ovirtmgmt': {'nic': 'eth0', 'bootproto': 'dhcp', 'STP': 'no', 'bridged': 'true'}}, {}, {'connectivityCheck': 'true', 'connectivityTimeout': 120}) {} flowID [46d156b0]
Thread-16::ERROR::2013-06-21 09:14:28,408::BindingXMLRPC::932::vds::(wrapper) unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/BindingXMLRPC.py", line 367, in setupNetworks
    return api.setupNetworks(networks, bondings, options)
  File "/usr/share/vdsm/API.py", line 1215, in setupNetworks
    supervdsm.getProxy().setupNetworks(networks, bondings, options)
  File "/usr/share/vdsm/supervdsm.py", line 76, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 66, in <lambda>
    getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
AttributeError: 'ProxyCaller' object has no attribute 'setupNetworks'


Thread-18::DEBUG::2013-06-21 09:14:29,559::BindingXMLRPC::913::vds::(wrapper) client [10.70.36.27]::call getHardwareInfo with () {}
Thread-18::ERROR::2013-06-21 09:14:29,559::API::1132::vds::(getHardwareInfo) failed to retrieve hardware info
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1129, in getHardwareInfo
    hw = supervdsm.getProxy().getHardwareInfo()
  File "/usr/share/vdsm/supervdsm.py", line 76, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 66, in <lambda>
    getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
AttributeError: 'ProxyCaller' object has no attribute 'getHardwareInfo'
Thread-18::DEBUG::2013-06-21 09:14:29,560::BindingXMLRPC::920::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Failed to read hardware information', 'code': 57}}
storageRefresh::DEBUG::2013-06-21 09:14:29,771::supervdsm::179::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm
--------------

Comment 3 Sahina Bose 2013-07-11 10:19:08 UTC
This bug occurred because vdsm from the RHS ISO failed to start due to errors.
With the fix, vdsm restarts correctly and the engine is able to bootstrap the host and configure its networks.

Comment 4 RamaKasturi 2013-07-11 12:13:18 UTC
Created attachment 772200 [details]
Super vdsm log

Comment 5 Matt Mahoney 2013-07-17 17:15:20 UTC
Verified fixed in bb6.

Comment 6 Scott Haines 2013-09-23 22:25:47 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html

