Bug 1565510 - Need to setup network manually for a cluster imported into RHGS-Console
Summary: Need to setup network manually for a cluster imported into RHGS-Console
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: vdsm
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Sahina Bose
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-04-10 07:54 UTC by Sweta Anandpara
Modified: 2019-04-22 06:29 UTC (History)
CC List: 6 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
Cause: The ovirtmgmt network is not assigned on the host's interface.
Consequence: Host addition fails with a "connectivity check failed" error.
Workaround (if any): Open "Setup Host Networks" under the Network sub-tab of the Host, assign the ovirtmgmt network to the host interface, and save.
Result: Host connectivity is established and the host is online.
Clone Of:
Environment:
Last Closed: 2019-04-22 06:29:46 UTC
Embargoed:



Description Sweta Anandpara 2018-04-10 07:54:01 UTC
Description of problem:
========================
Had a 6-node RHGS 3.4 (glusterfs-3.12.2-7) cluster that was imported into RHGS-Console. All the nodes went to a non-operational state, and a traceback was seen in the vdsm logs.

From the UI, after setting up the network manually and changing the boot protocol to 'dhcp', the node came online successfully.
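For reference, the manual workaround amounts to putting the management bridge on DHCP. A minimal sketch of the resulting configuration, assuming the bridge is named ovirtmgmt and the host uses ifcfg-based network scripts (the supported path is the "Setup Host Networks" dialog, which writes this configuration for you):

```ini
# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt  (illustrative sketch only)
DEVICE=ovirtmgmt
TYPE=Bridge
BOOTPROTO=dhcp      # the UI workaround changes this from 'none' to 'dhcp'
ONBOOT=yes
```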

 Traceback (most recent call last):
   File "/usr/share/vdsm/API.py", line 1575, in setupNetworks
     supervdsm.getProxy().setupNetworks(networks, bondings, options)
   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in __call__
     return callMethod()
   File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 51, in <lambda>
     **kwargs)
   File "<string>", line 2, in setupNetworks
   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
     raise convert_to_error(kind, result)
 ConfigNetworkError: (10, 'connectivity check failed')
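When triaging this on an affected host, the signature to search for in the vdsm logs is the ConfigNetworkError line above. A self-contained sketch, with a temporary file standing in for the real log (normally /var/log/vdsm/vdsm.log):

```shell
# Write the error signature to a stand-in log file, then count matching
# lines, exactly as one would grep the real /var/log/vdsm/vdsm.log.
log=$(mktemp)
echo "ConfigNetworkError: (10, 'connectivity check failed')" > "$log"
matches=$(grep -c "connectivity check failed" "$log")
echo "$matches"   # prints 1
rm -f "$log"
```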



Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.12.2-7.el7rhgs.x86_64
vdsm-4.19.43-2.3.el7rhgs.x86_64


How reproducible:
=================
Always


Steps to Reproduce:
====================
1. Import a RHGS 3.4 node into RHGS-C
2. Wait for the installation to complete, and the node to show up green

Actual results:
===============
Step 2 fails and shows the node as non-operational. The boot protocol defaults to 'None'.


Expected results:
==================
The node should show up as successfully installed, and the user should be able to manage it.


Additional info:
================
Either we fix this, or we update our documentation with the additional step(s) customers need once they upgrade their setup to RHGS 3.4.

Comment 4 Sahina Bose 2018-04-17 09:08:51 UTC
The vdsm rebase has introduced changes in how networking is handled.
Marking this as medium priority since a workaround is available. We will need to investigate whether the same behaviour is seen when adding a node to the RHV-M console. Gobinda, can you check this?

[Not yet marking it for rhgs 3.4]

Comment 5 Atin Mukherjee 2018-04-26 14:32:46 UTC
Gobinda - Request for an update here.

Comment 6 Sahina Bose 2018-04-26 14:54:49 UTC
(In reply to Atin Mukherjee from comment #5)
> Gobinda - Request for an update here.

I was looking into this but forgot to change the needinfo from Gobinda.

I've tested that RHGS 3.4 nodes can be added to RHV-M successfully without any manual steps. For new RHGS deployments, RHGS-C is not likely to be used (as there is a new Web Administration console), but RHV-M will be used for integrated deployments. Since this flow works, I'm taking this bug out of the 3.4 target.

Comment 8 Yaniv Kaul 2019-04-17 07:32:14 UTC
Can we close this?

Comment 9 Sahina Bose 2019-04-22 06:29:46 UTC
RHGS-C is no longer under active maintenance. Customers are advised to use the RHGS Web Administration Console to manage gluster deployments. Closing this bug.
