Description of problem:
A 6-node RHGS 3.4 (glusterfs-3.12.2-7) cluster was imported into RHGS-Console. All the nodes went into a non-operational state, and a traceback was seen in the vdsm logs.
From the UI, after setting up the network manually and changing the boot protocol to 'dhcp', the node came online successfully.
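For reference, the manual workaround amounts to switching the management interface from a 'none' boot protocol to DHCP. In ifcfg terms it would look roughly like the sketch below (the path and device name are examples for illustration, not taken from the reported setup). The traceback from the vdsm logs follows.

# /etc/sysconfig/network-scripts/ifcfg-eth0  (example device)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp    # was defaulting to 'none' after the import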
Traceback (most recent call last):
  File "/usr/share/vdsm/API.py", line 1575, in setupNetworks
    supervdsm.getProxy().setupNetworks(networks, bondings, options)
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in __call__
  File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 51, in <lambda>
  File "<string>", line 2, in setupNetworks
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
ConfigNetworkError: (10, 'connectivity check failed')
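For context on the error itself: when the engine requests a connectivity check as part of setupNetworks, vdsm applies the change and then waits for confirmation that the management connection still works; if that confirmation does not arrive within the timeout, the change is rolled back and the error above is raised. Below is a minimal illustrative sketch of that rollback-on-timeout pattern (the names, timeout value, and callables are assumptions for illustration, not vdsm's actual implementation):

import time

CONNECTIVITY_TIMEOUT = 120  # seconds; illustrative value, not vdsm's default

def setup_networks_with_check(apply_change, roll_back, engine_reachable):
    # Apply the requested network change, then wait for proof that the
    # management connection still works; otherwise undo the change.
    # All three arguments are caller-supplied callables (assumed names).
    apply_change()
    deadline = time.time() + CONNECTIVITY_TIMEOUT
    while time.time() < deadline:
        if engine_reachable():
            return  # connectivity confirmed; keep the new configuration
        time.sleep(1)
    roll_back()
    raise RuntimeError("(10, 'connectivity check failed')")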
Version-Release number of selected component (if applicable):
glusterfs-3.12.2-7
Steps to Reproduce:
1. Import an RHGS 3.4 node into RHGS-Console
2. Wait for the installation to complete and for the node to show up green
Actual results:
Step 2 fails; the node shows as non-operational. The boot protocol is defaulted to 'None'.
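The defaulted boot protocol can be confirmed on the node before applying the workaround, for example (standard RHEL network-scripts layout assumed):

grep BOOTPROTO /etc/sysconfig/network-scripts/ifcfg-*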
Expected results:
The node should show up as successfully installed, and the user should be able to manage it.
Either we fix this, or we update our documentation with the additional step(s) users need once they update their setup to RHGS 3.4.
The vdsm rebase has introduced changes to how networking is handled.
Marking this as medium priority since a workaround is available. We will need to investigate whether the same behaviour is seen when adding a node via the RHV-M console. Gobinda, can you check this?
[Not yet marking it for RHGS 3.4]
Gobinda - Request for an update here.
(In reply to Atin Mukherjee from comment #5)
> Gobinda - Request for an update here.
I was looking into this but forgot to change the needinfo from Gobinda.
I've tested that RHGS 3.4 nodes can be added to RHV-M successfully without any manual steps. For new RHGS deployments, RHGS-C is not likely to be used (as there's a new Web Administration console), but RHV-M will be used for integrated deployments. Since this flow works, I'm taking this bug out of the 3.4 target.
Can we close this?
RHGS-C is no longer under active maintenance. Customers are advised to use the RHGS Web Administration console to manage gluster deployments. Closing this bug.