Bug 1693144 - [Day 2] Peers are not probed during expand cluster.
Summary: [Day 2] Peers are not probed during expand cluster.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.5
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHHI-V 1.6.z Async Update
Assignee: Sahina Bose
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1693149
Blocks:
 
Reported: 2019-03-27 08:59 UTC by Mugdha Soni
Modified: 2020-11-18 04:16 UTC (History)
5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, when more nodes were added to a hyperconverged cluster using 'Expand cluster' in the Web Console, the new nodes were not added to the trusted storage pool, and could not be immediately used for storage. New nodes are now added to the trusted storage pool when 'Expand cluster' is used in the Web Console.
Clone Of:
: 1693149 (view as bug list)
Environment:
Last Closed: 2019-10-03 12:23:57 UTC
Embargoed:


Attachments
Screenshot of peers not added. (254.69 KB, image/png)
2019-03-27 08:59 UTC, Mugdha Soni
Screenshot of different cluster formed. (309.19 KB, image/png)
2019-03-27 09:01 UTC, Mugdha Soni


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:2963 0 None None None 2019-10-03 12:24:05 UTC

Description Mugdha Soni 2019-03-27 08:59:25 UTC
Created attachment 1548423 [details]
Screenshot of peers not added.

Description of problem:
------------------------
While expanding the cluster, the deployment completes successfully, but the peers are not probed: the new nodes form a separate cluster instead of being added to the existing one.

Version-Release number of selected component:
---------------------------------------------
rhvh-4.3.0.5-0.20190313
glusterfs-server-3.12.2-47.el7rhgs

gluster-ansible-repositories-1.0-1.el7rhgs.noarch
gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
gluster-ansible-features-1.0.4-5.el7rhgs.noarch
gluster-ansible-cluster-1.0-1.el7rhgs.noarch
gluster-ansible-roles-1.0.4-4.el7rhgs.noarch
gluster-ansible-infra-1.0.3-3.el7rhgs.noarch


How reproducible:
------------------
3/3

Steps to Reproduce:
----------------------
1. After a successful Gluster deployment, log in to the Cockpit UI and click on Hosted Engine.
2. Start the Expand Cluster operation.
3. The deployment completes successfully, but when the user runs "gluster peer status", the new peers are not connected.
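The check in step 3, and the manual workaround of probing the peers that Expand Cluster failed to probe, can be sketched as follows. This is a hedged sketch: the hostnames newhost1/2/3.example.com are hypothetical placeholders for the nodes added during expansion, and the commands must be run on an existing member of the trusted storage pool.

```shell
# On one of the original cluster nodes, list current peers.
# In the failure described in this bug, the newly added nodes
# do not appear here (they formed their own separate cluster).
gluster peer status

# Manual workaround (hypothetical hostnames): probe each new node
# from an existing cluster member so it joins the trusted storage pool.
gluster peer probe newhost1.example.com
gluster peer probe newhost2.example.com
gluster peer probe newhost3.example.com

# Re-check: the new nodes should now be listed as connected peers.
gluster peer status
```

Note that this workaround only applies if the new nodes have not already formed their own pool; otherwise the stray peer relationships must be detached first.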

Actual results:
-----------------
The additional machines/peers form a separate cluster.

Expected results:
-------------------
The peers used to expand the cluster should become part of the existing cluster, not form a separate cluster.

Comment 1 Mugdha Soni 2019-03-27 09:01:43 UTC
Created attachment 1548428 [details]
Screenshot of different cluster formed.

Comment 6 SATHEESARAN 2019-08-06 07:11:11 UTC
I see that the upstream patch [1] for this issue has been posted.

[1] - https://gerrit.ovirt.org/99031

Comment 7 SATHEESARAN 2019-09-04 02:20:03 UTC
Tested with cockpit-ovirt-dashboard-0.13.7

The Expand Cluster operation, performed as a Day 2 operation, added 3 new nodes to the existing 3-node cluster.

Comment 9 errata-xmlrpc 2019-10-03 12:23:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2963

