Bug 1693144

Summary: [Day 2] Peers are not probed during expand cluster.
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Mugdha Soni <musoni>
Component: rhhi
Assignee: Sahina Bose <sabose>
Status: CLOSED ERRATA
QA Contact: SATHEESARAN <sasundar>
Severity: high
Docs Contact:
Priority: high
Version: rhhiv-1.5
CC: godas, pasik, pprakash, rhs-bugs, sasundar
Target Milestone: ---
Keywords: ZStream
Target Release: RHHI-V 1.6.z Async Update
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, when more nodes were added to a hyperconverged cluster using 'Expand cluster' in the Web Console, the new nodes were not added to the trusted storage pool, and could not be immediately used for storage. New nodes are now added to the trusted storage pool when 'Expand cluster' is used in the Web Console.
Story Points: ---
Clone Of:
Clones: 1693149 (view as bug list)
Environment:
Last Closed: 2019-10-03 12:23:57 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1693149    
Bug Blocks:    
Attachments:
Description                              Flags
Screenshot of peers not added.           none
Screenshot of different cluster formed.  none

Description Mugdha Soni 2019-03-27 08:59:25 UTC
Created attachment 1548423 [details]
Screenshot of peers not added.

Description of problem:
------------------------
While expanding the cluster, the deployment completes successfully, but the peers are not probed: the new nodes form a separate cluster instead of being added to the existing one.

Version-Release number of selected component:
---------------------------------------------
rhvh-4.3.0.5-0.20190313
glusterfs-server-3.12.2-47.el7rhgs

gluster-ansible-repositories-1.0-1.el7rhgs.noarch
gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
gluster-ansible-features-1.0.4-5.el7rhgs.noarch
gluster-ansible-cluster-1.0-1.el7rhgs.noarch
gluster-ansible-roles-1.0.4-4.el7rhgs.noarch
gluster-ansible-infra-1.0.3-3.el7rhgs.noarch


How reproducible:
------------------
3/3

Steps to Reproduce:
----------------------
1. After the successful gluster deployment, log in to the Cockpit UI and click on Hosted Engine.
2. Start expanding the cluster.
3. The deployment succeeds, but when the user runs "gluster peer status", the peers are not connected (see the illustrative session below).
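
For illustration only (the host names and UUID placeholders below are hypothetical, not taken from the attached screenshots), the symptom on one of the newly added nodes looks like a freshly formed pool containing only the other new nodes, with none of the original cluster members listed:

    # On one of the newly added nodes:
    # gluster peer status
    Number of Peers: 2

    Hostname: new-node2.example.com
    Uuid: <uuid of new-node2>
    State: Peer in Cluster (Connected)

    Hostname: new-node3.example.com
    Uuid: <uuid of new-node3>
    State: Peer in Cluster (Connected)

None of the three original nodes appear in this output, which is what "a different cluster is formed" refers to.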

Actual results:
-----------------
The additional machines/peers form a separate cluster.

Expected results:
-------------------
The peers used to expand the cluster should become part of the existing
cluster and should not form a separate cluster.
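
For reference, the behavior the expand flow is expected to automate is what an administrator would otherwise do by hand: probe each new node from a node that is already a member of the trusted storage pool. A minimal sketch, with a hypothetical host name:

    # On a node that is already in the trusted storage pool:
    # gluster peer probe new-node1.example.com
    peer probe: success.

    # Confirm pool membership afterwards:
    # gluster pool list

Running the probe from a node inside the existing pool is what joins the new nodes to that pool; probing only among the new nodes produces exactly the separate cluster seen here.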

Comment 1 Mugdha Soni 2019-03-27 09:01:43 UTC
Created attachment 1548428 [details]
Screenshot of different cluster formed.

Comment 6 SATHEESARAN 2019-08-06 07:11:11 UTC
I see that the upstream patch [1] for this issue has been posted.

[1] - https://gerrit.ovirt.org/99031

Comment 7 SATHEESARAN 2019-09-04 02:20:03 UTC
Tested with cockpit-ovirt-dashboard-0.13.7

The Expand cluster operation (a Day 2 operation) added 3 nodes
to the existing cluster of 3 nodes.
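
As a sanity check (illustrative, not quoted from the verification run), a successful expansion of a 3-node cluster by 3 nodes should leave every node reporting the other five as connected peers:

    # On any node after the expansion; a 6-node pool is assumed:
    # gluster peer status
    Number of Peers: 5

    # Every listed peer should report:
    # State: Peer in Cluster (Connected)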

Comment 9 errata-xmlrpc 2019-10-03 12:23:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2963