Created attachment 1548431 [details]
Screenshot of peers not added.

+++ This bug was initially created as a clone of Bug #1693144 +++

Description of problem:
------------------------
While expanding the cluster, the deployment succeeds but the peers are not probed, so the new nodes form a separate cluster instead of joining the existing one.

Version-Release number of selected component:
---------------------------------------------
rhvh-4.3.0.5-0.20190313
glusterfs-server-3.12.2-47.el7rhgs
gluster-ansible-repositories-1.0-1.el7rhgs.noarch
gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
gluster-ansible-features-1.0.4-5.el7rhgs.noarch
gluster-ansible-cluster-1.0-1.el7rhgs.noarch
gluster-ansible-roles-1.0.4-4.el7rhgs.noarch
gluster-ansible-infra-1.0.3-3.el7rhgs.noarch

How reproducible:
------------------
3/3

Steps to Reproduce:
----------------------
1. After a successful gluster deployment, log in to the Cockpit UI and click on Hosted Engine.
2. Start the Expand Cluster operation.
3. The deployment completes successfully, but running "gluster peer status" shows the peers are not connected.

Actual results:
-----------------
The additional machines/peers form a separate cluster.

Expected results:
-------------------
The peers used to expand the cluster should become part of the existing cluster, not form a separate one.

--- Additional comment from Mugdha Soni on 2019-03-27 09:01 UTC ---
Created attachment 1548432 [details] Screenshot of different cluster formed.
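Since the failure only shows up in "gluster peer status" output, a scripted check can flag it. The sketch below (hostnames and UUIDs are illustrative, not taken from this bug) parses sample status output the way one would parse the real command's output on a newly added node; on a correctly expanded cluster, every existing node should be listed as connected:

```shell
# Illustrative sample of `gluster peer status` output as seen on one of the
# newly added nodes after a correct expansion (hostnames/UUIDs are made up).
sample_output='Number of Peers: 2

Hostname: host2.example.com
Uuid: 6f9d4c2e-0000-0000-0000-000000000001
State: Peer in Cluster (Connected)

Hostname: host3.example.com
Uuid: 8a1b2c3d-0000-0000-0000-000000000002
State: Peer in Cluster (Connected)'

# Count peers reported as connected. In the buggy case described here this
# count would not include the original cluster's nodes, because the new
# machines formed a separate cluster of their own.
connected=$(printf '%s\n' "$sample_output" | grep -c 'Peer in Cluster (Connected)')
echo "connected peers: $connected"
```

Against a live node one would pipe the real command (`gluster peer status | grep -c ...`) instead of the sample text, and compare the count with the expected number of existing cluster members.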
Tested with cockpit-ovirt-dashboard-0.13.7. The Expand Cluster day-2 operation added 3 nodes to the existing 3-node cluster.
This bugzilla is included in the oVirt 4.3.6 release, published on September 26th 2019. Since the problem described in this bug report should be resolved in oVirt 4.3.6, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.