+++ This bug was initially created as a clone of Bug #1597256 +++

Description of problem:
-----------------------
The 'Gluster Management' dashboard in cockpit lists a volume with more bricks than are actually present.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
cockpit-ovirt-dashboard-0.11.28.1-1

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Expand a cluster of 3 nodes with 3 more nodes using the 'expand cluster' feature in cockpit
2. Create volumes as part of the expand cluster operation
3. Refresh the cockpit 'Gluster Management' dashboard and check the configuration of the last volumes

Actual results:
---------------
One of the volumes is shown with more bricks than it actually has.

Expected results:
-----------------
The Gluster Management UI should show the exact number of bricks present.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2018-07-02 07:55:41 EDT ---

This bug is automatically being proposed for the current release of Red Hat Hyperconverged Infrastructure (RHHI) under active development, by setting the release flag 'rhhi-2.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.
Here is the brick list as seen in the gluster CLI:

# gluster volume info data

Volume Name: data
Type: Replicate
Volume ID: f7e2fa57-059f-4e39-9ef1-13cda6a54a2f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.36.79:/gluster_bricks/data/data
Brick2: 10.70.36.80:/gluster_bricks/data/data
Brick3: 10.70.36.81:/gluster_bricks/data/data
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
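The discrepancy can be checked mechanically by comparing the declared brick count in the `gluster volume info` header ("Number of Bricks: 1 x 3 = 3") against the number of "BrickN:" lines that follow, which is what the dashboard should be rendering. A minimal sketch of such a cross-check (the helper name `count_bricks` is hypothetical, not part of cockpit-ovirt):

```python
import re

def count_bricks(volume_info: str) -> dict:
    """Cross-check declared vs. listed brick counts in `gluster volume info` text."""
    # Declared total, e.g. "Number of Bricks: 1 x 3 = 3"
    m = re.search(r"Number of Bricks:.*?=\s*(\d+)", volume_info)
    declared = int(m.group(1)) if m else None
    # Lines actually listing bricks, e.g. "Brick1: 10.70.36.79:/gluster_bricks/data/data"
    listed = len(re.findall(r"^Brick\d+:", volume_info, flags=re.M))
    return {"declared": declared, "listed": listed, "match": declared == listed}

# Trimmed sample of the CLI output above
sample = """\
Volume Name: data
Type: Replicate
Number of Bricks: 1 x 3 = 3
Bricks:
Brick1: 10.70.36.79:/gluster_bricks/data/data
Brick2: 10.70.36.80:/gluster_bricks/data/data
Brick3: 10.70.36.81:/gluster_bricks/data/data
"""

result = count_bricks(sample)
```

For the CLI output above the two counts agree (3 and 3), so any extra bricks in the UI point at the dashboard's own state, not at gluster.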
Created attachment 1455949 [details] Screenshot of cockpit showing incorrect number of bricks with gluster management
As this fix is part of https://gerrit.ovirt.org/#/c/92786/ , targeting 4.2.5.
Tested with:
(1) rhvh-4.2.5.0-0.20180710
(2) gdeploy-2.0.2-27.el7rhgs.noarch
(3) ansible-2.6.1-1.el7ae.noarch

Steps performed:
(1) Expanded a cluster of 3 nodes with 3 more nodes using the 'expand cluster' feature in cockpit.
(2) Created a volume as part of the expand cluster operation.
(3) Refreshed the cockpit 'Gluster Management' dashboard; no extra bricks were found.

Hence moving the bug to verified state.
Created attachment 1459673 [details] screenshot of bricks present
This bugzilla is included in the oVirt 4.2.5 release, published on July 30th 2018.

Since the problem described in this bug report should be resolved in the oVirt 4.2.5 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.