Bug 1514785

Summary: Total brick count information is not shown accurately on the dashboard.
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Bala Konda Reddy M <bmekala>
Component: web-admin-tendrl-gluster-integration
Assignee: Nishanth Thomas <nthomas>
Status: CLOSED ERRATA
QA Contact: Rochelle <rallan>
Severity: unspecified
Priority: unspecified
Version: rhgs-3.3
CC: ppenicka, rallan, rhinduja, rhs-bugs, sankarshan
Hardware: Unspecified
OS: Unspecified
Fixed In Version: tendrl-gluster-integration-1.5.4-4.el7rhgs.noarch.rpm
Last Closed: 2017-12-18 04:37:04 UTC
Type: Bug
Attachments:
- In the bricks widget it is showing the brick count as 4
- In hosts the brick count is shown as zero, but the bricks in the hosts are up and running
- Brick count reflected correctly on the Grafana dashboard

Description Bala Konda Reddy M 2017-11-18 15:10:50 UTC
Created attachment 1354749 [details]
In the bricks widget it is showing the brick count as 4

Description of problem:
Created a gluster cluster and imported it successfully into webadmin. Created a replica 2 volume with 6 bricks. The webadmin dashboard shows only 4 bricks; it should show all 6.

See the attachment for details.

Version-Release number of selected component (if applicable):
tendrl-gluster-integration-1.5.4-2

How reproducible:
1:1

Steps to Reproduce:
1. Create a gluster cluster and import it successfully into webadmin.
2. Create a replica 2 volume with 6 bricks across 3 nodes (a possible command sequence is sketched after these steps).
3. Check the brick count on the dashboard: it shows 4, but it should show 6.
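
For reference, a minimal sketch of the command sequence for step 2, assuming three peers named node1, node2 and node3 and hypothetical brick paths (the hostnames, paths and volume name in the actual setup will differ):

# Create a 6-brick replica 2 volume; consecutive bricks form replica pairs,
# so each pair lands on two different nodes. gluster may warn that replica 2
# volumes are prone to split-brain.
gluster volume create testvol replica 2 \
    node1:/bricks/brick0/b1 node2:/bricks/brick0/b2 \
    node3:/bricks/brick0/b3 node1:/bricks/brick1/b4 \
    node2:/bricks/brick1/b5 node3:/bricks/brick1/b6
gluster volume start testvol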

Actual results:
The brick count shown on the dashboard does not match the actual number of bricks.

Expected results:
The dashboard should show the exact number of bricks in the volume.


Additional info:

Comment 2 Bala Konda Reddy M 2017-11-18 15:20:10 UTC
Created attachment 1354750 [details]
In hosts the brick count is shown as zero, but the bricks in the hosts are up and running

Comment 4 Petr Penicka 2017-11-20 13:26:35 UTC
Giving pm_ack and 3.3.z+ since both qa_ack and dev_ack are already given.

Comment 6 Rochelle 2017-11-22 08:51:35 UTC
Created attachment 1357297 [details]
Brick count reflected correctly on the Grafana dashboard

Created 3 volumes with 6 bricks each, for a total of 18 bricks, which is shown correctly on the Grafana dashboard.

[root@dhcp42-110 ~]# gluster v info
 
Volume Name: volume1
Type: Distributed-Replicate
Volume ID: 857a7433-b359-4bb4-8a06-7ed43c613279
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: dhcp42-110.lab.eng.blr.redhat.com:/bricks/brick0/b1
Brick2: dhcp42-136.lab.eng.blr.redhat.com:/bricks/brick0/b2
Brick3: dhcp42-2.lab.eng.blr.redhat.com:/bricks/brick0/b3
Brick4: dhcp42-110.lab.eng.blr.redhat.com:/bricks/brick1/b4
Brick5: dhcp42-136.lab.eng.blr.redhat.com:/bricks/brick1/b5
Brick6: dhcp42-2.lab.eng.blr.redhat.com:/bricks/brick1/b6
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
transport.address-family: inet
nfs.disable: on
 
Volume Name: volume2
Type: Distributed-Replicate
Volume ID: efe05337-b08b-4d13-bac6-f0dd6fbfb458
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: dhcp42-110.lab.eng.blr.redhat.com:/bricks/brick2/b1
Brick2: dhcp42-136.lab.eng.blr.redhat.com:/bricks/brick2/b2
Brick3: dhcp42-2.lab.eng.blr.redhat.com:/bricks/brick2/b3
Brick4: dhcp42-110.lab.eng.blr.redhat.com:/bricks/brick2/b4
Brick5: dhcp42-136.lab.eng.blr.redhat.com:/bricks/brick2/b5
Brick6: dhcp42-2.lab.eng.blr.redhat.com:/bricks/brick2/b6
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
transport.address-family: inet
nfs.disable: on
 
Volume Name: volume3
Type: Distributed-Replicate
Volume ID: 0139d0c8-8b5f-4748-93d1-319517e50f3b
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: dhcp42-110.lab.eng.blr.redhat.com:/bricks/brick3/b1
Brick2: dhcp42-136.lab.eng.blr.redhat.com:/bricks/brick3/b2
Brick3: dhcp42-2.lab.eng.blr.redhat.com:/bricks/brick3/b3
Brick4: dhcp42-110.lab.eng.blr.redhat.com:/bricks/brick4/b4
Brick5: dhcp42-136.lab.eng.blr.redhat.com:/bricks/brick4/b5
Brick6: dhcp42-2.lab.eng.blr.redhat.com:/bricks/brick4/b6
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
transport.address-family: inet
nfs.disable: on
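
A quick way to cross-check the total brick count from the CLI against the dashboard, assuming the standard gluster volume info output format shown above:

# count the Brick1:, Brick2:, ... lines across all volumes; prints 18 here
gluster volume info | grep -c '^Brick[0-9]'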


Moving this bug to verified.

Comment 8 errata-xmlrpc 2017-12-18 04:37:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3478