Bug 1514785 - Total brick count information is not shown accurately on the dashboard.
Summary: Total brick count information is not shown accurately on the dashboard.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: web-admin-tendrl-gluster-integration
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Nishanth Thomas
QA Contact: Rochelle
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-18 15:10 UTC by Bala Konda Reddy M
Modified: 2017-12-18 04:37 UTC
5 users

Fixed In Version: tendrl-gluster-integration-1.5.4-4.el7rhgs.noarch.rpm
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-18 04:37:04 UTC
Embargoed:


Attachments (Terms of Use)
In the bricks widget it is showing the brick count as 4 (132.52 KB, image/png)
2017-11-18 15:10 UTC, Bala Konda Reddy M
In hosts the brick count is shown as zero, but the bricks in the hosts are up and running (170.94 KB, image/png)
2017-11-18 15:20 UTC, Bala Konda Reddy M
Brick count reflected correctly on the Grafana dashboard (125.20 KB, image/png)
2017-11-22 08:51 UTC, Rochelle


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2017:3478 0 normal SHIPPED_LIVE RHGS Web Administration packages 2017-12-18 09:34:49 UTC

Description Bala Konda Reddy M 2017-11-18 15:10:50 UTC
Created attachment 1354749 [details]
In the bricks widget it is showing the brick count as 4

Description of problem:
Created a Gluster cluster and imported it successfully into the web admin interface. Created a replica 2 volume with 6 bricks. The web admin dashboard shows only 4 bricks; it should show all 6.

See the attached screenshot.

Version-Release number of selected component (if applicable):
tendrl-gluster-integration-1.5.4-2

How reproducible:
1:1

Steps to Reproduce:
1. Create a Gluster cluster and import it successfully into the web admin interface.
2. Create a replica 2 volume with 6 bricks across 3 nodes.
3. Check the brick count on the dashboard: it shows 4, but it should show 6.
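The expected count in step 3 follows from the volume layout. A minimal sketch of that arithmetic (the helper name is ours, not part of tendrl):

```python
# Hypothetical sketch of the brick-count arithmetic from the report.
# A distributed-replicate volume has (distribute subvolumes x replica count)
# bricks; the dashboard should show that product.

def expected_brick_count(distribute_count: int, replica_count: int) -> int:
    """Total bricks in a distributed-replicate volume."""
    return distribute_count * replica_count

# Replica 2 volume with 6 bricks across 3 nodes: 3 subvolumes x 2 replicas.
print(expected_brick_count(3, 2))  # 6 -- the dashboard showed 4
```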

Actual results:
The dashboard shows an inaccurate brick count (4 instead of 6).

Expected results:
Brick information should be accurate and show the exact number of bricks in the volume.


Additional info:

Comment 2 Bala Konda Reddy M 2017-11-18 15:20:10 UTC
Created attachment 1354750 [details]
In hosts the brick count is shown as zero, but the bricks in the hosts are up and running

Comment 4 Petr Penicka 2017-11-20 13:26:35 UTC
Giving pm_ack and 3.3.z+ since both qa_ack and dev_ack are already given.

Comment 6 Rochelle 2017-11-22 08:51:35 UTC
Created attachment 1357297 [details]
Brick count reflected correctly on the Grafana dashboard

Created 3 volumes with 6 bricks each, for a total of 18 bricks, which is correctly shown on the Grafana dashboard.

[root@dhcp42-110 ~]# gluster v info
 
Volume Name: volume1
Type: Distributed-Replicate
Volume ID: 857a7433-b359-4bb4-8a06-7ed43c613279
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: dhcp42-110.lab.eng.blr.redhat.com:/bricks/brick0/b1
Brick2: dhcp42-136.lab.eng.blr.redhat.com:/bricks/brick0/b2
Brick3: dhcp42-2.lab.eng.blr.redhat.com:/bricks/brick0/b3
Brick4: dhcp42-110.lab.eng.blr.redhat.com:/bricks/brick1/b4
Brick5: dhcp42-136.lab.eng.blr.redhat.com:/bricks/brick1/b5
Brick6: dhcp42-2.lab.eng.blr.redhat.com:/bricks/brick1/b6
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
transport.address-family: inet
nfs.disable: on
 
Volume Name: volume2
Type: Distributed-Replicate
Volume ID: efe05337-b08b-4d13-bac6-f0dd6fbfb458
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: dhcp42-110.lab.eng.blr.redhat.com:/bricks/brick2/b1
Brick2: dhcp42-136.lab.eng.blr.redhat.com:/bricks/brick2/b2
Brick3: dhcp42-2.lab.eng.blr.redhat.com:/bricks/brick2/b3
Brick4: dhcp42-110.lab.eng.blr.redhat.com:/bricks/brick2/b4
Brick5: dhcp42-136.lab.eng.blr.redhat.com:/bricks/brick2/b5
Brick6: dhcp42-2.lab.eng.blr.redhat.com:/bricks/brick2/b6
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
transport.address-family: inet
nfs.disable: on
 
Volume Name: volume3
Type: Distributed-Replicate
Volume ID: 0139d0c8-8b5f-4748-93d1-319517e50f3b
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: dhcp42-110.lab.eng.blr.redhat.com:/bricks/brick3/b1
Brick2: dhcp42-136.lab.eng.blr.redhat.com:/bricks/brick3/b2
Brick3: dhcp42-2.lab.eng.blr.redhat.com:/bricks/brick3/b3
Brick4: dhcp42-110.lab.eng.blr.redhat.com:/bricks/brick4/b4
Brick5: dhcp42-136.lab.eng.blr.redhat.com:/bricks/brick4/b5
Brick6: dhcp42-2.lab.eng.blr.redhat.com:/bricks/brick4/b6
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
transport.address-family: inet
nfs.disable: on
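The total shown on Grafana can be checked against the `gluster v info` output above. A small sketch of that sum (volume names taken from the output):

```python
# Each volume above reports "Number of Bricks: 2 x 3 = 6"
# (2 distribute subvolumes x 3 replicas).
volumes = {
    "volume1": (2, 3),
    "volume2": (2, 3),
    "volume3": (2, 3),
}

total = sum(distribute * replica for distribute, replica in volumes.values())
print(total)  # 18, matching the Grafana dashboard
```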


Moving this bug to verified.

Comment 8 errata-xmlrpc 2017-12-18 04:37:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3478

