Bug 1519218 - After performing volume stop, Tendrl web GUI shows mismatched status for a few bricks in "brick status" layout
Summary: After performing volume stop, Tendrl web GUI shows mismatched status for a few bricks in "brick status" layout
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: web-admin-tendrl-gluster-integration
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Nishanth Thomas
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On:
Blocks: 1503134
 
Reported: 2017-11-30 12:22 UTC by Manisha Saini
Modified: 2020-03-02 07:20 UTC
CC: 4 users

Fixed In Version: tendrl-gluster-integration-1.6.1-1.el7rhgs, tendrl-api-1.6.1-1.el7rhgs.noarch.rpm, tendrl-commons-1.6.1-1.el7rhgs.noarch.rpm, tendrl-monitoring-integration-1.6.1-1.el7rhgs.noarch.rpm, tendrl-node-agent-1.6.1-1.el7, tendrl-ui-1.6.1-1.el7rhgs.noarch.rpm
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-04 07:00:31 UTC
Embargoed:


Attachments
After 30 mins of performing volume stop, Screenshot2 (151.89 KB, image/png) - 2017-11-30 12:25 UTC, Manisha Saini
volume dashboard (131.35 KB, image/png) - 2018-01-31 14:55 UTC, Lubos Trilety


Links
Red Hat Product Errata RHSA-2018:2616 (last updated 2018-09-04 07:01:24 UTC)

Description Manisha Saini 2017-11-30 12:22:13 UTC
Description of problem:

After performing volume stop, the Tendrl web GUI shows mismatched status for a few bricks in the "brick status" layout.

Before the volume stop, add-brick, remove-brick, kill-brick, and replace-brick scenarios had been performed.

Version-Release number of selected component (if applicable):

# rpm -qa | grep tendrl
tendrl-collectd-selinux-1.5.4-1.el7rhgs.noarch
tendrl-gluster-integration-1.5.4-6.el7rhgs.noarch
tendrl-node-agent-1.5.4-8.el7rhgs.noarch
tendrl-commons-1.5.4-5.el7rhgs.noarch
tendrl-selinux-1.5.4-1.el7rhgs.noarch

How reproducible:


Steps to Reproduce:

1. Create a 4 x 3 Distributed-Replicate volume.
2. Perform some remove-brick, add-brick, and kill-brick operations followed by replace-brick scenarios, and wait for the GUI to reflect the correct status after these operations (see the command sketch after this list).
3. Stop the volume.
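
An illustrative sketch of the commands behind these steps, assuming placeholder hostnames (host1-host3) and brick paths; the exact bricks chosen for the kill and replace operations were not recorded in this report:

# gluster volume create ManiVol replica 3 host{1,2,3}:/gluster/brick1/ms host{1,2,3}:/gluster/brick2/ms host{1,2,3}:/gluster/brick3/ms host{1,2,3}:/gluster/brick4/ms
# gluster volume start ManiVol
# gluster volume add-brick ManiVol host{1,2,3}:/gluster/brick5/new
# gluster volume remove-brick ManiVol host{1,2,3}:/gluster/brick5/new force
# gluster volume status ManiVol
# kill <pid-of-one-brick-from-the-status-output>
# gluster volume replace-brick ManiVol host1:/gluster/brick1/ms host1:/gluster/brick6/ms_new commit force
# gluster volume stop ManiVol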

Actual results:

After performing the volume stop, the Tendrl web admin UI shows mismatched brick statuses: for a few bricks it shows status "10" in red, while for a few others it shows "-". All peer nodes in the cluster are up and connected.

# gluster v status ManiVol
Volume ManiVol is not started
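
(An illustrative cross-check, not part of the original report: on any storage node you can confirm that no brick processes are left for the stopped volume, independently of what the web UI shows.)

# pgrep -af glusterfsd | grep ManiVol

No output here means no glusterfsd brick process for ManiVol is still running on that node.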


Expected results:

It should show a consistent status for all bricks when the volume is stopped.


Additional info:


# gluster v info ManiVol
 
Volume Name: ManiVol
Type: Distributed-Replicate
Volume ID: d71b616d-207b-4db4-b78d-42ce943f5425
Status: Stopped
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: dhcp42-119.lab.eng.blr.redhat.com:/gluster/brick7/ms1
Brick2: dhcp42-127.lab.eng.blr.redhat.com:/gluster/brick7/ms1
Brick3: dhcp42-129.lab.eng.blr.redhat.com:/gluster/brick7/msn1
Brick4: dhcp42-129.lab.eng.blr.redhat.com:/gluster/brick8/ms2
Brick5: dhcp42-119.lab.eng.blr.redhat.com:/gluster/brick8/ms2
Brick6: dhcp42-125.lab.eng.blr.redhat.com:/gluster/brick7/msn1
Brick7: dhcp42-125.lab.eng.blr.redhat.com:/gluster/brick8/ms2
Brick8: dhcp42-129.lab.eng.blr.redhat.com:/gluster/brick10/new
Brick9: dhcp42-127.lab.eng.blr.redhat.com:/gluster/brick8/msn2
Brick10: dhcp42-127.lab.eng.blr.redhat.com:/gluster/brick9/ms3
Brick11: dhcp42-125.lab.eng.blr.redhat.com:/gluster/brick9/ms3
Brick12: dhcp42-119.lab.eng.blr.redhat.com:/gluster/brick9/msn3
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
transport.address-family: inet
nfs.disable: off
nfs-ganesha: disable
cluster.enable-shared-storage: disable

Comment 3 Manisha Saini 2017-11-30 12:25:46 UTC
Created attachment 1360896 [details]
After 30 mins of performing volume stop, Screenshot2

Comment 4 Nishanth Thomas 2017-12-04 07:33:57 UTC
@Manisha, the screenshot attached to this bug (https://bugzilla.redhat.com/attachment.cgi?id=1360896) looks perfectly all right to me. The volume and bricks are marked as down.
Can you please confirm?

Comment 6 Lubos Trilety 2018-01-31 14:55:56 UTC
Created attachment 1389022 [details]
volume dashboard

Comment 7 Lubos Trilety 2018-01-31 14:56:49 UTC
Checked with:
tendrl-commons-1.5.4-9.el7rhgs.noarch
tendrl-api-1.5.4-4.el7rhgs.noarch
tendrl-monitoring-integration-1.5.4-14.el7rhgs.noarch
tendrl-ansible-1.5.4-7.el7rhgs.noarch
tendrl-node-agent-1.5.4-16.el7rhgs.noarch
tendrl-ui-1.5.4-6.el7rhgs.noarch
tendrl-grafana-plugins-1.5.4-14.el7rhgs.noarch
tendrl-notifier-1.5.4-6.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-api-httpd-1.5.4-4.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-gluster-integration-1.5.4-14.el7rhgs.noarch
tendrl-collectd-selinux-1.5.4-2.el7rhgs.noarch

I removed, added, killed, and replaced some bricks, then stopped the volume. There were no bricks with a status other than 10 on the volume dashboard. See attachment 1389022 [details].

Comment 8 Filip Balák 2018-05-14 15:55:17 UTC
With the given reproducer: half of the bricks changed status in Grafana from `0` to `8` after one minute; the other half changed from `0` to `8` after another minute. There was no `-` status. All bricks are correctly marked as down.
All bricks are correctly marked as up when the volume starts.
--> VERIFIED
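
A quick CLI confirmation of the same behaviour (illustrative, assuming the volume name ManiVol from the original report; the verification above was done through the Grafana dashboards):

# gluster volume start ManiVol
# gluster volume status ManiVol

After the start, the Online column should show Y for every brick, matching the "up" status reported in the UI.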

Tested with:
tendrl-ansible-1.6.3-3.el7rhgs.noarch
tendrl-api-1.6.3-3.el7rhgs.noarch
tendrl-api-httpd-1.6.3-3.el7rhgs.noarch
tendrl-commons-1.6.3-4.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-2.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-gluster-integration-1.6.3-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-2.el7rhgs.noarch
tendrl-node-agent-1.6.3-4.el7rhgs.noarch
tendrl-notifier-1.6.3-2.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-1.el7rhgs.noarch
glusterfs-3.12.2-9.el7rhgs.x86_64

Comment 10 errata-xmlrpc 2018-09-04 07:00:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616

