Bug 1519218 - After performing volume stop, Tendrl web GUI shows mismatched status for a few bricks in "brick status" layout [NEEDINFO]
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: web-admin-tendrl-gluster-integration
Version: 3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.0
Assigned To: Nishanth Thomas
QA Contact: Filip Balák
Depends On:
Blocks: 1503134
Reported: 2017-11-30 07:22 EST by Manisha Saini
Modified: 2018-09-04 03:01 EDT
CC: 3 users

See Also:
Fixed In Version: tendrl-gluster-integration-1.6.1-1.el7rhgs, tendrl-api-1.6.1-1.el7rhgs.noarch.rpm, tendrl-commons-1.6.1-1.el7rhgs.noarch.rpm, tendrl-monitoring-integration-1.6.1-1.el7rhgs.noarch.rpm, tendrl-node-agent-1.6.1-1.el7, tendrl-ui-1.6.1-1.el7rhgs.noarch.rpm
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-09-04 03:00:31 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
nthomas: needinfo? (msaini)


Attachments
After 30 mins of performing Volume stop Screenshot2 (151.89 KB, image/png)
2017-11-30 07:25 EST, Manisha Saini
volume dashboard (131.35 KB, image/png)
2018-01-31 09:55 EST, Lubos Trilety


External Trackers
Red Hat Product Errata RHSA-2018:2616 (Priority: None, Status: None, Summary: None, Last Updated: 2018-09-04 03:01 EDT)

Description Manisha Saini 2017-11-30 07:22:13 EST
Description of problem:

After performing volume stop, Tendrl web GUI shows mismatched status for a few bricks in the "brick status" layout.

Before the volume stop, add-brick, remove-brick, kill-brick, and replace-brick scenarios were performed.

Version-Release number of selected component (if applicable):

# rpm -qa | grep tendrl
tendrl-collectd-selinux-1.5.4-1.el7rhgs.noarch
tendrl-gluster-integration-1.5.4-6.el7rhgs.noarch
tendrl-node-agent-1.5.4-8.el7rhgs.noarch
tendrl-commons-1.5.4-5.el7rhgs.noarch
tendrl-selinux-1.5.4-1.el7rhgs.noarch

How reproducible:


Steps to Reproduce:

1. Create a 4x3 Distributed-Replicate volume.
2. Perform some remove-brick, add-brick, and kill-brick operations followed by replace-brick scenarios, and wait for the GUI to reflect the correct status after these operations.
3. Perform volume stop (see the command sketch below).
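
A minimal command-line sketch of the reproducer, assuming the standard Gluster CLI and the host names and brick paths from the `gluster v info` output in the additional info below; the bricks picked for each operation and the placeholders in angle brackets are illustrative only:

Create and start the 4 x 3 distributed-replicate volume (12 bricks):
# gluster volume create ManiVol replica 3 \
      dhcp42-119.lab.eng.blr.redhat.com:/gluster/brick7/ms1 \
      dhcp42-127.lab.eng.blr.redhat.com:/gluster/brick7/ms1 \
      dhcp42-129.lab.eng.blr.redhat.com:/gluster/brick7/msn1 \
      <nine more bricks to complete the 4 x 3 layout>
# gluster volume start ManiVol

Exercise the brick operations, waiting for the GUI to settle after each one:
# gluster volume add-brick ManiVol replica 3 <three new bricks>
# gluster volume remove-brick ManiVol replica 3 <three bricks> start
# gluster volume remove-brick ManiVol replica 3 <three bricks> commit
# kill -9 <PID of one brick process, taken from 'gluster v status ManiVol'>
# gluster volume replace-brick ManiVol <old brick> <new brick> commit force

Stop the volume and watch the "brick status" panel in the Tendrl UI:
# gluster volume stop ManiVol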

Actual results:

After performing volume stop, the Tendrl web admin UI shows mismatched brick statuses. For a few bricks it shows status "10" in red, while for a few other bricks it shows "-". All peer nodes in the cluster are up and connected.

# gluster v status ManiVol
Volume ManiVol is not started


Expected results:

It should show a consistent status for all the bricks when the volume is stopped.


Additional info:


# gluster v info ManiVol
 
Volume Name: ManiVol
Type: Distributed-Replicate
Volume ID: d71b616d-207b-4db4-b78d-42ce943f5425
Status: Stopped
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: dhcp42-119.lab.eng.blr.redhat.com:/gluster/brick7/ms1
Brick2: dhcp42-127.lab.eng.blr.redhat.com:/gluster/brick7/ms1
Brick3: dhcp42-129.lab.eng.blr.redhat.com:/gluster/brick7/msn1
Brick4: dhcp42-129.lab.eng.blr.redhat.com:/gluster/brick8/ms2
Brick5: dhcp42-119.lab.eng.blr.redhat.com:/gluster/brick8/ms2
Brick6: dhcp42-125.lab.eng.blr.redhat.com:/gluster/brick7/msn1
Brick7: dhcp42-125.lab.eng.blr.redhat.com:/gluster/brick8/ms2
Brick8: dhcp42-129.lab.eng.blr.redhat.com:/gluster/brick10/new
Brick9: dhcp42-127.lab.eng.blr.redhat.com:/gluster/brick8/msn2
Brick10: dhcp42-127.lab.eng.blr.redhat.com:/gluster/brick9/ms3
Brick11: dhcp42-125.lab.eng.blr.redhat.com:/gluster/brick9/ms3
Brick12: dhcp42-119.lab.eng.blr.redhat.com:/gluster/brick9/msn3
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
transport.address-family: inet
nfs.disable: off
nfs-ganesha: disable
cluster.enable-shared-storage: disable
Comment 3 Manisha Saini 2017-11-30 07:25 EST
Created attachment 1360896 [details]
After 30 mins of performing Volume stop Screenshot2
Comment 4 Nishanth Thomas 2017-12-04 02:33:57 EST
@Manisha, the screenshot attached to this bug (https://bugzilla.redhat.com/attachment.cgi?id=1360896) looks perfectly alright to me. Volume and bricks are marked as down.
Can you please confirm?
Comment 6 Lubos Trilety 2018-01-31 09:55 EST
Created attachment 1389022 [details]
volume dashboard
Comment 7 Lubos Trilety 2018-01-31 09:56:49 EST
Checked with:
tendrl-commons-1.5.4-9.el7rhgs.noarch
tendrl-api-1.5.4-4.el7rhgs.noarch
tendrl-monitoring-integration-1.5.4-14.el7rhgs.noarch
tendrl-ansible-1.5.4-7.el7rhgs.noarch
tendrl-node-agent-1.5.4-16.el7rhgs.noarch
tendrl-ui-1.5.4-6.el7rhgs.noarch
tendrl-grafana-plugins-1.5.4-14.el7rhgs.noarch
tendrl-notifier-1.5.4-6.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-api-httpd-1.5.4-4.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-gluster-integration-1.5.4-14.el7rhgs.noarch
tendrl-collectd-selinux-1.5.4-2.el7rhgs.noarch

I removed, added, killed, and replaced some bricks, then stopped the volume. There were no bricks with a status other than 10 on the volume dashboard. See attachment 1389022 [details]
Comment 8 Filip Balák 2018-05-14 11:55:17 EDT
With the given reproducer: half of the bricks changed status in Grafana from `0` to `8` after one minute; the other half changed from `0` to `8` after another minute. There was no `-` status. All bricks are correctly marked as down.
All bricks are correctly marked as up when the volume starts.
--> VERIFIED

Tested with:
tendrl-ansible-1.6.3-3.el7rhgs.noarch
tendrl-api-1.6.3-3.el7rhgs.noarch
tendrl-api-httpd-1.6.3-3.el7rhgs.noarch
tendrl-commons-1.6.3-4.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-2.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-gluster-integration-1.6.3-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-2.el7rhgs.noarch
tendrl-node-agent-1.6.3-4.el7rhgs.noarch
tendrl-notifier-1.6.3-2.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-1.el7rhgs.noarch
glusterfs-3.12.2-9.el7rhgs.x86_64
Comment 10 errata-xmlrpc 2018-09-04 03:00:31 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616
