Bug 1328317 - RFE: Volume self heal info status expected to be shown for all valid volumes
Summary: RFE: Volume self heal info status expected to be shown for all valid volumes
Keywords:
Status: CLOSED CANTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: nagios-server-addons
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Sahina Bose
QA Contact: Sweta Anandpara
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-19 05:05 UTC by Sweta Anandpara
Modified: 2018-01-30 07:55 UTC (History)
1 user

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-01-30 07:55:09 UTC
Target Upstream Version:


Attachments (Terms of Use)
screenshot of nagios UI (367.23 KB, image/png)
2016-04-19 05:05 UTC, Sweta Anandpara

Description Sweta Anandpara 2016-04-19 05:05:12 UTC
Created attachment 1148315 [details]
screenshot of nagios UI

Description of problem:

With BZ 1312207, we introduced a new self-heal monitoring plugin in Nagios; the need for it came from one of the ROBO use cases. Currently, the heal-info status is shown only for 'replicate/distribute-replicate' volume types. There is no heal-info entry for volumes of type 'disperse', if present.

This presents an inconsistent view, leaving the user to wonder why the heal-info status is absent for the other volume(s). It would be good to present a uniform view and show the self-heal status for all volumes for which it is valid.
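As a rough illustration of the requested change, the plugin's decision about which volumes get a 'Volume Self Heal Info' service could hinge on a volume-type filter like the sketch below. This is a hypothetical sketch, not the actual nagios-server-addons code; the set name and function are invented for illustration. The point of the RFE is that disperse (and distributed-disperse) volumes also run self-heal, so they belong in the set:

```python
# Hypothetical sketch -- not the actual nagios-server-addons implementation.
# Volume types for which 'gluster volume heal <vol> info' is meaningful.
HEAL_CAPABLE_TYPES = {
    "Replicate",
    "Distributed-Replicate",
    "Disperse",              # proposed addition per this RFE
    "Distributed-Disperse",  # proposed addition per this RFE
}

def heal_info_applies(volume_type):
    """Return True if the self-heal info service should be shown for this type."""
    return volume_type in HEAL_CAPABLE_TYPES
```

With such a filter, `heal_info_applies("Disperse")` would be true, while plain `"Distribute"` volumes (where self-heal does not apply) would still be excluded.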


Version-Release number of selected component (if applicable):
3.1.3

How reproducible: Always


Steps to Reproduce:
1. Have a cluster with 3.1.3 and nagios-server-addons 0.2.4-1
2. Create distribute, replicate, disperse, distribute-replicate, and replica3 volumes
3. Enable monitoring, and log in to the Nagios web UI
4. Verify that 'Volume Self Heal Info' is shown for all volumes for which self-heal takes place.

Actual results:
Volume self heal info is shown only for replicate, distribute-replicate, and replica3 volumes.
It is not shown for the disperse volume.


Expected results: Volume self heal info status should be shown for the disperse volume as well.


Additional info:

[root@dhcp47-188 ~]# rpm -qa | grep gluster
glusterfs-api-3.7.9-1.el7rhgs.x86_64
glusterfs-libs-3.7.9-1.el7rhgs.x86_64
glusterfs-api-devel-3.7.9-1.el7rhgs.x86_64
vdsm-gluster-4.16.30-1.3.el7rhgs.noarch
glusterfs-3.7.9-1.el7rhgs.x86_64
glusterfs-cli-3.7.9-1.el7rhgs.x86_64
glusterfs-geo-replication-3.7.9-1.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-client-xlators-3.7.9-1.el7rhgs.x86_64
glusterfs-server-3.7.9-1.el7rhgs.x86_64
glusterfs-rdma-3.7.9-1.el7rhgs.x86_64
glusterfs-devel-3.7.9-1.el7rhgs.x86_64
gluster-nagios-addons-0.2.6-1.el7rhgs.x86_64
glusterfs-fuse-3.7.9-1.el7rhgs.x86_64
[root@dhcp47-188 ~]# 
[root@dhcp47-188 ~]# gluster v info
 
Volume Name: dist
Type: Distribute
Volume ID: f1b64419-34f7-4143-bbf7-9c6cf782b42b
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.70.46.187:/rhs/brick4/disk
Brick2: 10.70.46.193:/rhs/brick4/dist
Options Reconfigured:
performance.readdir-ahead: on
 
Volume Name: nash
Type: Distributed-Replicate
Volume ID: 86241d2a-68a9-4547-a105-99282922aea2
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.188:/rhs/brick3/nash
Brick2: 10.70.46.193:/rhs/brick3/nash
Brick3: 10.70.46.187:/rhs/brick3/nash
Brick4: 10.70.47.188:/rhs/brick4/nash
Brick5: 10.70.46.193:/rhs/brick4/nash
Brick6: 10.70.46.187:/rhs/brick4/nash
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
user.smb: enable
performance.readdir-ahead: on
cluster.server-quorum-type: server
 
Volume Name: ozone
Type: Disperse
Volume ID: fe09a1a1-13e2-44b5-86a1-de851f634d97
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.46.187:/rhs/brick3/ozone
Brick2: 10.70.46.193:/rhs/brick3/ozone
Brick3: 10.70.47.188:/rhs/brick3/ozone
Brick4: 10.70.46.215:/rhs/brick3/ozone
Brick5: 10.70.46.187:/rhs/brick4/ozone
Brick6: 10.70.46.193:/rhs/brick4/ozone
Options Reconfigured:
network.inode-lru-limit: 50
features.scrub-throttle: normal
features.scrub-freq: hourly
features.scrub: Active
features.bitrot: on
performance.readdir-ahead: on
[root@dhcp47-188 ~]# 
[root@dhcp47-188 ~]# gluster peer status
Number of Peers: 3

Hostname: 10.70.46.193
Uuid: f8e7ae42-da4f-4691-85b6-96a03aebd511
State: Peer in Cluster (Connected)

Hostname: 10.70.46.187
Uuid: 3e437522-f4f8-4bb5-9261-6a104cb60a45
State: Peer in Cluster (Connected)

Hostname: 10.70.46.215
Uuid: 763002c8-ecf8-4f13-9107-2e3410e10f0c
State: Peer in Cluster (Connected)
[root@dhcp47-188 ~]#
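For completeness, the volume types in the `gluster v info` output above can be pulled out mechanically, which is roughly what a monitoring plugin has to do before deciding which services to create. The sketch below is an illustrative parser (the function name is invented), run against a trimmed copy of the output pasted above:

```python
import re

def parse_volume_types(gluster_info_output):
    """Extract {volume_name: type} pairs from 'gluster volume info' text."""
    names = re.findall(r"^Volume Name: (\S+)", gluster_info_output, re.M)
    types = re.findall(r"^Type: (\S+)", gluster_info_output, re.M)
    return dict(zip(names, types))

# Trimmed from the 'gluster v info' output in this bug report.
sample = """\
Volume Name: dist
Type: Distribute

Volume Name: nash
Type: Distributed-Replicate

Volume Name: ozone
Type: Disperse
"""

print(parse_volume_types(sample))
# -> {'dist': 'Distribute', 'nash': 'Distributed-Replicate', 'ozone': 'Disperse'}
```

On this cluster, 'ozone' (Disperse) is exactly the volume that currently gets no 'Volume Self Heal Info' service, even though self-heal applies to it.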

Comment 2 Sahina Bose 2018-01-30 07:55:09 UTC
Thank you for the bug report. However, closing this, as the bug is filed against gluster Nagios monitoring, for which no further development is being undertaken.

