Bug 1118736

Summary: [Nagios] Improve disk utilization status information
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Shruti Sampat <ssampat>
Component: gluster-nagios-addons
Assignee: Sahina Bose <sabose>
Status: CLOSED CANTFIX
QA Contact: RHS-C QE <rhsc-qe-bugs>
Severity: low
Priority: low
Version: rhgs-3.0
CC: rhsc-qe-bugs, sankarshan
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-01-30 11:48:02 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Shruti Sampat 2014-07-11 12:36:12 UTC
Description of problem:
-------------------------

The status information of the disk utilization service reads as follows when usage of one of the disks has crossed the warning threshold:

WARNING : /rhs/brick4
:mount(s): (/rhs/brick4; OK: /boot, /, /rhs/brick1, /rhs/brick2, /rhs/brick3, /rhs/brick5, /rhs/brick6, /rhs/brick7, /rhs/brick8)

The path /rhs/brick4 should be prefixed with the string "WARNING" in the list of mounts, just as critical disks are prefixed with "CRITICAL". See below for an example of the status information when one of the disks is critical:

CRITICAL : /rhs/brick4
:mount(s): (CRITICAL: /rhs/brick4; OK: /boot, /, /rhs/brick1, /rhs/brick2, /rhs/brick3, /rhs/brick5, /rhs/brick6, /rhs/brick7, /rhs/brick8)
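
A minimal sketch of how the mounts summary could be assembled so that warning-level mounts get the same prefix treatment as critical ones. This is not the actual gluster-nagios-addons code; the function name, input format, and thresholds are illustrative assumptions:

    # Illustrative sketch only -- names and thresholds are assumptions,
    # not the real gluster-nagios-addons implementation.
    def build_mounts_summary(mounts):
        """mounts: dict mapping mount point -> utilization percentage."""
        critical, warning, ok = [], [], []
        for path, used_pct in mounts.items():
            if used_pct >= 90:        # assumed critical threshold
                critical.append(path)
            elif used_pct >= 80:      # assumed warning threshold
                warning.append(path)
            else:
                ok.append(path)
        parts = []
        if critical:
            parts.append("CRITICAL: " + ", ".join(critical))
        if warning:
            # The prefix this bug asks for; without it, warning-level
            # mounts appear in the list with no state label at all.
            parts.append("WARNING: " + ", ".join(warning))
        if ok:
            parts.append("OK: " + ", ".join(ok))
        return "mount(s): (" + "; ".join(parts) + ")"

    print(build_mounts_summary({"/rhs/brick4": 85, "/boot": 40, "/": 55}))
    # -> mount(s): (WARNING: /rhs/brick4; OK: /boot, /)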

Version-Release number of selected component (if applicable):
gluster-nagios-addons-0.1.9-1.el6rhs.x86_64

How reproducible:
always

Steps to Reproduce:
1. Increase the utilization of a disk so that it crosses the warning threshold (a sketch of one way to do this follows these steps).
2. Check the status information of the disk utilization service for that node.
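
One possible way to drive a brick mount past the warning threshold for step 1; the file name, target percentage, and chunk size are assumptions for illustration:

    import os

    def fill_mount(path, target_pct=85, chunk=64 * 1024 * 1024):
        """Append zeros to a file on `path` until usage reaches target_pct."""
        fill_file = os.path.join(path, "fill.dat")  # hypothetical filler file
        with open(fill_file, "wb") as f:
            while True:
                st = os.statvfs(path)
                used_pct = 100.0 * (st.f_blocks - st.f_bavail) / st.f_blocks
                if used_pct >= target_pct:
                    break
                f.write(b"\0" * chunk)
                f.flush()
                os.fsync(f.fileno())

    fill_mount("/rhs/brick4")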

Actual results:
The path /rhs/brick4 appears in the list of mounts without any "WARNING" prefix, as shown in the description.

Expected results:
The mount point that crossed the warning threshold should be prefixed with "WARNING" in the list of mounts, analogous to the "CRITICAL" prefix shown in the description.

Additional info:

Comment 5 Sahina Bose 2018-01-30 11:48:02 UTC
Thank you for your bug report. However, this bug is being closed because it is filed against a component on which no further development is being undertaken.