Bug 1105568
| Summary: | [Nagios] - When glusterd is stopped, all the processes display incorrect status information. | ||
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | RamaKasturi <knarra> |
| Component: | gluster-nagios-addons | Assignee: | Ramesh N <rnachimu> |
| Status: | CLOSED ERRATA | QA Contact: | RamaKasturi <knarra> |
| Severity: | low | Docs Contact: | |
| Priority: | medium | ||
| Version: | rhgs-3.0 | CC: | asrivast, dpati, kmayilsa, psriniva, rnachimu, sharne, tjeyasin |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 3.0.3 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | gluster-nagios-addons-0.1.11-1.el6rhs | Doc Type: | Bug Fix |
| Doc Text: |
Previously, the status messages for the CTDB, NFS, Quota, SMB, and Self Heal services were not clearly defined in the Nagios Remote Plugin Executor. With this fix, the plugins for these services return the correct error messages, and when the glusterd service is offline, clear values are displayed in the Status and Status Information fields.
|
Story Points: | --- |
| Clone Of: | Environment: | ||
| Last Closed: | 2015-01-15 13:47:43 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | 1162518 | ||
| Bug Blocks: | 1087818 | ||
|
Description
RamaKasturi
2014-06-06 12:26:12 UTC
NOT A BLOCKER -- should be planned for an ASYNC errata release.

Sent a patch for internal review:
```diff
diff --git a/nagios-plugins.spec b/nagios-plugins.spec
index cd56e47..0e6b3cd 100644
--- a/nagios-plugins.spec
+++ b/nagios-plugins.spec
@@ -1,6 +1,6 @@
 Name: nagios-plugins
 Version: 1.4.16
-Release: 7%{?dist}
+Release: 8%{?dist}
 Summary: Host/service/network monitoring program plugins for Nagios
 Group: Applications/System
@@ -66,8 +66,6 @@ This plugin does not actually check anything, simply provide it with a flag
 Summary: Nagios Plugin - check_ide_smart
 Group: Applications/System
 Requires: nagios-plugins = %{version}-%{release}
-Requires: group(nagios)
-Requires(pre): group(nagios)
 %description ide_smart
 Provides check_ide_smart support for Nagios.
@@ -221,7 +219,7 @@ rm -rf %{buildroot}
 %{_libdir}/nagios/plugins/check_dummy
 %files ide_smart
-%defattr(4750,root,nagios,-)
+%defattr(-,root,root,-)
 %{_libdir}/nagios/plugins/check_ide_smart
```
Sorry, I posted that in the wrong place.

Please add doc text for this known issue.

Patch sent for upstream: http://review.gluster.org/#/c/8125/. Status information will be:

- "UNKNOWN: CTDB status could not be determined" for CTDB
- "UNKNOWN: SMB status could not be determined" for SMB
- "UNKNOWN: NFS status could not be determined" for NFS
- "UNKNOWN: QUOTA status could not be determined" for QUOTA
- "UNKNOWN: SHD status could not be determined" for SHD

Please review and sign off the edited doc text.

Doc text looks good. Status information would be "UNKNOWN: Status could not be determined as glusterd is not running".

Please review and sign off the edited doc text.

Doc text looks good, except that glusterd is referred to as "glusterFS Management service".

From Kanagaraj, I understand that these bugs have been moved to ON_QA by errata. Since QE has not received the build yet, I am moving this bug back to the ASSIGNED state. Please move it to ON_QA once builds are attached to the errata.

Will verify this bug when https://bugzilla.redhat.com/show_bug.cgi?id=1162518 is resolved.

Verified; works fine with build nagios-server-addons-0.1.8-1.el6rhs.noarch. When glusterd is down, the status and status information for the following services are 'UNKNOWN' and "UNKNOWN: Status could not be determined as glusterd is not running": SMB, NFS, CTDB, Quota, Self-Heal, and the brick status of all bricks present on the node where glusterd went down.

Hi Ramesh, can you please review the edited doc text for technical accuracy and sign off?

Doc text looks good to me.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0039.html
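The fixed behaviour described in the comments above can be sketched as a minimal Nagios-style check. This is illustrative only: the function name `check_service` is hypothetical and the message strings follow the bug's doc text, not the actual gluster-nagios-addons source.

```python
# Minimal sketch of the reported fix, assuming the standard Nagios
# plugin exit codes. Not the actual gluster-nagios-addons code.
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check_service(service, glusterd_running, service_status=None):
    """Return (exit_code, status_message) for one monitored service.

    If glusterd is down, every check reports UNKNOWN with a clear
    message instead of incorrect status information.
    """
    if not glusterd_running:
        return (UNKNOWN,
                "UNKNOWN: Status could not be determined as glusterd is not running")
    if service_status is None:
        # Service status genuinely unavailable for some other reason.
        return (UNKNOWN, "UNKNOWN: %s status could not be determined" % service)
    return (OK, "OK: %s is running" % service)

if __name__ == "__main__":
    # With glusterd stopped, CTDB (like SMB, NFS, Quota, SHD) goes UNKNOWN.
    code, message = check_service("CTDB", glusterd_running=False)
    print(code, message)
```

A real NRPE plugin would print the message to stdout and use the tuple's first element as its process exit code, which is how Nagios derives the Status field.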