Bug 1335437
| Summary: | Self heal shows different information for the same volume from each node | | |
| --- | --- | --- | --- |
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Sahina Bose <sabose> |
| Component: | sharding | Assignee: | Pranith Kumar K <pkarampu> |
| Status: | CLOSED ERRATA | QA Contact: | RamaKasturi <knarra> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | rhgs-3.1 | CC: | asrivast, atalur, bugs, kdhananj, mchangir, pcuzner, pkarampu, ravishankar, rcyriac, rhinduja, sabose |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 3.1.3 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.7.9-5 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1334566 | Environment: | |
| Last Closed: | 2016-06-23 05:23:05 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1334566 | | |
| Bug Blocks: | 1258386, 1311817 | | |
**Description** (Sahina Bose, 2016-05-12 08:45:11 UTC)
Verified and works fine with build glusterfs-3.7.9-6.el7rhgs.x86_64. Brought down one of the bricks in the data volume while fio was running, and brought it back up after some time so that self-heal kicked in. Ran the script `for node in <node1> <node2> <node3>; do pssh -P -t 60 -H $node 'date; gluster vol heal data info; sleep 1'; done`. Verified that the "undergoing" and "unsynced entries" counts that Nagios reports for "Volume heal info - data" match the values returned by the script. When Nagios displays '0' for "Volume heal info - data", heal info from all the nodes also returns '0'. Will reopen the bug if I hit the issue again.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240
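The verification above compares per-node heal counts by eyeballing the `gluster vol heal data info` output from each node. A minimal sketch of how such output could be reduced to a single number for comparison is below; it assumes the standard heal-info layout where each brick section ends with a "Number of entries: N" line, and the `sample_output` here is a hypothetical stand-in for real command output.

```shell
# Hypothetical sample of `gluster vol heal <vol> info` output,
# standing in for the real command so the parsing can be shown.
sample_output='Brick node1:/bricks/data
Number of entries: 3

Brick node2:/bricks/data
Number of entries: 0'

# Sum the per-brick "Number of entries" counts into one total.
total=$(printf '%s\n' "$sample_output" \
  | awk -F': ' '/^Number of entries:/ {sum += $2} END {print sum + 0}')

echo "$total"
```

Running the same reduction on every node and diffing the totals would make a mismatch like the one in this bug immediately visible, rather than requiring manual comparison of full listings.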