Bug 1004650
Summary: glusterd: `gluster volume status <vol_name>` is showing 'N/A' under the Port column for all volumes; the same result occurs after `gluster volume start <vol_name> force`.

| Field | Value | Field | Value |
|---|---|---|---|
| Product | [Red Hat Storage] Red Hat Gluster Storage | Reporter | Rachana Patel <racpatel> |
| Component | glusterd | Assignee | Avra Sengupta <asengupt> |
| Status | CLOSED CURRENTRELEASE | QA Contact | amainkar |
| Severity | medium | Docs Contact | |
| Priority | unspecified | | |
| Version | 2.1 | CC | amukherj, asengupt, dblack, jcastillo, nsathyan, rhs-bugs, sasundar, vagarwal, vbellur |
| Target Milestone | --- | | |
| Target Release | --- | | |
| Hardware | x86_64 | | |
| OS | Linux | | |
| Whiteboard | | | |
| Fixed In Version | | Doc Type | Bug Fix |
| Doc Text | | Story Points | --- |
| Clone Of | | | |
| | 1175700 (view as bug list) | Environment | |
| Last Closed | 2015-04-17 07:11:21 UTC | Type | Bug |
| Regression | --- | Mount Type | --- |
| Documentation | --- | CRM | |
| Verified Versions | | Category | --- |
| oVirt Team | --- | RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- | Target Upstream Version | |
| Embargoed | | | |
| Bug Depends On | | | |
| Bug Blocks | 1175700 | | |
Description
Rachana Patel
2013-09-05 07:03:52 UTC
I have a customer that had a similar problem, and I managed to reproduce it in the lab. It appears this may have been solved in bz923164 with the change http://review.gluster.org/#/c/6786/ ("glusterd: Fix race in pid file update"), so I backported that change to a test package based on glusterfs*-3.4.0.59rhs-1.el6rhs in RHS 2.1; with the test package, the problem has not appeared again. The errata for bz923164 is https://rhn.redhat.com/errata/RHEA-2014-1278.html and was released only for RHS 3.0. If further testing proves that the fix resolves the issue in RHS 2.x, would it be possible to backport it?

This behavior is also seen in scenarios where glusterd is down and the bricks are up. Steps to reproduce:

1. Run "pkill glusterd" on one node.
2. Perform any volume set operation from another node:
   # gluster volume set test_vol diagnostics.brick-log-level DEBUG
3. Bring glusterd back up on the first node:
   # service glusterd start
4. Check the volume status. The port for the brick(s) hosted on that node will show N/A.

This happens because, while importing new volume information from another node, we retain the already-running bricks that are still part of the volume, but we do not retain their port numbers. The fix is already in upstream with http://review.gluster.org/9297. The BZ should be moved to 'Modified' the moment branching takes place.
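The root cause described above (ports lost while importing a peer's volume info) can be sketched in a simplified way. This is a hedged Python illustration of the merge logic only, not glusterd's actual C code; the function and field names (`import_volinfo`, `port`, `is_brick_running`) are hypothetical:

```python
# Simplified sketch of the bug's fix: when re-importing volume info from a
# peer, carry over the runtime port of bricks that are still running locally.
# Without this, the imported record has no port and status shows N/A.

def import_volinfo(old_bricks, peer_bricks, is_brick_running):
    """Merge brick info received from a peer with local runtime state.

    old_bricks / peer_bricks: dicts mapping brick path -> {'port': int or None}.
    is_brick_running: predicate reporting whether the local brick process is up.
    """
    merged = {}
    for path, info in peer_bricks.items():
        new_info = dict(info)
        old = old_bricks.get(path)
        # The fix: if the brick is still running locally, keep its known port
        # instead of the peer's view, which carries no runtime port for it.
        if old is not None and is_brick_running(path) and old.get('port'):
            new_info['port'] = old['port']
        merged[path] = new_info
    return merged


old = {'/bricks/b1': {'port': 49152}}
peer = {'/bricks/b1': {'port': None}}  # peer knows the brick, but not its port

print(import_volinfo(old, peer, lambda p: True)['/bricks/b1']['port'])   # 49152
print(import_volinfo(old, peer, lambda p: False)['/bricks/b1']['port'])  # None
```

With the brick running, the local port survives the import; with the brick down, the port is correctly left unset, which is why only running bricks were affected by the original bug.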