Bug 1299432
Summary: | Glusterd: Creation of volume is failing if one of the bricks is down on the server | | |
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | RajeshReddy <rmekala> |
Component: | glusterd | Assignee: | Atin Mukherjee <amukherj> |
Status: | CLOSED ERRATA | QA Contact: | Byreddy <bsrirama> |
Severity: | unspecified | Docs Contact: | |
Priority: | unspecified | | |
Version: | rhgs-3.1 | CC: | asrivast, bsrirama, mzywusko, rhinduja, rhs-bugs, rmekala, sasundar, smohan, storage-qa-internal, vbellur |
Target Milestone: | --- | Keywords: | ZStream |
Target Release: | RHGS 3.1.3 | | |
Hardware: | Unspecified | | |
OS: | Unspecified | | |
Whiteboard: | | | |
Fixed In Version: | glusterfs-3.7.9-1 | Doc Type: | Bug Fix |
Doc Text: | | Story Points: | --- |
Clone Of: | | | |
: | 1299710 (view as bug list) | Environment: | |
Last Closed: | 2016-06-23 05:02:49 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Bug Depends On: | | | |
Bug Blocks: | 1299184, 1299710, 1312878 | | |
Description (RajeshReddy, 2016-01-18 11:22:40 UTC)
Hi Rajesh, can you check whether the brick is already used by another volume, i.e. confirm that the .glusterfs directory is not present on it while creating the new volume?

As Gaurav mentioned in #c2, it seems you have tried to reuse a brick which is, or was earlier, used by another gluster volume, which is exactly what the error message says. I strongly believe this is not a bug. Please confirm.

After going through the code, it does look like a bug. If the realpath() call fails with EIO (which indicates that the underlying file system of an existing brick may have a problem), we report that the path is not available instead of skipping that brick path. (An illustrative sketch of this check appears at the end of this report.)

Upstream patch http://review.gluster.org/13258 is posted for review.

The development team is able to re-create the problem.

The fix is now available in the rhgs-3.1.3 branch, hence moving the state to Modified.

Verified this bug using the build "glusterfs-3.7.9-1".

Steps followed:
===============
1. Created a 1*2 volume using a one-node cluster and started it.
2. Crashed the underlying XFS file system for one of the volume's bricks using the "godown" tool.
3. Created a new volume using bricks that are not part of the volume created in step 1; the new volume was created successfully.

With this fix, the reported issue works fine. Moving to verified state.

Note: Issues found around this fix will be tracked in separate bugs.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240
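
For illustration only, here is a minimal, self-contained C sketch of the idea behind the fix as described in the comments above. This is not the glusterd code from http://review.gluster.org/13258; the function name brick_path_in_use(), the brick paths, and the logging are hypothetical. It shows the behaviour the fix aims for: an existing brick whose real path cannot be resolved (for example because its file system returns EIO) is skipped during the duplicate-path check instead of the new brick path being rejected.

```c
/* Hypothetical sketch of skipping unreachable bricks during a
 * duplicate-path check; not the actual glusterd implementation. */
#define _XOPEN_SOURCE 700
#include <errno.h>
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Returns true if new_path resolves to the same real path as any of the
 * existing brick paths; bricks whose real path cannot be resolved
 * (e.g. realpath() fails with EIO) are skipped rather than treated as
 * a conflict, which per the bug report is what previously made volume
 * creation fail. */
static bool
brick_path_in_use(const char *new_path, const char *existing_paths[], size_t count)
{
    char new_real[PATH_MAX];
    char old_real[PATH_MAX];

    if (!realpath(new_path, new_real))
        return false; /* cannot resolve the new path; let other checks handle it */

    for (size_t i = 0; i < count; i++) {
        if (!realpath(existing_paths[i], old_real)) {
            /* Skip the unreachable brick instead of reporting the new
             * path as unavailable. */
            fprintf(stderr, "skipping %s: %s\n",
                    existing_paths[i], strerror(errno));
            continue;
        }
        if (strcmp(new_real, old_real) == 0)
            return true;
    }
    return false;
}

int main(void)
{
    /* Hypothetical brick paths used only to exercise the check. */
    const char *existing[] = { "/bricks/brick1", "/bricks/brick2" };
    printf("%s\n", brick_path_in_use("/bricks/brick3", existing, 2)
                       ? "in use" : "free");
    return 0;
}
```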