Bug 1484412

Summary: [GSS] Error while creating new volume in CNS "Brick may be containing or be contained by an existing brick"
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Abhishek Kumar <abhishku>
Component: CNS-deployment
Assignee: Raghavendra Talur <rtalur>
Status: CLOSED ERRATA
QA Contact: Nitin Goyal <nigoyal>
Severity: high
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: abhishku, akhakhar, akrishna, amanzane, amukherj, annair, atoborek, atumball, bkunal, bmekala, bturner, hchiramm, jarrpa, jmulligan, kramdoss, madam, ndevos, nigoyal, pprakash, rcyriac, rhs-bugs, rreddy, rtalur, sankarshan, sarumuga, srakonde, suprasad, vinug, ykaul
Target Milestone: ---   
Target Release: CNS 3.10   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, if any glusterd-to-glusterd connection reconnected during a volume delete operation, a stale volume could be left behind in the gluster pool. This stale volume caused the 'brick part of another brick' error during subsequent volume create operations. With this fix, subsequent volume create operations no longer fail.
Story Points: ---
Clone Of:
: 1599803 (view as bug list)
Environment:
Last Closed: 2018-09-12 12:27:13 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1599783, 1599803    
Bug Blocks: 1568862    

Description Abhishek Kumar 2017-08-23 13:52:26 UTC
Description of problem:

Creating a new volume in CNS fails with the error "Brick may be containing or be contained by an existing brick".


Version-Release number of selected component (if applicable):

glusterfs-3.7.9-12.el7rhgs.x86_64

atomic-openshift-3.4.1.2-1.git.0.d760092.el7.x86_64

How reproducible:

Reproducible in the customer environment.


Actual results:

Creating a new volume fails with "Brick may be containing or be contained by an existing brick".

Expected results:

For every new volume a new brick should be created, and none of the gluster-related xattrs should be present on these bricks.
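
A quick way to confirm this on a gluster node is to inspect the extended attributes of the brick directory. A minimal check, assuming a heketi-style brick path (the path below is illustrative, not taken from this case):

------------------------------
# Inspect a brick directory for gluster xattrs; the path is a placeholder.
# A freshly provisioned brick should show no trusted.glusterfs.volume-id or
# trusted.gfid attributes; a brick left behind by a stale volume still will.
getfattr -d -m . -e hex /var/lib/heketi/mounts/vg_example/brick_example/brick
------------------------------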

Additional info:

Comment 34 Sudhir 2018-07-12 04:50:03 UTC
Approved

Comment 45 Bipin Kunal 2018-07-12 10:12:08 UTC
Raising needinfo on Saravana for comment 44 and on Sanju for comment 42.

Comment 63 Nitin Goyal 2018-07-30 05:28:53 UTC
Hi,
 
I verified this bug against the RPMs and glusterfs container image listed below (on a CNS environment only). I tried volume creation and deletion 100 times while a network failure script (given below) ran continuously on one of the gluster nodes, and did not hit the issue. Hence, marking this bug as verified.


RPMS ->
glusterfs-libs-3.8.4-54.15.el7rhgs.x86_64
glusterfs-3.8.4-54.15.el7rhgs.x86_64
glusterfs-api-3.8.4-54.15.el7rhgs.x86_64
glusterfs-cli-3.8.4-54.15.el7rhgs.x86_64
glusterfs-server-3.8.4-54.15.el7rhgs.x86_64
gluster-block-0.2.1-22.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-54.15.el7rhgs.x86_64
glusterfs-fuse-3.8.4-54.15.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-54.15.el7rhgs.x86_64


Container Image ->
rhgs-server-rhel7:3.3.1-27


Script for network failures ->
------------------------------
while true
do
    # bring the interface up and keep it up for 2 seconds
    ifup ens192
    sleep 2
    # take the interface down for 5 seconds
    ifdown ens192
    sleep 5
done
------------------------------
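
For reference, the create/delete part of the test could look like the sketch below; the volume names, brick paths, and replica count are assumptions for illustration, not the exact commands used in this verification (in CNS, provisioning normally goes through heketi, so this stands in for the underlying gluster operations).

------------------------------
# Hypothetical create/delete loop run alongside the network failure script;
# hostnames, brick paths, and volume names are placeholders.
for i in $(seq 1 100)
do
    gluster --mode=script volume create testvol_$i replica 3 \
        node1:/bricks/testvol_$i node2:/bricks/testvol_$i node3:/bricks/testvol_$i force
    gluster --mode=script volume start testvol_$i
    gluster --mode=script volume stop testvol_$i
    gluster --mode=script volume delete testvol_$i
done
------------------------------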

Comment 64 Anjana KD 2018-09-07 09:44:36 UTC
Updated the Doc Text field, kindly review.

Comment 65 John Mulligan 2018-09-07 19:14:03 UTC
Doc Text looks OK

Comment 67 errata-xmlrpc 2018-09-12 12:27:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2697

Comment 68 nravinas 2020-03-02 07:44:47 UTC
*** Bug 1807501 has been marked as a duplicate of this bug. ***