Bug 1484412 - [GSS] Error while creating new volume in CNS "Brick may be containing or be contained by an existing brick"
Summary: [GSS] Error while creating new volume in CNS "Brick may be containing or be contained by an existing brick"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: CNS-deployment
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: CNS 3.10
Assignee: Raghavendra Talur
QA Contact: Nitin Goyal
URL:
Whiteboard:
Duplicates: 1807501
Depends On: 1599783 1599803
Blocks: 1568862
 
Reported: 2017-08-23 13:52 UTC by Abhishek Kumar
Modified: 2023-09-07 18:55 UTC (History)
CC List: 29 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, if any glusterd-to-glusterd connection reconnected during a volume delete operation, a stale volume could be left in the gluster pool. This stale volume led to the 'brick part of another brick' error during subsequent volume create operations. With this fix, subsequent volume create operations no longer fail.
Clone Of:
Clones: 1599803
Environment:
Last Closed: 2018-09-12 12:27:13 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1599823 0 high CLOSED [GSS] Error while creating new volume in CNS "Brick may be containing or be contained by an existing brick" 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHEA-2018:2697 0 None None None 2018-09-12 12:27:57 UTC

Internal Links: 1599823

Description Abhishek Kumar 2017-08-23 13:52:26 UTC
Description of problem:

Error while creating new volume in CNS "Brick may be containing or be contained by an existing brick"
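For illustration only (not part of the original report), a sketch of how this failure typically surfaces from the gluster CLI when a requested brick path overlaps a brick recorded for an existing (or stale) volume; the volume name, hosts, and brick paths are hypothetical and the failure output is paraphrased around the error in the summary:

------------------------------
# Hypothetical CNS-style volume create where one requested brick path overlaps
# a brick already recorded for another (possibly stale) volume in the pool.
gluster volume create vol_new replica 3 \
    node1:/var/lib/heketi/mounts/vg_aaa/brick_bbb/brick \
    node2:/var/lib/heketi/mounts/vg_ccc/brick_ddd/brick \
    node3:/var/lib/heketi/mounts/vg_eee/brick_fff/brick
# Expected to fail with (paraphrased):
#   volume create: vol_new: failed: Brick may be containing or be contained
#   by an existing brick
------------------------------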


Version-Release number of selected component (if applicable):

glusterfs-3.7.9-12.el7rhgs.x86_64

atomic-openshift-3.4.1.2-1.git.0.d760092.el7.x86_64

How reproducible:

Customer environment


Actual results:

While creating a new volume, the operation fails with "Brick may be containing or be contained by an existing brick"

Expected results:

For every new volume, fresh bricks are created and none of the gluster-related xattrs are present on them, so the volume create should succeed.
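As an editorial aside (not from the original report): one way to confirm that a fresh brick carries no gluster xattrs is getfattr; the brick path below is hypothetical.

------------------------------
# Dump all extended attributes of a (hypothetical) brick directory in hex.
# A brick that has never been part of a volume should show no
# trusted.glusterfs.volume-id or trusted.gfid entries.
getfattr -d -m . -e hex /var/lib/heketi/mounts/vg_xxx/brick_yyy/brick
------------------------------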

Additional info:

Comment 34 Sudhir 2018-07-12 04:50:03 UTC
Approved

Comment 45 Bipin Kunal 2018-07-12 10:12:08 UTC
Raising needinfo on Saravana for comment 44 and on Sanju for comment 42.

Comment 63 Nitin Goyal 2018-07-30 05:28:53 UTC
Hi,
 
I verified this bug on the RPMs and glusterfs container image given below (on a CNS environment only). I tried volume creation and deletion 100 times while the network failure script given below ran continuously on one of the gluster nodes. I did not hit the issue, hence I am marking this bug as verified.


RPMS ->
glusterfs-libs-3.8.4-54.15.el7rhgs.x86_64
glusterfs-3.8.4-54.15.el7rhgs.x86_64
glusterfs-api-3.8.4-54.15.el7rhgs.x86_64
glusterfs-cli-3.8.4-54.15.el7rhgs.x86_64
glusterfs-server-3.8.4-54.15.el7rhgs.x86_64
gluster-block-0.2.1-22.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-54.15.el7rhgs.x86_64
glusterfs-fuse-3.8.4-54.15.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-54.15.el7rhgs.x86_64


Container Image ->
rhgs-server-rhel7:3.3.1-27


Script for network failures ->
------------------------------
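# Flap interface ens192 continuously: up for ~2 seconds, then down for ~5 seconds.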
while true
do
    ifup ens192
    sleep 2
    ifdown ens192
    sleep 5
done
------------------------------
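
For reference, a minimal sketch of the create/delete loop described above, assuming heketi-cli is already configured for the cluster (HEKETI_CLI_SERVER and any auth variables exported) and that its create output contains a "Volume Id:" line; both are assumptions, not details taken from this comment.

------------------------------
# Hedged sketch: create and delete a 1 GiB heketi volume 100 times while the
# network failure script above runs on one of the gluster nodes.
for i in $(seq 1 100)
do
    # Assumption: the create output includes a line "Volume Id: <id>".
    id=$(heketi-cli volume create --size=1 | awk -F': ' '/^Volume Id/ {print $2}')
    echo "iteration $i: created volume $id"
    heketi-cli volume delete "$id"
done
------------------------------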

Comment 64 Anjana KD 2018-09-07 09:44:36 UTC
Updated the Doc Text field, kindly review.

Comment 65 John Mulligan 2018-09-07 19:14:03 UTC
Doc Text looks OK

Comment 67 errata-xmlrpc 2018-09-12 12:27:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2697

Comment 68 nravinas 2020-03-02 07:44:47 UTC
*** Bug 1807501 has been marked as a duplicate of this bug. ***

