Bug 1484412
Summary: | [GSS] Error while creating new volume in CNS "Brick may be containing or be contained by an existing brick" | |||
---|---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Abhishek Kumar <abhishku> | |
Component: | CNS-deployment | Assignee: | Raghavendra Talur <rtalur> | |
Status: | CLOSED ERRATA | QA Contact: | Nitin Goyal <nigoyal> | |
Severity: | high | Docs Contact: | ||
Priority: | unspecified | |||
Version: | rhgs-3.1 | CC: | abhishku, akhakhar, akrishna, amanzane, amukherj, annair, atoborek, atumball, bkunal, bmekala, bturner, hchiramm, jarrpa, jmulligan, kramdoss, madam, ndevos, nigoyal, pprakash, rcyriac, rhs-bugs, rreddy, rtalur, sankarshan, sarumuga, srakonde, suprasad, vinug, ykaul | |
Target Milestone: | --- | |||
Target Release: | CNS 3.10 | |||
Hardware: | Unspecified | |||
OS: | Unspecified | |||
Whiteboard: | ||||
Fixed In Version: | Doc Type: | Bug Fix | ||
Doc Text: |
Previously, if a glusterd-to-glusterd connection was re-established during a volume delete operation, a stale volume could be left behind in the gluster pool. This stale volume caused the 'brick part of another brick' error on subsequent volume create operations. With this fix, subsequent volume create operations no longer fail.
|
Story Points: | --- | |
Clone Of: | ||||
: | 1599803 (view as bug list) | Environment: | ||
Last Closed: | 2018-09-12 12:27:13 UTC | Type: | Bug | |
Regression: | --- | Mount Type: | --- | |
Documentation: | --- | CRM: | ||
Verified Versions: | Category: | --- | ||
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
Cloudforms Team: | --- | Target Upstream Version: | ||
Embargoed: | ||||
Bug Depends On: | 1599783, 1599803 | |||
Bug Blocks: | 1568862 |
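
A minimal sketch of the symptom described in the Doc Text above, assuming a typical heketi-managed brick layout; the volume names, host names and brick paths below are hypothetical and the failure output is paraphrased, not taken from this report:

------------------------------
# Illustrative only: names, hosts and brick paths are placeholders.
# Symptom: creating a volume over a brick path that a stale (half-deleted)
# volume still claims fails with the error in this bug's summary.
gluster volume create vol_new replica 3 \
    node1:/var/lib/heketi/mounts/vg_a/brick_1/brick \
    node2:/var/lib/heketi/mounts/vg_b/brick_2/brick \
    node3:/var/lib/heketi/mounts/vg_c/brick_3/brick
# Expected failure (paraphrased):
#   volume create: vol_new: failed: ...
#   Brick may be containing or be contained by an existing brick

# Check each gluster node/pod for a volume the pool still thinks exists:
gluster volume list
gluster volume info vol_stale   # 'vol_stale' is a placeholder name
------------------------------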
Description
Abhishek Kumar
2017-08-23 13:52:26 UTC
Approved.

Raising needinfo on Saravana for comment 44 and on Sanju for comment 42.

Hi, I verified this bug on the RPMs and glusterfs container image given below (CNS environment only). I ran volume creation and deletion 100 times with the network failure script (given below) running continuously on one of the gluster nodes. I was not able to see any issue, hence marking this bug as verified.

RPMs ->
glusterfs-libs-3.8.4-54.15.el7rhgs.x86_64
glusterfs-3.8.4-54.15.el7rhgs.x86_64
glusterfs-api-3.8.4-54.15.el7rhgs.x86_64
glusterfs-cli-3.8.4-54.15.el7rhgs.x86_64
glusterfs-server-3.8.4-54.15.el7rhgs.x86_64
gluster-block-0.2.1-22.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-54.15.el7rhgs.x86_64
glusterfs-fuse-3.8.4-54.15.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-54.15.el7rhgs.x86_64

Container Image -> rhgs-server-rhel7:3.3.1-27

Script for network failures ->
------------------------------
while true
do
    ifup ens192
    sleep 2
    ifdown ens192
    sleep 5
done
------------------------------

Updated the doc text field, kindly review.

Doc Text looks OK.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2697

*** Bug 1807501 has been marked as a duplicate of this bug. ***
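
For reference, a minimal sketch of the create/delete soak loop described in the verification comment above. The comment does not state which tool drove the operations, so the plain gluster CLI is assumed here; the volume names, host names, brick paths and replica count are hypothetical:

------------------------------
# Assumed soak loop: 100 create/delete cycles while the interface-flap
# script above runs on one gluster node. Hosts and paths are placeholders.
for i in $(seq 1 100)
do
    gluster volume create "soak_$i" replica 3 \
        node1:/bricks/soak_$i node2:/bricks/soak_$i node3:/bricks/soak_$i force
    # --mode=script suppresses the interactive confirmation prompt
    gluster --mode=script volume delete "soak_$i"
done
------------------------------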