Bug 1831403 - [RHEL-8.2] On distributed-disperse volume, remove-brick status showing failed on one of the nodes after a few hours
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: disperse
Version: rhgs-3.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 3
Assignee: Ashish Pandey
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On:
Blocks: 1812789
 
Reported: 2020-05-05 05:45 UTC by Bala Konda Reddy M
Modified: 2020-12-17 04:51 UTC
CC List: 7 users

Fixed In Version: glusterfs-6.0-40
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-12-17 04:51:18 UTC
Embargoed:




Links
Red Hat Product Errata RHBA-2020:5603 (last updated 2020-12-17 04:51:41 UTC)

Description Bala Konda Reddy M 2020-05-05 05:45:54 UTC
Description of problem:
After remove-brick start, the remove-brick status goes to the failed state while I/O and rm -rf run in parallel on different directories of the mount point.

Raising this bug based on the comments below:
https://bugzilla.redhat.com/show_bug.cgi?id=1812789#c15
https://bugzilla.redhat.com/show_bug.cgi?id=1812789#c17


Version-Release number of selected component (if applicable):
glusterfs-6.0-33.el8rhgs.x86_64

How reproducible:
2/2

Steps to Reproduce:
1. On a three-node cluster, enabled brick-mux.
2. Created two replicated (1x3) volumes and a distributed-disperse volume (4 x (4 + 2)).
3. Mounted the ec-vol on 4 clients and ran Linux untar, crefi, and lookups from the clients.
4. After the written data reached 600 GB, performed remove-brick start.
5. As the data is huge, performed rm -rf where data is not being written:
   removed directories 1-18 on 11 clients, while data is being written from the 24th directory onward (see the command sketch below).
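
For reference, a minimal shell sketch of this reproduction flow, assuming a hypothetical volume name (ecvol), brick paths (/bricks/brickN/ecvol), mount point, and directory names; the exact brick layout and the brick set chosen for removal are not recorded in this report:

# Step 1: enable brick multiplexing cluster-wide
gluster volume set all cluster.brick-multiplex on

# Step 2: create and start a 4 x (4 + 2) distributed-disperse volume.
# With this simplified brick layout every subvolume lands on one host,
# so gluster warns and "force" is needed; the real test spread bricks
# across the three nodes.
gluster volume create ecvol disperse-data 4 redundancy 2 \
    server{1..3}:/bricks/brick{1..8}/ecvol force
gluster volume start ecvol

# Step 3: mount on each client and run the untar/crefi/lookup workload
mount -t glusterfs server1:/ecvol /mnt/ecvol

# Step 4: once ~600 GB is written, remove one full disperse subvolume
# (bricks must be removed in multiples of 6 for a 4 + 2 configuration)
gluster volume remove-brick ecvol server1:/bricks/brick{1..6}/ecvol start

# Step 5: in parallel, delete directories that are no longer being
# written to, then poll the migration; this is where "failed" showed up
rm -rf /mnt/ecvol/dir{1..18} &
gluster volume remove-brick ecvol server1:/bricks/brick{1..6}/ecvol status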

Actual results:
remove-brick status shows failed on one of the nodes.


Expected results:
remove-brick status should not show failed.

Additional info:

Comment 18 Manisha Saini 2020-11-02 03:10:32 UTC
Verified this BZ with 

# rpm -qa | grep gluster
glusterfs-libs-6.0-46.el7rhgs.x86_64
glusterfs-api-6.0-46.el7rhgs.x86_64
glusterfs-geo-replication-6.0-46.el7rhgs.x86_64
glusterfs-6.0-46.el7rhgs.x86_64
glusterfs-fuse-6.0-46.el7rhgs.x86_64
glusterfs-cli-6.0-46.el7rhgs.x86_64
python2-gluster-6.0-46.el7rhgs.x86_64
glusterfs-client-xlators-6.0-46.el7rhgs.x86_64
glusterfs-server-6.0-46.el7rhgs.x86_64


Steps performed for verification of this BZ

1. On a three-node cluster, enabled brick-mux.
2. Created two replicated (1x3) volumes and a distributed-disperse volume (4 x (4 + 2)).
3. Mounted the ec-vol on multiple clients and ran Linux untar, crefi, and lookups from the clients.
4. After the data was filled, performed remove-brick start.
5. Performed rm -rf where data is not being written (see the status-check sketch below).
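
A minimal status-check sketch for this verification run, reusing the placeholder volume and brick names from the reproduction sketch above; the expectation is simply that no node reports failed:

# Poll migration while the rm -rf workload runs; status must not show
# "failed" on any node (placeholder volume/brick names)
gluster volume remove-brick ecvol server1:/bricks/brick{1..6}/ecvol status

# Once every node reports "completed", finalize the brick removal
gluster volume remove-brick ecvol server1:/bricks/brick{1..6}/ecvol commit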
   
Moving this BZ to verified state

Comment 20 errata-xmlrpc 2020-12-17 04:51:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603

