Description of problem:
On remove-brick start, remove-brick status goes to the failed state while IO and rm -rf run in parallel on different directories of the mountpoint.

Raising this based on the comments below:
https://bugzilla.redhat.com/show_bug.cgi?id=1812789#c15
https://bugzilla.redhat.com/show_bug.cgi?id=1812789#c17

Version-Release number of selected component (if applicable):
glusterfs-6.0-33.el8rhgs.x86_64

How reproducible:
2/2

Steps to Reproduce:
1. On a three-node cluster, enabled brick-mux.
2. Created two replicated (1x3) volumes and a distributed-disperse volume (4 x (4 + 2)).
3. Mounted the ec-vol on 4 clients and ran linux untar, crefi, and lookups from the clients.
4. After the data filled reached 600GB, performed remove-brick start.
5. As the data is huge, performed rm -rf where data was not being written: removed directories 1-18 on 11 clients, while data was being written from the 24th directory.

Actual results:
remove-brick status shows failed

Expected results:
remove-brick status should not be failed

Additional info:
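For context, the remove-brick flow exercised in the steps above follows the standard gluster CLI sequence sketched below. This is only an illustration of the commands involved; the volume name and brick paths are placeholders, not values taken from this report.

```shell
# Placeholders: VOLNAME and the brick list are illustrative, not from the report.
VOLNAME=ec-vol
BRICKS="server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1"

# Start migrating data off the bricks being removed (step 4 above)
gluster volume remove-brick $VOLNAME $BRICKS start

# Poll migration progress; this is the status that the report sees go to "failed"
gluster volume remove-brick $VOLNAME $BRICKS status

# Only once status shows "completed" on all nodes would the removal be committed
gluster volume remove-brick $VOLNAME $BRICKS commit
```

The failure reported here occurs at the status step, while the rm -rf workload races with the rebalance-driven file migration.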
Verified this BZ with:
# rpm -qa | grep gluster
glusterfs-libs-6.0-46.el7rhgs.x86_64
glusterfs-api-6.0-46.el7rhgs.x86_64
glusterfs-geo-replication-6.0-46.el7rhgs.x86_64
glusterfs-6.0-46.el7rhgs.x86_64
glusterfs-fuse-6.0-46.el7rhgs.x86_64
glusterfs-cli-6.0-46.el7rhgs.x86_64
python2-gluster-6.0-46.el7rhgs.x86_64
glusterfs-client-xlators-6.0-46.el7rhgs.x86_64
glusterfs-server-6.0-46.el7rhgs.x86_64

Steps performed for verification of this BZ:
1. On a three-node cluster, enabled brick-mux.
2. Created two replicated (1x3) volumes and a distributed-disperse volume (4 x (4 + 2)).
3. Mounted the ec-vol on multiple clients and ran linux untar, crefi, and lookups from the clients.
4. After the data filled, performed remove-brick start.
5. Performed rm -rf where data was not being written.

Moving this BZ to the verified state.
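A quick check for the verification outcome above is to confirm that no node reports a failed migration in the remove-brick status output. The snippet below is a sketch of that check; the volume name and brick list are placeholders, not values from this report.

```shell
# Placeholders: VOLNAME and the brick list are illustrative, not from the report.
VOLNAME=ec-vol
BRICKS="server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1"

# The status table has one row per node; its last column should read
# "in progress" or "completed", never "failed", for the fix to be verified.
if gluster volume remove-brick $VOLNAME $BRICKS status | grep -iq failed; then
    echo "remove-brick reported a failure on at least one node"
else
    echo "no failed status seen"
fi
```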
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:5603