*** Bug 1947308 has been marked as a duplicate of this bug. ***
Verified with RHGS 3.5.4 interim build (glusterfs-6.0-59.el8rhgs) with the steps from comment 0.

The steps created a replica 3 volume, fuse mounted that volume, created a few sparse files, then added 3 more bricks and triggered rebalance. The rebalance operation completed successfully, as reported by the gluster CLI rebalance status command, and no 'ERROR' messages were seen in either the fuse mount logs or the brick logs.

Repeated the test with sharding turned on and no issues were seen.
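For reference, a rough sketch of the kind of commands involved in the above steps (the volume name, hostnames, brick paths and mount point below are placeholders, not taken from the actual test setup):

# Create and start a replica 3 volume
gluster volume create testvol replica 3 server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3
gluster volume start testvol

# FUSE mount the volume and create a few sparse files
mount -t glusterfs server1:/testvol /mnt/testvol
for i in 1 2 3; do truncate -s 10G /mnt/testvol/sparse_$i; done

# Add 3 more bricks, trigger rebalance, then check its status
gluster volume add-brick testvol server1:/bricks/b4 server2:/bricks/b5 server3:/bricks/b6
gluster volume rebalance testvol start
gluster volume rebalance testvol status

# For the second run, enable sharding before repeating the test
gluster volume set testvol features.shard on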
(In reply to SATHEESARAN from comment #12)
> Verified with RHGS 3.5.4 interim build (glusterfs-6.0-59.el8rhgs) with the
> steps from comment 0.
>
> The steps created a replica 3 volume, fuse mounted that volume, created a
> few sparse files, then added 3 more bricks and triggered rebalance. The
> rebalance operation completed successfully, as reported by the gluster CLI
> rebalance status command, and no 'ERROR' messages were seen in either the
> fuse mount logs or the brick logs.
>
> Repeated the test with sharding turned on and no issues were seen.

Also observed that no ERROR messages are logged in /var/log/glusterfs/*rebalance.log.
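For reference, the log checks could be done along these lines (testvol and the mount point are the same placeholders as above; the exact log file names depend on the volume name, brick paths and mount point):

# FUSE mount log (file name is derived from the mount point)
grep -i error /var/log/glusterfs/mnt-testvol.log

# Brick logs
grep -i error /var/log/glusterfs/bricks/*.log

# Rebalance log
grep -i error /var/log/glusterfs/testvol-rebalance.log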
Hello Sas,

Can you please update the qe_test_coverage for the bug?

Regards,
Vivek Das
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (RHGS 3.5.z Batch Update 5 glusterfs bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3729
(In reply to Vivek Das from comment #14)
> Hello Sas,
>
> Can you please update the qe_test_coverage for the bug?
>
> Regards,
> Vivek Das

Thanks Vivek. The Polarion test case [1] is available for this scenario.

[1] - https://polarion.engineering.redhat.com/polarion/#/project/RedHatHyperConvergedInfraStructure/workitem?id=RHHI-1263