Bug 1957641

Summary: Rebalance doesn't migrate some sparse files
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Xavi Hernandez <jahernan>
Component: distribute
Assignee: Xavi Hernandez <jahernan>
Status: CLOSED ERRATA
QA Contact: SATHEESARAN <sasundar>
Severity: urgent
Priority: urgent
Version: rhgs-3.5
CC: nravinas, pasik, pprakash, rhs-bugs, sajmoham, sasundar, sheggodu, vdas
Keywords: ZStream
Target Release: RHGS 3.5.z Batch Update 7
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-6.0-57
Doc Type: No Doc Update
Last Closed: 2021-10-05 07:56:28 UTC
Type: Bug
Bug Blocks: 1936177
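
For context, the files in question are sparse files, i.e. files whose
apparent size is larger than the space actually allocated on disk. A
minimal, illustrative way to create one and confirm it is sparse (the
mount path below is hypothetical):

    # create a 1 GiB file without allocating data blocks
    truncate -s 1G /mnt/testvol/sparsefile
    # apparent size (1G, from ls) vs. allocated size (~0, from du)
    ls -lh /mnt/testvol/sparsefile
    du -h /mnt/testvol/sparsefile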

Comment 2 nravinas 2021-05-06 09:07:25 UTC
*** Bug 1947308 has been marked as a duplicate of this bug. ***

Comment 12 SATHEESARAN 2021-07-22 10:49:01 UTC
Verified with the RHGS 3.5.4 interim build (glusterfs-6.0-59.el8rhgs)
using the steps from comment 0.

The steps created a replica 3 volume, FUSE-mounted that volume, created
a few sparse files, then added 3 more bricks and triggered a rebalance.
The rebalance operation completed successfully, as reported by the
gluster CLI rebalance status command, and no 'ERROR' messages were seen
in either the FUSE mount logs or the brick logs.

Repeated the test with sharding enabled; no issues were seen.
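
For reference, the verification roughly maps to the following gluster
CLI sequence (hostnames, brick paths, and the volume name are
illustrative, not taken from the test environment):

    # create and start a replica 3 volume
    gluster volume create testvol replica 3 \
        server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3
    gluster volume start testvol

    # FUSE mount the volume and create a few sparse files
    mount -t glusterfs server1:/testvol /mnt/testvol
    for i in 1 2 3; do truncate -s 1G /mnt/testvol/sparse-$i; done

    # add 3 more bricks and trigger a rebalance
    gluster volume add-brick testvol \
        server1:/bricks/b4 server2:/bricks/b5 server3:/bricks/b6
    gluster volume rebalance testvol start
    gluster volume rebalance testvol status

    # optional: repeat the test with sharding enabled
    gluster volume set testvol features.shard enable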

Comment 13 SATHEESARAN 2021-07-22 10:50:47 UTC
(In reply to SATHEESARAN from comment #12)
> Verified with the RHGS 3.5.4 interim build (glusterfs-6.0-59.el8rhgs)
> using the steps from comment 0.
> 
> The steps created a replica 3 volume, FUSE-mounted that volume, created
> a few sparse files, then added 3 more bricks and triggered a rebalance.
> The rebalance operation completed successfully, as reported by the
> gluster CLI rebalance status command, and no 'ERROR' messages were seen
> in either the FUSE mount logs or the brick logs.
> 
> Repeated the test with sharding enabled; no issues were seen.

Also observed that no ERROR messages are logged in /var/log/glusterfs/*rebalance.log.
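
Since glusterfs tags error-level log entries with 'E', a quick check
along these lines should come back empty (default log paths assumed):

    # error-level entries in the rebalance and brick logs
    grep ' E ' /var/log/glusterfs/*rebalance.log
    grep ' E ' /var/log/glusterfs/bricks/*.log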

Comment 14 Vivek Das 2021-08-11 09:19:43 UTC
Hello Sas,

Can you please update the qe_test_coverage for the bug?

Regards,
Vivek Das

Comment 16 errata-xmlrpc 2021-10-05 07:56:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RHGS 3.5.z Batch Update 5 glusterfs bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3729

Comment 18 SATHEESARAN 2021-10-12 05:25:14 UTC
(In reply to Vivek Das from comment #14)
> Hello Sas,
> 
> Can you please update the qe_test_coverage for the bug?
> 
> Regards,
> Vivek Das

Thanks Vivek.
Polarion test case [1] is available for this scenario.

[1] - https://polarion.engineering.redhat.com/polarion/#/project/RedHatHyperConvergedInfraStructure/workitem?id=RHHI-1263