Bug 1957641 - Rebalance doesn't migrate some sparse files
Summary: Rebalance doesn't migrate some sparse files
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: distribute
Version: rhgs-3.5
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 7
Assignee: Xavi Hernandez
QA Contact: SATHEESARAN
URL:
Whiteboard:
Duplicates: 1947308
Depends On:
Blocks: 1936177
 
Reported: 2021-05-06 08:16 UTC by Xavi Hernandez
Modified: 2024-06-14 01:28 UTC
CC List: 8 users

Fixed In Version: glusterfs-6.0-57
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-10-05 07:56:28 UTC
Embargoed:




Links:
Red Hat Product Errata RHBA-2021:3729 (last updated 2021-10-05 07:56:38 UTC)

Comment 2 nravinas 2021-05-06 09:07:25 UTC
*** Bug 1947308 has been marked as a duplicate of this bug. ***

Comment 12 SATHEESARAN 2021-07-22 10:49:01 UTC
Verified with the RHGS 3.5.4 interim build (glusterfs-6.0-59.el8rhgs) using the steps from comment 0.

The steps created a replica 3 volume, fuse-mounted that volume, created a few sparse files,
then added 3 more bricks and triggered a rebalance.
The rebalance operation completed successfully, as reported by the gluster CLI rebalance
status command, and no 'ERROR' messages were seen in either the fuse mount logs or the brick logs.

Repeated the test with sharding turned on; no issues were seen.

Comment 13 SATHEESARAN 2021-07-22 10:50:47 UTC
(In reply to SATHEESARAN from comment #12)

Also observed that no ERROR messages were logged in /var/log/glusterfs/*rebalance.log.

Comment 14 Vivek Das 2021-08-11 09:19:43 UTC
Hello Sas,

Can you please update the qe_test_coverage for the bug?

Regards,
Vivek Das

Comment 16 errata-xmlrpc 2021-10-05 07:56:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RHGS 3.5.z Batch Update 5 glusterfs bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3729

Comment 18 SATHEESARAN 2021-10-12 05:25:14 UTC
(In reply to Vivek Das from comment #14)

Thanks, Vivek.
A Polarion test case [1] is available for this scenario.

[1] - https://polarion.engineering.redhat.com/polarion/#/project/RedHatHyperConvergedInfraStructure/workitem?id=RHHI-1263

