Bug 1638333

Summary: File size was not truncated to zero for all files when truncate was run while rebalance was in progress.
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Sayalee <saraut>
Component: distribute
Assignee: Tamar Shacked <tshacked>
Status: CLOSED UPSTREAM
QA Contact: Pranav Prakash <prprakas>
Severity: medium
Docs Contact:
Priority: high
Version: rhgs-3.4
CC: aspandey, madam, moagrawa, nladha, prprakas, rhs-bugs, saraut, sasundar, seamurph, sheggodu, tshacked, vdas
Target Milestone: ---
Keywords: Triaged, ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1745967 (view as bug list)
Environment:
Last Closed: 2022-06-23 18:22:56 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1745967

Description Sayalee 2018-10-11 11:00:10 UTC
Description of problem:
After triggering the rebalance process, the truncate command was run simultaneously on the files from the mount point. Rebalance completed successfully and the truncate command did not report any error, but running "# ll ." showed that many files had not been truncated to zero as the truncate command requested.

Version-Release number of selected component (if applicable):
3.12.2-22

How reproducible:
2/2

Steps to Reproduce:
1. Create a distributed-replicated volume (e.g. 3*3)

2. Start and mount the volume on client node.

3. Add brick to the volume using
# gluster v add-brick volname replica 3 brick10 brick11 brick12

4. From the client node create files on the mount point
e.g.
# for i in {1..8000}; do dd if=/dev/urandom of=file_$i bs=1M count=1; done

5. Trigger rebalance.

6. While rebalance is still in progress, start truncating the files from the mount point
e.g.
# for i in {1..8000}; do truncate -s 0 file_$i; done

7. Wait for the migration to complete.

8. Now from the mount point check the size of all the files.
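The truncate-and-verify portion of steps 6 and 8 can be sketched as below. This is a minimal local stand-in (plain files in a temporary directory rather than an actual Gluster mount, and a smaller file count than the report's 8000), so it only illustrates the check, not the rebalance race itself:

```shell
# Stand-in for the mount point: a throwaway local directory.
# Directory and file count are illustrative, not from the original report.
dir=$(mktemp -d)
cd "$dir"

# Create non-empty files, then truncate them all to zero bytes (step 6).
for i in {1..20}; do head -c 1024 /dev/urandom > "file_$i"; done
for i in {1..20}; do truncate -s 0 "file_$i"; done

# Step 8: count files whose size is still non-zero.
# On a healthy volume this should print 0; the bug manifests as a non-zero count.
nonzero=$(find . -maxdepth 1 -name 'file_*' -size +0c | wc -l)
echo "files not truncated: $nonzero"
```

On a local filesystem the count is always 0; on the affected RHGS build, running the equivalent check on the mount point after step 7 reported many non-zero files.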


Actual results:
File size for many files was not truncated to zero.

Expected results:
All the files should have size zero.

Additional info:
SOS reports and gluster-health-check reports will be shared.

Comment 10 SATHEESARAN 2021-05-04 09:01:52 UTC
The severity associated with this bug is medium, and there are no customer issues reported against it.
Also, the issue was last reported on 3.5.2. Based on these facts, proposing this bug for later releases.
Meanwhile, QE will try running the test with RHGS 3.5.4 and update the bug with relevant information.

Considering the above, proposing to drop this bug from 3.5.5

@Sunil, Please share your thoughts

Comment 11 Sunil Kumar Acharya 2021-05-04 14:52:10 UTC
(In reply to SATHEESARAN from comment #10)
> The severity associated with this bug is medium, and there are no customer
> issues reported against it.
> Also, the issue was last reported on 3.5.2. Based on these facts,
> proposing this bug for later releases.
> Meanwhile, QE will try running the test with RHGS 3.5.4 and update the bug
> with relevant information.
> 
> Considering the above, proposing to drop this bug from 3.5.5
> 
> @Sunil, Please share your thoughts

This is a long-pending BZ to fix the behaviour. Do we have any plans to pick this up in future batch updates? If not, can we close this BZ?

Comment 12 SATHEESARAN 2021-05-04 15:27:31 UTC
(In reply to Sunil Kumar Acharya from comment #11)
> (In reply to SATHEESARAN from comment #10)
> > The severity associated with this bug is medium, and there are no customer
> > issues reported against it.
> > Also, the issue was last reported on 3.5.2. Based on these facts,
> > proposing this bug for later releases.
> > Meanwhile, QE will try running the test with RHGS 3.5.4 and update the bug
> > with relevant information.
> > 
> > Considering the above, proposing to drop this bug from 3.5.5
> > 
> > @Sunil, Please share your thoughts
> 
> This is a long-pending BZ to fix the behaviour. Do we have any plans to pick
> this up in future batch updates? If not, can we close this BZ?

I would like to propose the following:
1. Move this bug out of RHGS 3.5.5
2. QE will reproduce the same test steps with RHGS 3.5.5 ( or RHGS 3.5.4 )
3. Estimate the impact of the issue
4. Devise a workaround, if possible.
5. Based on 3 and 4, we can either CLOSE this bug or target it for future releases.
Hope this helps.

@Sunil, moving this bug out of 3.5.5

Comment 42 Pranav Prakash 2022-02-23 04:17:13 UTC
Hi @

Comment 49 Red Hat Bugzilla 2023-09-15 01:27:44 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 365 days