Description of problem:
While a rebalance was in progress, truncate commands were issued on the mount point. The rebalance completed successfully and the truncate commands did not report any error, but running "# ll ." on the mount point showed that many files had not been truncated to zero as requested.

Version-Release number of selected component (if applicable):
3.12.2-22

How reproducible:
2/2

Steps to Reproduce:
1. Create a distributed-replicated volume (e.g. 3x3).
2. Start the volume and mount it on a client node.
3. Add bricks to the volume, e.g.:
   # gluster v add-brick volname replica 3 brick10 brick11 brick12
4. From the client node, create files on the mount point, e.g.:
   # for i in {1..8000}; do dd if=/dev/urandom of=file_$i bs=1M count=1; done
5. Trigger rebalance.
6. While rebalance is still in progress, start truncating the files from the mount point, e.g.:
   # for i in {1..8000}; do truncate -s 0 file_$i; done
7. Wait for the migration to complete.
8. From the mount point, check the size of all the files.

Actual results:
Many files were not truncated to zero.

Expected results:
All the files should have size zero.

Additional info:
Sos reports and gluster-health-check reports will be shared. A consolidated sketch of the reproduction steps follows below.
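For anyone re-running this scenario, here is a minimal end-to-end sketch of the steps above. The host names (server1..server3), brick paths (/bricks/b1../bricks/b12), volume name (testvol), and mount point (/mnt/testvol) are assumptions for illustration; they are not taken from the original setup in the sos reports.

Create a 3x3 distributed-replicated volume and mount it on the client:

# gluster volume create testvol replica 3 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3 \
    server1:/bricks/b4 server2:/bricks/b5 server3:/bricks/b6 \
    server1:/bricks/b7 server2:/bricks/b8 server3:/bricks/b9
# gluster volume start testvol
# mount -t glusterfs server1:/testvol /mnt/testvol

Populate the mount point with 8000 files of 1 MiB each:

# cd /mnt/testvol
# for i in {1..8000}; do dd if=/dev/urandom of=file_$i bs=1M count=1; done

Expand the volume and start the rebalance, then truncate the files from the client while migration is still in progress:

# gluster volume add-brick testvol replica 3 \
    server1:/bricks/b10 server2:/bricks/b11 server3:/bricks/b12
# gluster volume rebalance testvol start
# for i in {1..8000}; do truncate -s 0 file_$i; done

Once the rebalance status shows completed, count the files whose size is still greater than zero; any non-zero count reproduces the bug:

# gluster volume rebalance testvol status
# find /mnt/testvol -maxdepth 1 -name 'file_*' -size +0c | wc -l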
The severity associated with this bug is medium and there are no customer issues reported against it. The issue was last reported on 3.5.2. Based on these facts, proposing this bug for a later release. Meanwhile, QE will try running the test with RHGS 3.5.4 and update the bug with relevant information.

Considering the above, proposing to drop this bug from 3.5.5.

@Sunil, please share your thoughts.
(In reply to SATHEESARAN from comment #10)
> The severity associated with this bug is medium and there are no
> customer issues reported against it. The issue was last reported on
> 3.5.2. Based on these facts, proposing this bug for a later release.
> Meanwhile, QE will try running the test with RHGS 3.5.4 and update
> the bug with relevant information.
>
> Considering the above, proposing to drop this bug from 3.5.5.
>
> @Sunil, please share your thoughts.

This is a long-pending BZ to fix this behaviour. Do we have any plans to pick it up in a future batch update? If not, can we close this BZ?
(In reply to Sunil Kumar Acharya from comment #11)
> (In reply to SATHEESARAN from comment #10)
> > The severity associated with this bug is medium and there are no
> > customer issues reported against it. The issue was last reported on
> > 3.5.2. Based on these facts, proposing this bug for a later release.
> > Meanwhile, QE will try running the test with RHGS 3.5.4 and update
> > the bug with relevant information.
> >
> > Considering the above, proposing to drop this bug from 3.5.5.
> >
> > @Sunil, please share your thoughts.
>
> This is a long-pending BZ to fix this behaviour. Do we have any plans
> to pick it up in a future batch update? If not, can we close this BZ?

I would like to propose the following:
1. Move this bug out of RHGS 3.5.5.
2. QE will reproduce the same test steps with RHGS 3.5.5 (or RHGS 3.5.4).
3. Estimate the impact of the issue.
4. Devise a workaround, if possible.
5. Based on 3 and 4, we can either CLOSE this bug or target it for a future release.

Hope this helps.

@Sunil, moving this bug out of 3.5.5.
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 365 days.