+++ This bug was initially created as a clone of Bug #1167789 +++

Description of problem:
Misleading message in the logs about available disk space.

Version-Release number of selected component (if applicable):
3.6.0.33-1.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a volume and some data.
2. Add one more brick; make sure there is some free-space discrepancy between the bricks.
3. Start the rebalance without the force option:
   gluster volume rebalance <vol> start
4. Check the rebalance status and confirm that some files were skipped.

Actual results:
[2014-11-25 09:10:32.433141] I [dht-rebalance.c:902:dht_migrate_file] 0-gs-dht: /file9: attempting to move from gs-client-0 to gs-client-4
[2014-11-25 09:10:32.437164] W [MSGID: 109023] [dht-rebalance.c:568:__dht_check_free_space] 0-gs-dht: data movement attempted from node (gs-client-0:209540896) with higher disk space to a node (gs-client-4:209540960) with lesser disk space, file { blocks:2048, name:(/file9) }
[2014-11-25 09:10:32.438441] I [dht-common.c:1563:dht_lookup_everywhere_cbk] 0-gs-dht: attempting deletion of stale linkfile /file10 on gs-client-4 (hashed subvol is gs-client-6)

Here the available space on the source is gs-client-0:209540896 and on the destination is gs-client-4:209540960, yet the log claims the source has higher space and the destination has lesser, which is misleading. Since the free-space calculation is done as follows:

if ((dst_statfs_blocks - stbuf->ia_blocks) < (src_statfs_blocks + stbuf->ia_blocks))

the proper values have to be printed in the logs.

--- Additional comment from RHEL Product and Program Management on 2014-11-25 07:53:37 EST ---

Since this issue was entered in Bugzilla, the release flag has been set to "?" to ensure that it is properly evaluated for this release.

--- Additional comment from Sakshi on 2015-04-22 09:03:44 EDT ---

What is the size of the brick? Also, what do you mean by space discrepancy? Should the brick added later be smaller in size than the existing bricks?
REVIEW: http://review.gluster.org/14612 (dht: proper log message if data migration skipped due to space) posted (#1) for review on master by Sakshi Bansal
REVIEW: http://review.gluster.org/15345 (dht: Proper log message if data migration is skipped) posted (#1 through #8) for review on master by ankitraj
COMMIT: http://review.gluster.org/15345 committed in master by Raghavendra G (rgowdapp)
------
commit ed430fc04e57c89d08cfdd1bb5e408c5baf53adf
Author: ankit <anraj>
Date:   Tue Aug 30 12:55:32 2016 +0530

    dht: Proper log message if data migration is skipped

    Change-Id: Id0af15a2aec96bdbe675b4c959b56f0fc8e72504
    BUG: 1341948
    Signed-off-by: ankit <anraj>
    Reviewed-on: http://review.gluster.org/15345
    Tested-by: ankitraj
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: N Balachandran <nbalacha>
    CentOS-regression: Gluster Build System <jenkins.org>
*** Bug 1167789 has been marked as a duplicate of this bug. ***
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/