Upstream 3.6 feature page: http://www.gluster.org/community/documentation/index.php/Features/heterogeneous-bricks

* Assign subvolume weights based on total space.
* Assign subvolume weights based on free space.
* Assign all (or nearly all) weight to specific subvolumes.
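For illustration only, a minimal C sketch of how per-subvolume weights proportional to brick size could be derived. This is not the code from the patch; the struct and function names here are hypothetical, and brick sizes are assumed to come from a statfs-style query on each brick.

~~~
/* Hypothetical sketch: weight each subvolume by its share of the
 * aggregate brick size (total or free), per the feature page options.
 * Not the actual DHT implementation. */
#include <stdint.h>
#include <stdio.h>

struct subvol {
    const char *name;
    uint64_t    total_bytes;  /* e.g. from statfs on the brick */
    uint64_t    free_bytes;
};

static void
compute_weights(const struct subvol *subvols, int count, int use_free,
                double *weights)
{
    uint64_t sum = 0;
    for (int i = 0; i < count; i++)
        sum += use_free ? subvols[i].free_bytes : subvols[i].total_bytes;
    for (int i = 0; i < count; i++) {
        uint64_t size = use_free ? subvols[i].free_bytes
                                 : subvols[i].total_bytes;
        /* Fall back to equal weights if no size info is available. */
        weights[i] = sum ? (double)size / (double)sum : 1.0 / count;
    }
}

int
main(void)
{
    struct subvol bricks[] = {
        { "brick-a", 1ULL << 40, 1ULL << 39 },  /* 1 TiB, 512 GiB free */
        { "brick-b", 4ULL << 40, 3ULL << 40 },  /* 4 TiB, 3 TiB free */
    };
    double w[2];

    compute_weights(bricks, 2, 0, w);
    printf("%s=%.2f %s=%.2f\n", bricks[0].name, w[0],
           bricks[1].name, w[1]);
    return 0;
}
~~~

Weighting by free space rather than total space is what would make partially filled bricks attract fewer new files, matching the second option above.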
REVIEW: http://review.gluster.org/8093 (dht: support heterogeneous brick sizes) posted (#3) for review on master by Jeff Darcy (jdarcy)
COMMIT: http://review.gluster.org/8093 committed in master by Vijay Bellur (vbellur)
------
commit 99685f18f190a73f2a46478cac0b09f4c59834b1
Author: Jeff Darcy <jdarcy>
Date: Tue Jun 17 13:42:45 2014 +0000

dht: support heterogeneous brick sizes

Calculation of layouts now considers the size of each brick, so that smaller bricks don't get an "unfair" share of allocations and start returning ENOSPC while the larger bricks still have plenty of space.

The observation has been made that some clients might get ENOTCONN when trying to fetch disk-size information, and end up calculating layouts differently. The following meta-observations can be made.

(1) This scenario is extremely unlikely in configurations with AFR.

(2) The most likely consequence of this scenario is that some files will be placed sub-optimally by the client with the obsolete (non-weighted) layout. They'll still be found anyway, so this isn't a show stopper.

(3) Without this patch it's *guaranteed* that some files will be placed sub-optimally, because any layout that fails to account for brick sizes is sub-optimal.

(4) We shouldn't be doing fix-layout from two nodes simultaneously anyway. That's inefficient at best. Any instances of such behavior are separate bugs, which should be fixed separately.

(5) In the most extreme edge case, two nodes doing weighted and non-weighted layout fixes could race and end up creating an internally inconsistent layout. This condition is still transient; it will be detected and repaired automatically the next time anyone fetches the layout. (If it's not, that's also a preexisting bug that can show up in other contexts.)

In conclusion, it's not the purpose of this patch to fix bugs elsewhere in DHT. Its purpose is to make life incrementally better for users who add new hardware with larger disks etc. than the older equipment. It's only one part of an ongoing process to improve layout management and repair, all the way up to support for multiple hash rings or tiering.

Change-Id: I05eb6f9eface9cdaf8622e0260c8c7f29020447f
BUG: 1114680
Signed-off-by: Jeff Darcy <jdarcy>
Reviewed-on: http://review.gluster.org/8093
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Raghavendra G <rgowdapp>
Reviewed-by: Shyamsundar Ranganathan <srangana>
Reviewed-by: Vijay Bellur <vbellur>
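To make the weighted layout calculation concrete, here is an illustrative C sketch (not the actual dht selfheal code; all names are hypothetical) of how a 32-bit hash ring could be divided among subvolumes in proportion to weights like those computed above:

~~~
/* Illustrative sketch, not the patch's implementation: split the
 * 32-bit DHT hash space into contiguous ranges sized in proportion
 * to each subvolume's weight. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct range {
    uint32_t start;
    uint32_t stop;
};

static void
assign_ranges(const double *weights, int count, struct range *out)
{
    double   cum   = 0.0;
    uint32_t start = 0;

    for (int i = 0; i < count; i++) {
        cum += weights[i];
        /* The last subvolume always ends at the top of the ring so
         * that rounding never leaves a hole in the layout. */
        uint32_t stop = (i == count - 1)
                            ? UINT32_MAX
                            : (uint32_t)(cum * (double)UINT32_MAX);
        out[i].start = start;
        out[i].stop  = stop;
        start = stop + 1;
    }
}

int
main(void)
{
    double       w[] = { 0.2, 0.8 };  /* e.g. a 1 TiB and a 4 TiB brick */
    struct range r[2];

    assign_ranges(w, 2, r);
    for (int i = 0; i < 2; i++)
        printf("subvol %d: 0x%08" PRIx32 " - 0x%08" PRIx32 "\n",
               i, r[i].start, r[i].stop);
    return 0;
}
~~~

Pinning the final range to UINT32_MAX reflects the general requirement that a layout must cover the whole ring without holes or overlaps; a layout that fails that check is exactly the kind of transient inconsistency observation (5) describes.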
REVIEW: http://review.gluster.org/8421 (tests: weighted-rebalance.t shouldn't write to '/dev/tty') posted (#1) for review on master by Harshavardhana (harsha)
REVIEW: http://review.gluster.org/8421 (tests: weighted-rebalance.t shouldn't write to '/dev/tty') posted (#2) for review on master by Harshavardhana (harsha)
COMMIT: http://review.gluster.org/8421 committed in master by Vijay Bellur (vbellur)
------
commit 9978e61dc51d0318f92b1f2c2cbebfe9ce70b2ea
Author: Harshavardhana <harsha>
Date: Tue Aug 5 12:35:10 2014 -0700

tests: weighted-rebalance.t shouldn't write to '/dev/tty'

On our Jenkins instance "/dev/tty" doesn't exist, so the test's output fails with the message below:

~~~
./tests/features/weighted-rebalance.t: \
line 72: /dev/tty: No such device or address
~~~

Comment out the debugging code.

Change-Id: Iba29b80c8ba2dcaab3d6654d7c54332a915bffb8
BUG: 1114680
Signed-off-by: Harshavardhana <harsha>
Reviewed-on: http://review.gluster.org/8421
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Jeff Darcy <jdarcy>
Reviewed-by: Vijay Bellur <vbellur>
A beta release for GlusterFS 3.6.0 has been made available [1]. Please verify whether this release resolves this bug report for you. If the glusterfs-3.6.0beta1 release does not resolve this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure (possibly an "updates-testing" repository) for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users