REVIEW: http://review.gluster.org/7124 (DHT/Rebalance : Hard link Migration Failure) posted (#1) for review on master by susant palai (spalai)
REVIEW: http://review.gluster.org/7124 (DHT/Rebalance : Hard link Migration Failure) posted (#2) for review on master by susant palai (spalai)
REVIEW: http://review.gluster.org/7124 (DHT/Rebalance : Hard link Migration Failure) posted (#3) for review on master by susant palai (spalai)
REVIEW: http://review.gluster.org/7124 (DHT/Rebalance : Hard link Migration Failure) posted (#4) for review on master by susant palai (spalai)
COMMIT: http://review.gluster.org/7124 committed in master by Vijay Bellur (vbellur)
------
commit 9a3de81fe5c42c0495dccc5877cecbc2edb81f3c
Author: Susant Palai <spalai>
Date: Tue Feb 18 13:03:50 2014 +0000

DHT/Rebalance : Hard link Migration Failure

Problem: __is_file_migratable used to return ENOTSUP in all cases, so every hard-linked file added to the failure count and the remove-brick status showed a failure for each of these files.

Solution: Make gf_defrag_handle_hardlink return 'ret = -2', which is deemed a success by the caller. Otherwise dht_migrate_file would try to migrate each of the hard links, which is not intended.

Change-Id: Iff74f6634fb64e4b91fc5d016e87ff1290b7a0d6
BUG: 1066798
Signed-off-by: Susant Palai <spalai>
Reviewed-on: http://review.gluster.org/7124
Reviewed-by: Raghavendra G <rgowdapp>
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Vijay Bellur <vbellur>
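For illustration, below is a minimal, compilable C sketch of the return-code convention the commit describes: a file with multiple hard links is routed to the hard-link handler, whose -2 return is treated as success by the migration caller instead of being counted as a failure. The function names, bodies and the failure counter here are simplified stand-ins, not the actual xlators/cluster/dht code.

/* Sketch of the -2 ("handled hard link, not a failure") convention.
 * All names and bodies are illustrative assumptions. */
#include <errno.h>
#include <stdio.h>

/* Sentinel meaning "hard link handled elsewhere; treat as success". */
#define SKIP_HARDLINK -2

static int failures = 0;   /* what remove-brick status would report */

/* Stand-in for __is_file_migratable(): hard-linked files are routed to the
 * hard-link handler, which now returns -2 instead of -ENOTSUP. */
static int is_file_migratable(unsigned int nlink)
{
    if (nlink > 1)
        return SKIP_HARDLINK;   /* hard-link path took over */
    return 0;                   /* plain file: go ahead and migrate */
}

/* Stand-in for the dht_migrate_file() caller: only genuine errors bump the
 * failure count; -2 means the file is a hard link and is not migrated here. */
static void migrate_file(const char *name, unsigned int nlink)
{
    int ret = is_file_migratable(nlink);

    if (ret == SKIP_HARDLINK) {
        printf("%-16s skipped (hard link), not a failure\n", name);
    } else if (ret < 0) {
        failures++;
        printf("%-16s failed: %d\n", name, ret);
    } else {
        printf("%-16s migrated\n", name);
    }
}

int main(void)
{
    migrate_file("linked-file", 3);
    migrate_file("plain-file", 1);
    printf("failures reported: %d\n", failures);
    return 0;
}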
REVIEW: http://review.gluster.org/7943 (Rebalance: Avoid setting all xattrs) posted (#1) for review on master by susant palai (spalai)
REVIEW: http://review.gluster.org/7943 (Rebalance: Avoid setting other component's xattrs) posted (#2) for review on master by susant palai (spalai)
COMMIT: http://review.gluster.org/7943 committed in master by Vijay Bellur (vbellur)
------
commit 4e1ca1be6c26846e876d4181c9f2adea37856ded
Author: Susant Palai <spalai>
Date: Sun Jun 1 04:37:22 2014 -0400

Rebalance: Avoid setting other component's xattrs

Problem 1: In "gf_defrag_handle_hardlink" we used to do setxattr on internal afr keys, which led to afr aborting the operation with "operation not supported".

Solution: Send a new xattr dictionary containing only the required keys.

Problem 2: Hard-link migration tries to create linkto files for the 2nd to (n-1)th hard link of a file on their respective hashed subvolumes. The linkto file may already exist on the hashed subvolume, possibly because of an earlier lookup or because the hashed subvolume on the old graph is the same as on the new graph. Hence a new link call may fail with EEXIST.

Solution: Log the message at DEBUG level for EEXIST; otherwise log at ERROR level.

Change-Id: I51f9bfc8cf5b9d8e94a9d614391662fddc0874d4
BUG: 1066798
Signed-off-by: Susant Palai <spalai>
Reviewed-on: http://review.gluster.org/7943
Reviewed-by: Raghavendra G <rgowdapp>
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Vijay Bellur <vbellur>
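As a rough illustration of the EEXIST handling described under Problem 2, the following self-contained C sketch creates a linkto-style hard link and logs an already-existing target at DEBUG level instead of ERROR. The log macros, helper name and demo paths are assumptions for the example, not the actual gf_log() usage in DHT.

/* EEXIST on the linkto creation is expected and benign; anything else is a
 * real error.  Names and paths here are illustrative only. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define LOG_DEBUG(fmt, ...) fprintf(stderr, "[DEBUG] " fmt "\n", __VA_ARGS__)
#define LOG_ERROR(fmt, ...) fprintf(stderr, "[ERROR] " fmt "\n", __VA_ARGS__)

/* Create a linkto-style hard link; the target may already exist because of
 * an earlier lookup or because the old and new graphs hash to the same
 * subvolume, so EEXIST is logged at DEBUG and not treated as a failure. */
static int create_linkto(const char *src, const char *dst)
{
    if (link(src, dst) == 0)
        return 0;

    if (errno == EEXIST) {
        LOG_DEBUG("linkto %s already exists, continuing", dst);
        return 0;                      /* not a migration failure */
    }

    LOG_ERROR("link %s -> %s failed: %s", src, dst, strerror(errno));
    return -errno;
}

int main(void)
{
    /* Set up a demo source file so the first call succeeds and the second
     * call hits the EEXIST path. */
    FILE *f = fopen("/tmp/demo-src", "w");
    if (f)
        fclose(f);

    create_linkto("/tmp/demo-src", "/tmp/demo-linkto");  /* creates the link  */
    create_linkto("/tmp/demo-src", "/tmp/demo-linkto");  /* EEXIST -> DEBUG   */

    unlink("/tmp/demo-linkto");
    unlink("/tmp/demo-src");
    return 0;
}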
A beta release for GlusterFS 3.6.0 has been released [1]. Please verify if the release solves this bug report for you. In case the glusterfs-3.6.0beta1 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update (possibly an "updates-testing" repository) infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users