REVIEW: http://review.gluster.org/12156 (dht/remove-brick: Avoid data loss for hard link migration) posted (#1) for review on release-3.7 by Susant Palai (spalai)
REVIEW: http://review.gluster.org/12156 (dht/remove-brick: Avoid data loss for hard link migration) posted (#2) for review on release-3.7 by Susant Palai (spalai)
REVIEW: http://review.gluster.org/12156 (dht/remove-brick: Avoid data loss for hard link migration) posted (#3) for review on release-3.7 by Susant Palai (spalai)
COMMIT: http://review.gluster.org/12156 committed in release-3.7 by Raghavendra G (rgowdapp)
------
commit 23e522eea17e15b37d395e2005139dd3d5a9e3a1
Author: Susant Palai <spalai>
Date: Fri Sep 4 05:14:05 2015 -0400

dht/remove-brick: Avoid data loss for hard link migration

Problem: If the hashed subvol of a file has reached cluster.min-free-disk, a create operation places a linkto file on the hashed subvol and the data file on some other brick. To create the linkto file we populate the dictionary with the linkto key, whose value is the cached subvol. After the linkto file is created successfully, the linkto key-value pair is not deleted from the dictionary, so the data file also ends up with a linkto xattr that points to itself. It looks something like this:

    client-0                    client-1
    ---------T file             rwx------ file
    linkto.xattr=client-1       linkto.xattr=client-1

Now for the data loss part. Hard link migration depends heavily on the linkto xattr of the data file: its value should be the new hashed subvol of the first hard link encountered after fix-layout. But when migration reads the linkto xattr, it gets the same subvol the file is already sitting on, so source and destination are the same. At the end of migration the source file is truncated and deleted, which in this case is also the destination and the only data file, resulting in data loss.

BUG: 1262197
Change-Id: I5338a5704ac60ca9afb278977e178319266a0cc0
Signed-off-by: Susant Palai <spalai>
Reviewed-on: http://review.gluster.org/12105
Reviewed-by: N Balachandran <nbalacha>
Tested-by: NetBSD Build System <jenkins.org>
Reviewed-by: Raghavendra G <rgowdapp>
Signed-off-by: Susant Palai <spalai>
Reviewed-on: http://review.gluster.org/12156
Tested-by: Gluster Build System <jenkins.com>
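The remedy described in the commit summary is to drop the linkto key from the dictionary once the linkto file has been created, so the subsequent data-file create does not carry a self-referencing linkto xattr. Below is a minimal standalone C sketch of that idea, not GlusterFS source: the dict_t here is a tiny stand-in for the real dictionary, and LINKTO_KEY and the create helper are illustrative assumptions.

    /* Standalone sketch of the dictionary handling around linkto-file creation.
     * NOT GlusterFS code: dict_t, LINKTO_KEY and create_file are simplified
     * stand-ins used only to show the ordering of set/delete around the two
     * create calls. */
    #include <stdio.h>
    #include <string.h>

    #define LINKTO_KEY "trusted.glusterfs.dht.linkto"  /* assumed xattr name */

    typedef struct { char key[64]; char value[64]; int used; } dict_t;

    static void dict_set(dict_t *d, const char *k, const char *v)
    {
        snprintf(d->key, sizeof(d->key), "%s", k);
        snprintf(d->value, sizeof(d->value), "%s", v);
        d->used = 1;
    }

    static void dict_del(dict_t *d, const char *k)
    {
        if (d->used && strcmp(d->key, k) == 0)
            d->used = 0;
    }

    static void create_file(const char *brick, const char *name, const dict_t *xattrs)
    {
        printf("create %s on %s", name, brick);
        if (xattrs->used)
            printf(" with %s=%s", xattrs->key, xattrs->value);
        printf("\n");
    }

    int main(void)
    {
        dict_t params = { 0 };

        /* Hashed subvol (client-0) is full: create a linkto file there that
         * points at the cached subvol (client-1) where the data will live. */
        dict_set(&params, LINKTO_KEY, "client-1");
        create_file("client-0", "file (linkto)", &params);

        /* The fix: remove the linkto key before creating the data file, so
         * the data file does not carry a linkto xattr pointing at itself.
         * Without this dict_del(), the create below would also set the xattr,
         * reproducing the broken layout shown in the commit message. */
        dict_del(&params, LINKTO_KEY);
        create_file("client-1", "file (data)", &params);

        return 0;
    }

Running the sketch prints the linkto file created with the xattr and the data file without it, which matches the intended post-fix layout: migration then reads a linkto value that genuinely differs from where the data file sits, so source and destination can no longer collapse into the same brick.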
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.5, please open a new bug report. glusterfs-3.7.5 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://www.gluster.org/pipermail/gluster-users/2015-October/023968.html [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user