+++ This bug was initially created as a clone of Bug #1348904 +++
+++ This bug was initially created as a clone of Bug #1344908 +++

Description of problem:
=======================
In a scenario where data is copied from a snapshot via the .snaps directory (exposed by USS) to the master mount, the entries get synced to the slave but the files remain 0 bytes in size.

Master:
=======
[root@tia master]# cp -rf /mnt/master/.snaps/geo_snap_multi_text_glusterfs_create_GMT-2016.06.12-07.26.06/thread0 .
[root@tia master]#
[root@tia master]# ls
thread0
[root@tia master]#
[root@tia master]# cd thread0/
[root@tia thread0]# ls
level00  level01  level02
[root@tia thread0]# cd level00/
[root@tia level00]# ll
total 774
-rw-r--r--. 1 root root  81184 Jun 12  2016 575c5b2e%%179OT6LH9E
-rw-r--r--. 1 root root 189899 Jun 12  2016 575c5b2e%%8AKXW79FK8
-rw-r--r--. 1 root root 186345 Jun 12  2016 575c5b2e%%E4UB060CSR
-rw-r--r--. 1 root root 171280 Jun 12  2016 575c5b2e%%VGRDMPC2PL
-rw-r--r--. 1 root root 158768 Jun 12  2016 575c5b2e%%W7WO0PW4W9
drwxr-xr-x. 3 root root   4096 Jun 12  2016 level10
[root@tia level00]#

Slave:
======
[root@tia slave]# ls
thread0
[root@tia slave]# cd thread0/
[root@tia thread0]# ls
level00  level01  level02
[root@tia thread0]# cd level00/
[root@tia level00]# ll
total 4
-rw-r--r--. 1 root root    0 Jun 12  2016 575c5b2e%%179OT6LH9E
-rw-r--r--. 1 root root    0 Jun 12  2016 575c5b2e%%8AKXW79FK8
-rw-r--r--. 1 root root    0 Jun 12  2016 575c5b2e%%E4UB060CSR
-rw-r--r--. 1 root root    0 Jun 12  2016 575c5b2e%%VGRDMPC2PL
-rw-r--r--. 1 root root    0 Jun 12  2016 575c5b2e%%W7WO0PW4W9
drwxr-xr-x. 3 root root 4096 Jun 12  2016 level10
[root@tia level00]#

Version-Release number of selected component (if applicable):
=============================================================

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Create a geo-rep session between the master and slave volumes
2. Create data on the master volume
3. Verify the data syncs to the slave
4. Pause the geo-rep session
5. Take a snapshot of the master volume
6. Start the geo-rep session again (resume it from pause)
7. Remove the data from the master
8. Verify the data is removed from the slave too
9. Enable USS on the master
10. Activate the snapshot
11. Copy the data back from .snaps/<snapshot> to the master root
12. Entries get created on the slave, but the data does not sync

(See the CLI sketch of these steps below.)

Actual results:
===============
Entries get created on the slave, but the file data does not sync; the files stay 0 bytes.

Expected results:
=================
The data, along with the entries, should also sync to the slave.
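A minimal CLI sketch of the reproduction steps, assuming volume names "master" and "slave", slave host "slavehost", mount points under /mnt, and snapshot name "snap1" (all illustrative; only the flow is taken from the steps above):

# 1. Create and start a geo-rep session between master and slave
gluster volume geo-replication master slavehost::slave create push-pem
gluster volume geo-replication master slavehost::slave start

# 2-3. Create data on the master mount and wait for it to sync to the slave
cp -r /some/dataset /mnt/master/thread0

# 4. Pause the geo-rep session
gluster volume geo-replication master slavehost::slave pause

# 5. Take a snapshot of the master volume
gluster snapshot create snap1 master

# 6. Resume the geo-rep session
gluster volume geo-replication master slavehost::slave resume

# 7-8. Remove the data from the master; the removal syncs to the slave
rm -rf /mnt/master/thread0

# 9. Enable USS (user-serviceable snapshots) on the master
gluster volume set master features.uss enable

# 10. Activate the snapshot so it shows up under .snaps
gluster snapshot activate snap1

# 11. Copy the data back from the snapshot to the master root
cp -rf /mnt/master/.snaps/snap1/thread0 /mnt/master/

# 12. On the slave mount, the entries appear but the files are 0 bytes
ls -l /mnt/slave/thread0/level00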
REVIEW: http://review.gluster.org/14776 (feature/gfid-access: Fix nameless lookup on ".gfid") posted (#1) for review on release-3.8 by Kotresh HR (khiremat)
COMMIT: http://review.gluster.org/14776 committed in release-3.8 by Raghavendra G (rgowdapp)
------
commit c6f49213dc04714699691f87bde614c6406c16d5
Author: Kotresh HR <khiremat>
Date:   Wed Jun 22 13:05:10 2016 +0530

feature/gfid-access: Fix nameless lookup on ".gfid"

Backport of http://review.gluster.org/14773

Problem:
In geo-replication, if data is copied from the .snaps directory to the master, the first set of copies after USS is enabled does not get synced to the slave.

Cause:
Enabling USS results in a graph switch. So when the lookup comes for "0x00...0d/gfid1" on the new graph ("0x00...0d" being the gfid of the virtual directory ".gfid"), it fails, as the gfid-access xlator does not handle it.

Fix:
Handle nameless lookup on ".gfid" in the gfid-access xlator.

Change-Id: I32be0064e8fd58068646dbf662432f4a3da14e77
BUG: 1349274
Signed-off-by: Kotresh HR <khiremat>
(cherry picked from commit b37c6d9088851b2ef83ce4e28af642892e5fd268)
Reviewed-on: http://review.gluster.org/14776
Smoke: Gluster Build System <jenkins.org>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
Reviewed-by: Raghavendra G <rgowdapp>
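For context: the ".gfid" directory named in the fix is the virtual directory exposed by the gfid-access xlator, through which geo-replication resolves files on the slave purely by gfid instead of by path. A minimal sketch of that lookup path, assuming an illustrative mount point and a made-up gfid:

# Mount with the gfid-access xlator loaded (geo-rep mounts the slave
# this way internally); mount point is an assumption for illustration.
mount -t glusterfs -o aux-gfid-mount tia:/slave /mnt/slave-gfid

# A file can then be resolved by gfid alone (a "nameless" lookup)
# through the virtual .gfid directory; the gfid below is made up.
stat /mnt/slave-gfid/.gfid/11111111-2222-3333-4444-555555555555

Per the commit message above, enabling USS triggers a graph switch, and the first lookup on the new graph arrives on the gfid of the virtual ".gfid" directory itself (0x00...0d) rather than by name. Before this patch the gfid-access xlator failed such a lookup, so the entries copied from .snaps were created on the slave but their data never synced.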
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.1, please open a new bug report.

glusterfs-3.8.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.packaging/156
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user