Description of problem:
Accessing snapshots via USS exhibits the following problems:

1) In readdir, the list of dentries is not initialized before it is accessed or filled, causing segfaults.

2) With the USS feature enabled, when snapshots are taken while I/O is in progress, the .snaps directory appears empty for a couple of minutes before its contents show up; i.e. the list of snapshots is not updated dynamically.

3) ls -l on the .snaps directory gives a remote I/O error.

The above issues have been handled in master by these 3 patches:
http://review.gluster.org/8569
http://review.gluster.org/8150
http://review.gluster.org/8324

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
REVIEW: http://review.gluster.org/8764 (USS: initialize a list before using it.) posted (#2) for review on release-3.6 by Raghavendra Bhat (raghavendra)
REVIEW: http://review.gluster.org/8764 (USS: initialize a list before using it.) posted (#3) for review on release-3.6 by Raghavendra Bhat (raghavendra)
REVIEW: http://review.gluster.org/8767 (snapview-server: register a callback with glusterd to get notifications) posted (#1) for review on release-3.6 by Raghavendra Bhat (raghavendra)
REVIEW: http://review.gluster.org/8768 (snapview-server: get the handle if its absent before doing any fop) posted (#1) for review on release-3.6 by Raghavendra Bhat (raghavendra)
COMMIT: http://review.gluster.org/8764 committed in release-3.6 by Vijay Bellur (vbellur)
------
commit ebfb51fd77782f343215251f7641a2b31674f4a1
Author: Raghavendra G <rgowdapp>
Date:   Sat Aug 30 16:33:59 2014 +0530

    USS: initialize a list before using it.

    backport of the patch http://review.gluster.org/8569 by Raghavendra G <rgowdapp>

    Change-Id: I7b25fdf27c6d7ff66d24925bc73d9c6681259d37
    BUG: 1143961
    Signed-off-by: Raghavendra Bhat <raghavendra>
    Reviewed-on: http://review.gluster.org/8764
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
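For context, here is a minimal, self-contained sketch (not the actual GlusterFS source; the list type and dirent struct are simplified stand-ins for gf_dirent_t and the INIT_LIST_HEAD macro) of the failure mode this patch addresses: a dentry list declared on the stack is appended to and traversed without first being initialized, so its head pointers contain garbage and the walk dereferences wild addresses.

    /* Hypothetical sketch of the "uninitialized dentry list" bug class. */
    #include <stdio.h>
    #include <stdlib.h>

    struct list_head {
            struct list_head *next;
            struct list_head *prev;
    };

    /* Point the head at itself so an empty list is safe to walk or append to. */
    static void init_list_head(struct list_head *h)
    {
            h->next = h;
            h->prev = h;
    }

    static void list_add_tail(struct list_head *item, struct list_head *head)
    {
            item->prev = head->prev;
            item->next = head;
            head->prev->next = item;
            head->prev = item;
    }

    struct dirent_entry {
            struct list_head list;   /* must be first so the cast below is valid */
            char name[64];
    };

    int main(void)
    {
            struct dirent_entry entries;   /* stand-in for "gf_dirent_t entries;" */

            /* The fix: initialize the embedded list head *before* filling it.
             * Without this, entries.list.next/prev hold stack garbage and the
             * first list_add_tail() writes through wild pointers (segfault). */
            init_list_head(&entries.list);

            struct dirent_entry *e = calloc(1, sizeof(*e));
            snprintf(e->name, sizeof(e->name), "snap-2014-09-18");
            list_add_tail(&e->list, &entries.list);

            for (struct list_head *p = entries.list.next; p != &entries.list;
                 p = p->next) {
                    struct dirent_entry *d = (struct dirent_entry *)p;
                    printf("%s\n", d->name);
            }

            free(e);
            return 0;
    }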
COMMIT: http://review.gluster.org/8767 committed in release-3.6 by Vijay Bellur (vbellur)
------
commit 82b24d64b9dc89672e6a298648f0e3959b62b1c0
Author: Raghavendra Bhat <raghavendra>
Date:   Thu Sep 18 17:12:33 2014 +0530

    snapview-server: register a callback with glusterd to get notifications

    * As of now snapview-server is polling (sending rpc requests to glusterd)
      to get the latest list of snapshots at some regular time interval
      (non-configurable). Instead of that, register a callback with glusterd
      so that glusterd sends notifications to snapd whenever a snapshot is
      created/deleted and snapview-server can configure itself.

    rebase of the patch http://review.gluster.org/#/c/8150/

    Change-Id: Iee2582b1a823d50c79233a41cf2106f458b40691
    BUG: 1143961
    Signed-off-by: Raghavendra Bhat <raghavendra>
    Reviewed-on: http://review.gluster.org/8767
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
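The sketch below is purely illustrative: every function name in it (snapd_refresh_snaplist, mgmt_register_notify, snap_event_cbk) is hypothetical and does not correspond to the real glusterd/snapd RPC API. It only contrasts the old behaviour (refresh the snapshot list on a fixed timer, which is why .snaps could look empty for minutes after a snapshot was taken) with the notification-driven model this patch introduces.

    /* Hypothetical contrast of polling vs. notification-driven refresh. */
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical: fetch the current snapshot list from glusterd over RPC. */
    static void snapd_refresh_snaplist(void)
    {
            printf("refreshing snapshot list from glusterd\n");
    }

    /* Old model: poll at a fixed interval; snapshots created between two
     * polls stay invisible in .snaps until the next cycle. */
    static void poll_loop(void)
    {
            for (;;) {
                    snapd_refresh_snaplist();
                    sleep(120);
            }
    }

    /* New model: glusterd invokes this callback whenever a snapshot is
     * created or deleted, so snapview-server reconfigures immediately. */
    static void snap_event_cbk(const char *event, const char *snapname)
    {
            printf("got %s notification for snapshot %s\n", event, snapname);
            snapd_refresh_snaplist();
    }

    /* Hypothetical registration hook standing in for the real RPC callback
     * registration the patch performs. */
    typedef void (*snap_notify_fn)(const char *, const char *);
    static snap_notify_fn registered_cbk;

    static void mgmt_register_notify(snap_notify_fn fn)
    {
            registered_cbk = fn;
    }

    int main(void)
    {
            (void)poll_loop;                  /* kept only to show the old model */
            mgmt_register_notify(snap_event_cbk);
            /* Simulate glusterd delivering a notification. */
            registered_cbk("CREATE", "snap-2014-09-18");
            return 0;
    }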
COMMIT: http://review.gluster.org/8768 committed in release-3.6 by Vijay Bellur (vbellur)
------
commit 8f4c223c5f7a7a06c3b73dbb94e85d271bd84fb5
Author: Raghavendra Bhat <raghavendra>
Date:   Thu Jul 17 12:15:54 2014 +0530

    snapview-server: get the handle if its absent before doing any fop

    * Now that the NFS server does inode linking in readdirp, it can resolve
      the gfid (i.e. find the right inode from its inode table) present in
      the filehandle sent by the NFS client on which a fop came. So instead
      of sending a lookup on that entry, it directly sends the fop. But
      snapview-server does not get the handle for the entries in readdirp
      (because doing a lookup on each entry via gfapi would be costly, it
      waits till a lookup is done on that inode to get the handle and the fs
      instance and fill them in the inode context). So when NFS resolves the
      gfid and directly sends the fop, snapview-server will not be able to
      perform the fop as the inode context would not contain the fs instance
      and the handle. So fops should check for the handle before doing gfapi
      calls. If the handle and fs instance are not present in the inode
      context, they should get them by doing an explicit lookup on the entry.

    rebase of the patch http://review.gluster.org/#/c/8324/

    Change-Id: I70c9c8edb2e7ddad79cf6ade3e041b9d02241cd1
    BUG: 1143961
    Reviewed-on: http://review.gluster.org/8768
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
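A minimal sketch of the guard this patch describes, using hypothetical types and helpers (svs_inode_t, svs_get_handle, svs_stat are simplified stand-ins, not the exact snapview-server internals): before issuing any gfapi call, the fop checks whether the fs instance and handle are present in the inode context and, if not, performs an explicit lookup to obtain them first. Per the bug description, the missing handle is what previously surfaced to the client as the remote I/O error on ls -l of .snaps.

    /* Hypothetical "check handle before fop" guard. */
    #include <stdio.h>
    #include <string.h>

    /* Stand-in for the per-inode context snapview-server keeps. */
    typedef struct {
            void *fs;        /* glfs instance for the snapshot volume */
            void *handle;    /* gfapi object handle for this entry    */
            char  name[64];
    } svs_inode_t;

    /* Hypothetical: resolve the entry via gfapi and fill fs + handle. */
    static int svs_get_handle(svs_inode_t *ctx)
    {
            printf("explicit lookup for %s to obtain fs/handle\n", ctx->name);
            ctx->fs = (void *)0x1;      /* pretend the lookup succeeded */
            ctx->handle = (void *)0x2;
            return 0;
    }

    /* Any fop: make sure the handle exists before touching gfapi. */
    static int svs_stat(svs_inode_t *ctx)
    {
            if (!ctx->fs || !ctx->handle) {
                    if (svs_get_handle(ctx) != 0) {
                            fprintf(stderr, "lookup failed, cannot serve fop\n");
                            return -1;
                    }
            }
            printf("issuing gfapi stat using cached handle for %s\n", ctx->name);
            return 0;
    }

    int main(void)
    {
            svs_inode_t ctx;
            memset(&ctx, 0, sizeof(ctx));
            snprintf(ctx.name, sizeof(ctx.name), "file-in-.snaps");

            /* NFS resolved the gfid itself and sent the fop without a lookup,
             * so fs/handle are still unset; the guard fills them in first. */
            return svs_stat(&ctx);
    }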
A beta release for GlusterFS 3.6.0 has been released [1]. Please verify whether the release solves this bug report for you. In case the glusterfs-3.6.0beta1 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure (possibly an "updates-testing" repository) for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/
This bug is being closed because a release that should address the reported issue has been made available. In case the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users