Description of problem:
readdir(p) on '/' in a sharded volume can sometimes lead to an infinite sequence of readdirp calls over the same set of offsets, circling back to offset=0 each time.

RCA:
DHT performs readdirp one subvol at a time, with the entries ordered by offset in ascending order. At some point, when "/.shard" is the last of the several entries read and DHT unwinds the call to the shard xlator, shard deletes the entry corresponding to "/.shard" from the list, since it is not supposed to be exposed on the mount, and unwinds the call with the remaining entries to its parent xlator. When the readdirp result reaches the readdir-ahead translator, it winds the next readdirp at the last entry's offset, which is lower than the offset of "/.shard". In this iteration DHT fetches "/.shard" again, the shard xlator ignores it and unwinds with no entries. In such cases readdir-ahead creates a new readdirp stub with offset = 0, and resuming that call replays the same sequence of events forever, causing the mount to perceive a hang. A standalone sketch of this loop follows below.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
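The loop can be modeled in miniature outside GlusterFS. The following standalone C sketch (hypothetical names and offsets, not actual GlusterFS code) simulates a directory in which ".shard" holds the highest offset, a filtering layer that hides it, and a reader that resumes from the last exposed entry's offset and restarts at offset 0 when a reply comes back empty:

/* Standalone sketch of the readdirp loop (hypothetical, not gluster code).
 * If the filtering layer drops the last fetched entry, the reader resumes
 * from an offset *before* the dropped entry and re-fetches it forever. */
#include <stdio.h>
#include <string.h>

struct entry { const char *name; int off; };

/* Simulated directory contents: ".shard" has the highest offset. */
static struct entry dir[] = {
    { "a",      10 },
    { "b",      20 },
    { ".shard", 30 },
};
#define NENTRIES (sizeof(dir) / sizeof(dir[0]))

/* Fetch all entries with offset > off into out[]; return the count. */
static int readdirp(int off, struct entry *out)
{
    int n = 0;
    for (size_t i = 0; i < NENTRIES; i++)
        if (dir[i].off > off)
            out[n++] = dir[i];
    return n;
}

int main(void)
{
    struct entry buf[NENTRIES];
    int off = 0;

    for (int iter = 0; iter < 6; iter++) {    /* bounded for the demo;
                                                 the real bug never stops */
        int n = readdirp(off, buf);
        int kept = 0;

        for (int i = 0; i < n; i++) {
            if (strcmp(buf[i].name, ".shard") == 0)
                continue;                     /* filtering layer hides it */
            off = buf[i].off;                 /* resume at last KEPT entry */
            kept++;
        }
        printf("iter %d: fetched %d, kept %d, next off %d\n",
               iter, n, kept, off);

        if (n == 0)
            break;                            /* true end of directory */
        if (kept == 0)
            off = 0;                          /* restart from 0: the loop */
    }
    return 0;
}

Running this prints the same fetch/reset cycle repeating: fetch {a, b, .shard}, keep {a, b}, resume at 20, fetch {.shard}, keep nothing, reset to 0, and so on. Under these assumptions, one way out is for the reader to resume from the last offset actually fetched from the lower layer rather than the last entry exposed upward, so the hidden ".shard" entry is never re-read; the actual fix taken in the patch below may differ.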
REVIEW: http://review.gluster.org/10809 (features/shard: Fix issue with readdir(p) fop) posted (#1) for review on master by Krutika Dhananjay (kdhananj)
REVIEW: http://review.gluster.org/10809 (features/shard: Fix issue with readdir(p) fop) posted (#2) for review on master by Krutika Dhananjay (kdhananj)
REVIEW: http://review.gluster.org/10809 (features/shard: Fix issue with readdir(p) fop) posted (#3) for review on master by Krutika Dhananjay (kdhananj)
REVIEW: http://review.gluster.org/10809 (features/shard: Fix issue with readdir(p) fop) posted (#4) for review on master by Krutika Dhananjay (kdhananj)
The fix for this BZ is already present in a GlusterFS release. You can find a clone of this BZ that has been fixed in a GlusterFS release and closed. Hence, this mainline BZ is being closed as well.
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user