Description of problem:

OS installation on a VM image residing on a sharded volume hangs at some point. Statedumps taken on the fuse client at several points reveal that a readv() fop is hung:

<statedump>
...
...
[global.callpool.stack.1.frame.10]
frame=0x7f0b0bcfd150
ref_count=0
translator=dis-rep-shard
complete=0   <==== complete is 0
parent=dis-rep-trace
wind_from=trace_readv
wind_to=FIRST_CHILD(this)->fops->readv
unwind_to=trace_readv_cbk
...
...
[global.callpool.stack.1.frame.14]
frame=0x7f0b0bcd6f40
ref_count=1
translator=dis-rep
complete=0   <======== complete is 0
parent=fuse
wind_from=fuse_readv_resume
wind_to=FIRST_CHILD(this)->fops->readv
unwind_to=fuse_readv_cbk
...
...
</statedump>

This was found to be due to call_count being reduced to -1 at the end of shard_common_lookup_shards(), because of which this particular stack never gets unwound all the way back to FUSE:

(gdb) p (call_frame_t *)0x7f0b0bcfd150
$1 = (call_frame_t *) 0x7f0b0bcfd150
(gdb) p (shard_local_t *)$1->local
$2 = (shard_local_t *) 0x7f0b0086310c
(gdb) p $2->call_count
$3 = -1
(gdb) p $2->eexist_count
$4 = 1

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
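To make the failure mode concrete, here is a minimal, self-contained simulation (plain C, not shard xlator code; struct and function names are made up for illustration) of the fan-out counting idiom cluster translators use: each callback decrements call_count and the parent frame is unwound only when the count reaches exactly zero. If call_count still holds a stale value from an earlier phase when the shard lookups are wound, the final decrement overshoots to -1 and the unwind never fires, which is exactly the complete=0 / call_count=-1 state captured above. Reinitializing the count right before winding, which is the idea behind the fix committed below, keeps the last decrement at exactly zero.

/* Toy simulation of the wind/unwind counting idiom -- NOT GlusterFS
 * source; names are illustrative.  Each "reply" decrements call_count
 * and the parent frame may be unwound only when the count hits 0. */
#include <stdio.h>

struct toy_local {
    int call_count;   /* outstanding sub-operations for this frame */
    int unwound;      /* 1 once the parent frame has been unwound  */
};

static void toy_lookup_cbk(struct toy_local *local)
{
    int remaining = --local->call_count;     /* one shard lookup replied */

    if (remaining == 0) {
        local->unwound = 1;                  /* last reply: unwind frame */
        printf("frame unwound\n");
    } else if (remaining < 0) {
        printf("call_count overshot to %d; frame is never unwound\n",
               remaining);
    }
}

static void toy_lookup_shards(struct toy_local *local, int num_shards,
                              int reinit_call_count)
{
    if (reinit_call_count)
        local->call_count = num_shards;      /* reset right before winding */
    /* otherwise a stale count from the previous phase gets reused */

    for (int i = 0; i < num_shards; i++)
        toy_lookup_cbk(local);               /* pretend each lookup replied */
}

int main(void)
{
    struct toy_local buggy = { .call_count = 0, .unwound = 0 };
    toy_lookup_shards(&buggy, 1, 0);         /* stale count -> -1, hang       */

    struct toy_local fixed = { .call_count = 0, .unwound = 0 };
    toy_lookup_shards(&fixed, 1, 1);         /* reinitialized -> clean unwind */
    return 0;
}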
http://review.gluster.org/#/c/11770/
REVIEW: http://review.gluster.org/11778 (features/shard: Fix block size get from xdata) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/11770 committed in master by Pranith Kumar Karampuri (pkarampu)
------
commit d051bd14223d12ca8eaea85f6988ff41e5eef2c3
Author: Krutika Dhananjay <kdhananj>
Date:   Tue Jul 28 11:25:55 2015 +0530

    features/shard: (Re)initialize local->call_count before winding lookup

    Change-Id: I616409c38b86c0acf1817b3472a1fed73db293f8
    BUG: 1247108
    Signed-off-by: Krutika Dhananjay <kdhananj>
    Reviewed-on: http://review.gluster.org/11770
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Tested-by: Gluster Build System <jenkins.com>
COMMIT: http://review.gluster.org/11778 committed in master by Pranith Kumar Karampuri (pkarampu)
------
commit 71641e36734c86ac14c62caf57301e2214712502
Author: Pranith Kumar K <pkarampu>
Date:   Tue Jul 28 18:38:56 2015 +0530

    features/shard: Fix block size get from xdata

    Instead of using dict_get_ptr, dict_get_uint64 was used. If the first
    byte of the value is '\0' then size is returned as 0 because strtoull
    is used in data_to_uint64. This will make it seem like the file is
    not sharded at all.

    BUG: 1247108
    Change-Id: Id1fc291198ac94b20ae645c04a51db78bab51993
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/11778
    Reviewed-by: Krutika Dhananjay <kdhananj>
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
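The failure mode described in that commit message is easy to reproduce outside GlusterFS. The standalone sketch below (plain C, not the actual patch) assumes the block size sits in xdata as 8 raw bytes in network byte order, which is consistent with the leading '\0' byte the commit describes; the be64toh() conversion stands in for whatever ntoh64-style swap the xlator applies after dict_get_ptr. It shows why running strtoull over such a value returns 0 for any realistic block size, making the file look unsharded.

/* Standalone illustration -- not GlusterFS source.  A 64-bit shard block
 * size stored as 8 raw big-endian bytes starts with '\0' for any value
 * below 2^56, so strtoull() (which data_to_uint64() relies on) parses an
 * empty string and returns 0.  Fetching the raw bytes and byte-swapping
 * them (the dict_get_ptr route) recovers the real value. */
#define _DEFAULT_SOURCE
#include <endian.h>
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    uint64_t block_size = 4ULL * 1024 * 1024;      /* e.g. a 4MB shard size */
    unsigned char raw[8];
    uint64_t be = htobe64(block_size);
    memcpy(raw, &be, sizeof(raw));                 /* as it sits in xdata   */

    /* dict_get_uint64()-style path: treat the bytes as a decimal string. */
    uint64_t via_strtoull = strtoull((const char *)raw, NULL, 0);

    /* dict_get_ptr()-style path: copy the raw bytes, then byte-swap.     */
    uint64_t via_raw;
    memcpy(&via_raw, raw, sizeof(via_raw));
    via_raw = be64toh(via_raw);

    printf("strtoull path:  %" PRIu64 "\n", via_strtoull);  /* prints 0       */
    printf("raw-bytes path: %" PRIu64 "\n", via_raw);       /* prints 4194304 */
    return 0;
}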
REVIEW: http://review.gluster.org/11791 (features/shard: Create /.shard with 0777 permissions (for now)) posted (#1) for review on master by Krutika Dhananjay (kdhananj)
COMMIT: http://review.gluster.org/11791 committed in master by Pranith Kumar Karampuri (pkarampu)
------
commit b467af0e99b39ef708420d3f7f6696b0ca618512
Author: Krutika Dhananjay <kdhananj>
Date:   Mon Jul 27 12:30:19 2015 +0530

    features/shard: Create /.shard with 0777 permissions (for now)

    Change-Id: I4e5692f06a189230825f0aeb6487b103bfb66fe1
    BUG: 1247108
    Signed-off-by: Krutika Dhananjay <kdhananj>
    Reviewed-on: http://review.gluster.org/11791
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
REVIEW: http://review.gluster.org/11809 (cluster/afr: Make [f]xattrop metadata transaction) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ, fixed in a GlusterFS release, has already been closed. Hence closing this mainline BZ as well.
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user