Bug 1247108 - sharding - OS installation on vm image hangs on a sharded volume
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: sharding
Version: mainline
Assigned To: Krutika Dhananjay
QA Contact: bugs@gluster.org
Keywords: Reopened, Triaged
Blocks: 1247833
Reported: 2015-07-27 07:01 EDT by Krutika Dhananjay
Modified: 2016-06-16 09:26 EDT

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Last Closed: 2016-06-16 09:26:51 EDT
Type: Bug


Attachments: None
Description Krutika Dhananjay 2015-07-27 07:01:51 EDT
Description of problem:
OS installation onto a VM image hosted on a sharded volume hangs at some point.

Statedumps taken on the FUSE client at several points reveal that a readv() fop is hung:

<statedump>
...
...

[global.callpool.stack.1.frame.10]
frame=0x7f0b0bcfd150
ref_count=0
translator=dis-rep-shard
complete=0                  <==== complete is 0.
parent=dis-rep-trace
wind_from=trace_readv
wind_to=FIRST_CHILD(this)->fops->readv
unwind_to=trace_readv_cbk

...
...

[global.callpool.stack.1.frame.14]
frame=0x7f0b0bcd6f40
ref_count=1
translator=dis-rep
complete=0                <======== complete is 0
parent=fuse
wind_from=fuse_readv_resume
wind_to=FIRST_CHILD(this)->fops->readv
unwind_to=fuse_readv_cbk
...
...
</statedump>

This was found to be caused by call_count being decremented to -1 at the end of shard_common_lookup_shards(), because of which this particular stack never gets unwound all the way back to FUSE:

(gdb) p (call_frame_t *)0x7f0b0bcfd150
$1 = (call_frame_t *) 0x7f0b0bcfd150
(gdb) p (shard_local_t *)$1->local
$2 = (shard_local_t *) 0x7f0b0086310c
(gdb) p $2->call_count
$3 = -1
(gdb) p $2->eexist_count 
$4 = 1
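
To make the failure mode concrete: in this fan-out idiom the parent frame is unwound only when a per-request counter, decremented once per child reply, reaches exactly zero. Below is a minimal standalone sketch (plain C, not the actual xlator code; names such as local_t and shard_lookup_cbk are illustrative only) of how a stale call_count underflows to -1 so the final unwind never runs, and how (re)initializing the counter to the number of lookups about to be wound, which is what the eventual fix does, avoids it:

#include <stdio.h>

typedef struct {
        int call_count;   /* replies still outstanding for this request */
        int eexist_count; /* shards that already existed and need a lookup */
        int unwound;      /* set once the parent frame has been "unwound" */
} local_t;

/* stand-in for the per-shard lookup callback: decrement on every reply,
 * unwind the parent frame only when the counter hits exactly zero */
static void shard_lookup_cbk(local_t *local)
{
        int call_count = --local->call_count;

        if (call_count == 0)
                local->unwound = 1;
}

/* stand-in for the fan-out: optionally (re)prime the counter, then pretend
 * one reply arrives for every shard that needs a lookup */
static void shard_lookup_shards(local_t *local, int reinit)
{
        int i;

        if (reinit)
                local->call_count = local->eexist_count;

        for (i = 0; i < local->eexist_count; i++)
                shard_lookup_cbk(local);
}

int main(void)
{
        /* buggy path: the counter was already consumed by an earlier phase,
         * one shard hit EEXIST, one lookup is wound, the counter drops to -1,
         * the "== 0" check never fires, and the frame is never unwound */
        local_t buggy = { .call_count = 0, .eexist_count = 1, .unwound = 0 };
        shard_lookup_shards(&buggy, 0);
        printf("buggy: call_count=%d unwound=%d\n", buggy.call_count, buggy.unwound);

        /* fixed path: reset the counter to the number of lookups being wound */
        local_t fixed = { .call_count = 0, .eexist_count = 1, .unwound = 0 };
        shard_lookup_shards(&fixed, 1);
        printf("fixed: call_count=%d unwound=%d\n", fixed.call_count, fixed.unwound);

        return 0;
}

The buggy path prints call_count=-1 unwound=0, matching the gdb output above; the fixed path ends with call_count=0 unwound=1.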


Comment 1 Krutika Dhananjay 2015-07-28 03:37:32 EDT
http://review.gluster.org/#/c/11770/
Comment 2 Anand Avati 2015-07-28 09:44:12 EDT
REVIEW: http://review.gluster.org/11778 (features/shard: Fix block size get from xdata) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)
Comment 3 Anand Avati 2015-07-28 21:53:52 EDT
COMMIT: http://review.gluster.org/11770 committed in master by Pranith Kumar Karampuri (pkarampu@redhat.com) 
------
commit d051bd14223d12ca8eaea85f6988ff41e5eef2c3
Author: Krutika Dhananjay <kdhananj@redhat.com>
Date:   Tue Jul 28 11:25:55 2015 +0530

    features/shard: (Re)initialize local->call_count before winding lookup
    
    Change-Id: I616409c38b86c0acf1817b3472a1fed73db293f8
    BUG: 1247108
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/11770
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
Comment 4 Anand Avati 2015-07-29 04:43:06 EDT
COMMIT: http://review.gluster.org/11778 committed in master by Pranith Kumar Karampuri (pkarampu@redhat.com) 
------
commit 71641e36734c86ac14c62caf57301e2214712502
Author: Pranith Kumar K <pkarampu@redhat.com>
Date:   Tue Jul 28 18:38:56 2015 +0530

    features/shard: Fix block size get from xdata
    
    Instead of using dict_get_ptr, dict_get_uint64 was used. If the first byte of
    the value is '\0' then size is returned as 0 because strtoull is used in
    data_to_uint64. This will make it seem like the file is not sharded at all.
    
    BUG: 1247108
    Change-Id: Id1fc291198ac94b20ae645c04a51db78bab51993
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/11778
    Reviewed-by: Krutika Dhananjay <kdhananj@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
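For context on the commit above: it implies the shard block size is carried in xdata as a raw binary 64-bit value rather than as decimal text, so fetching it through data_to_uint64 (which runs the bytes through strtoull) returns 0 whenever the leading byte happens to be '\0', and the file then looks unsharded. A small self-contained sketch of that difference (plain C, not the actual dict_get_ptr/dict_get_uint64 API; the big-endian byte layout here is an assumption for illustration only):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* decode 8 raw big-endian bytes into a host-order 64-bit value */
static uint64_t be64_decode(const unsigned char *p)
{
        uint64_t v = 0;
        int i;

        for (i = 0; i < 8; i++)
                v = (v << 8) | p[i];
        return v;
}

int main(void)
{
        uint64_t block_size = 4 * 1024 * 1024;  /* e.g. a 4 MiB shard block size */
        unsigned char wire[9] = {0};            /* 8 value bytes + a NUL so the
                                                   string interpretation is safe */
        uint64_t as_string, as_binary;
        int i;

        /* store the value as raw big-endian bytes, the way a binary xattr-style
           value would be carried, not as the text "4194304" */
        for (i = 0; i < 8; i++)
                wire[i] = (unsigned char)(block_size >> (8 * (7 - i)));

        /* string-style parse: the first byte is '\0', so strtoull stops
           immediately and reports 0, i.e. "not sharded" */
        as_string = strtoull((const char *)wire, NULL, 10);

        /* pointer-style read: take the raw bytes and decode them */
        as_binary = be64_decode(wire);

        printf("parsed as text   : %llu (file appears unsharded)\n",
               (unsigned long long)as_string);
        printf("decoded as binary: %llu\n", (unsigned long long)as_binary);
        return 0;
}

Any block size whose most significant byte is zero (every realistic size) is read back as 0 by the string-style parse, which is why switching to a pointer fetch of the raw value fixes the problem.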
Comment 5 Anand Avati 2015-07-29 06:56:03 EDT
REVIEW: http://review.gluster.org/11791 (features/shard: Create /.shard with 0777 permissions (for now)) posted (#1) for review on master by Krutika Dhananjay (kdhananj@redhat.com)
Comment 6 Anand Avati 2015-07-30 03:27:15 EDT
COMMIT: http://review.gluster.org/11791 committed in master by Pranith Kumar Karampuri (pkarampu@redhat.com) 
------
commit b467af0e99b39ef708420d3f7f6696b0ca618512
Author: Krutika Dhananjay <kdhananj@redhat.com>
Date:   Mon Jul 27 12:30:19 2015 +0530

    features/shard: Create /.shard with 0777 permissions (for now)
    
    Change-Id: I4e5692f06a189230825f0aeb6487b103bfb66fe1
    BUG: 1247108
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/11791
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
Comment 7 Anand Avati 2015-07-30 14:05:35 EDT
REVIEW: http://review.gluster.org/11809 (cluster/afr: Make [f]xattrop metadata transaction) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)
Comment 8 Nagaprasad Sathyanarayana 2015-10-25 10:45:55 EDT
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ, fixed in that GlusterFS release, has been closed. Hence this mainline BZ is being closed as well.
Comment 9 Niels de Vos 2016-06-16 09:26:51 EDT
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
