Bug 1250855 - sharding - Renames on non-sharded files failing with ENOMEM
Product: GlusterFS
Classification: Community
Component: sharding
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: Krutika Dhananjay
Keywords: Reopened, Triaged
Depends On:
Blocks: 1251106
Reported: 2015-08-06 03:26 EDT by Krutika Dhananjay
Modified: 2016-06-16 09:29 EDT (History)
CC: 1 user

See Also:
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1251106
Last Closed: 2016-06-16 09:29:42 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Krutika Dhananjay 2015-08-06 03:26:24 EDT
Description of problem:

Same as the summary: renames on non-sharded files (files created before sharding was enabled) fail with ENOMEM.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create a plain distribute volume, start it, mount it using FUSE and create a few directories and a few files under them.
2. Enable sharding on the volume.
3. cd into one of the directories.
4. Perform ls (=> readdirp)
5. In a loop, rename all of the files under the directory.
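The steps above could be scripted roughly as follows. This is a sketch, not a verified reproducer: the volume name, brick paths, server names, mount point, and file count are all hypothetical, and it assumes a working GlusterFS installation.

```shell
#!/bin/sh
# Hypothetical names: volume "dis", bricks under /bricks, mount at /mnt/dis.
gluster volume create dis server1:/bricks/b1 server2:/bricks/b2
gluster volume start dis
mount -t glusterfs server1:/dis /mnt/dis

# Create directories and files BEFORE sharding is enabled.
mkdir -p /mnt/dis/dir1
for i in $(seq 1 20); do touch "/mnt/dis/dir1/file$i"; done

# Enabling sharding triggers the graph switch of step 2.
gluster volume set dis features.shard on

cd /mnt/dis/dir1
ls    # readdirp links the inodes without a per-entry LOOKUP
for f in file*; do mv "$f" "$f-renamed"; done   # some renames fail with ENOMEM
```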

Actual results:
Renames on some of the files fail with ENOMEM.
The following messages also appear in <mount>.log for each failed rename:

[2015-08-06 07:15:08.652746] E [shard.c:2191:shard_rename] 2-dis-shard: Failed to get block size from inode ctx of c3bce9d1-37f4-4f95-8257-23a5d223149d
[2015-08-06 07:15:08.652780] W [fuse-bridge.c:1725:fuse_rename_cbk] 0-glusterfs-fuse: 59652: /bin/alsa-info -> /bin/alsa-info-sharded => -1 (Cannot allocate memory)

Expected results:

Additional info:

I tried the same experiment with one change: stat-prefetch was set to off in step (1), and the volume was mounted with entry-timeout and attribute-timeout set to 0. With this configuration I was not able to hit the issue.

Turns out this is because the shard translator expects the inode ctx to be populated for every linked inode in most fop paths, and fails the operation with ENOMEM when it is not. The only place where the shard translator initialises the inode ctx is the LOOKUP callback. Just after the graph switch in step 2, the `ls` in step 4 can cause the entries to be fetched and linked in fuse-bridge via readdirp(). This can prevent LOOKUPs on those entries from being wound, or from reaching the shard translator, before the RENAMEs are issued. The subsequent failure to get the inode ctx from memory then causes the shard translator to fail the fop with ENOMEM.

The solution would be to initialise the inode ctx in shard_readdir_cbk() if it doesn't exist already.
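The get-or-initialise pattern involved can be sketched as a standalone C program. Note this is a simplification, not the actual GlusterFS code: `inode_t`, the single `shard_ctx` slot, and the two function names below are illustrative stand-ins for the real per-xlator inode-ctx machinery.

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Stand-in for a GlusterFS inode with a single translator-context slot.
 * Real inodes carry a per-xlator ctx table; this is a simplification. */
typedef struct {
    void *shard_ctx;                 /* NULL until some code path sets it */
} inode_t;

typedef struct {
    uint64_t block_size;             /* shard block size cached on LOOKUP */
} shard_inode_ctx_t;

/* Mirrors the failure mode: most shard fops start with a plain ctx GET
 * and fail with ENOMEM when the ctx was never set (e.g. the inode was
 * linked via readdirp, so the LOOKUP callback never ran). */
static int shard_ctx_get_block_size(inode_t *inode, uint64_t *out)
{
    shard_inode_ctx_t *ctx = inode->shard_ctx;
    if (!ctx)
        return -ENOMEM;    /* "Failed to get block size from inode ctx" */
    *out = ctx->block_size;
    return 0;
}

/* The fix, as a pattern: in the readdir(p) callback, initialise the ctx
 * for every linked inode that does not have one yet. */
static void shard_readdirp_link_inode(inode_t *inode, uint64_t block_size)
{
    if (inode->shard_ctx)
        return;                      /* already initialised by LOOKUP */
    shard_inode_ctx_t *ctx = calloc(1, sizeof(*ctx));
    if (!ctx)
        return;         /* leave unset; the fop path reports the failure */
    ctx->block_size = block_size;
    inode->shard_ctx = ctx;
}
```

With this in place, an inode first seen via readdirp carries a usable ctx by the time a rename is wound, so the GET in the fop path no longer fails.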
Comment 1 Anand Avati 2015-08-06 09:09:51 EDT
REVIEW: http://review.gluster.org/11854 (features/shard: Fill inode ctx in readdir(p) callback too) posted (#1) for review on master by Krutika Dhananjay (kdhananj@redhat.com)
Comment 2 Anand Avati 2015-08-06 13:45:29 EDT
REVIEW: http://review.gluster.org/11854 (features/shard: Fill inode ctx in readdir(p) callback too) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)
Comment 3 Anand Avati 2015-08-12 08:09:43 EDT
COMMIT: http://review.gluster.org/11854 committed in master by Raghavendra G (rgowdapp@redhat.com) 
commit e8ea08d9a9ca9e507919c121b3a2e56fd5f580f4
Author: Krutika Dhananjay <kdhananj@redhat.com>
Date:   Thu Aug 6 12:19:23 2015 +0530

    features/shard: Fill inode ctx in readdir(p) callback too
    The only place where shard translator was initialising inode ctx
    was lookup callback. But if the inodes are created and linked through
    readdirp, shard_lookup() path _may_ not be exercised before FUSE
    winds other fops on them. Since shard translator does an
    inode_ctx_get() first thing in most fops, an uninitialised ctx could
    cause it to fail the operation with ENOMEM.
    The solution would be to also initialise inode ctx if it has not been
    done already in readdir(p) callback.
    Change-Id: I3e058cd2a29bc6a69a96aaac89165c3251315625
    BUG: 1250855
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/11854
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
Comment 4 Nagaprasad Sathyanarayana 2015-10-25 11:18:59 EDT
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ was fixed in a GlusterFS release and closed, hence this mainline BZ is being closed as well.
Comment 5 Niels de Vos 2016-06-16 09:29:42 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
