Bug 1258334 - Sharding - Unlink of VM images can sometimes fail with EINVAL
Summary: Sharding - Unlink of VM images can sometimes fail with EINVAL
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: sharding
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Krutika Dhananjay
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On:
Blocks: 1258353
 
Reported: 2015-08-31 06:25 UTC by Krutika Dhananjay
Modified: 2016-06-16 13:34 UTC
CC List: 2 users

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1258353
Environment:
Last Closed: 2016-06-16 13:34:27 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Krutika Dhananjay 2015-08-31 06:25:57 UTC
Description of problem:

Thanks to Paul Cuzner for reporting this issue.
rm -rf on VM images in sharded volumes sometimes fails with EINVAL.

rhev-data-center-mnt-glusterSD-gprfc085-glfs.rhev-rhss.lab:vmdomain.log:[2015-08-30 22:27:32.842940] W [fuse-bridge.c:1292:fuse_unlink_cbk] 0-glusterfs-fuse: 22544605: UNLINK() /2ad90339-5c1b-4b0e-b728-3df651ecd025/images/_remouLgHrB/e39cb165-831a-46cb-88da-e26cc93d9399 => -1 (Invalid argument)
rhev-data-center-mnt-glusterSD-gprfc085-glfs.rhev-rhss.lab:vmdomain.log:[2015-08-30 22:27:59.238742] W [fuse-bridge.c:1292:fuse_unlink_cbk] 0-glusterfs-fuse: 22545653: UNLINK() /2ad90339-5c1b-4b0e-b728-3df651ecd025/images/_remotGxj4J/aaa115ea-7577-4358-856e-a0101b7b94a2 => -1 (Invalid argument)

The bug turns out to be in shard_unlink_shards_do(): the loop that winds unlinks on the individual shards must first check that the inode associated with each shard exists in memory (local->inode_list[]) before winding the unlink on it.
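
For illustration only, below is a minimal standalone C sketch of the guarded loop described above. The names shard_unlink_shards_do() and local->inode_list[] come from this report; the types and helpers in the sketch are hypothetical simplifications, not the actual xlator code:

    /* Standalone sketch (not GlusterFS code): only shards whose inodes are
     * present in memory get an unlink wound on them; shards whose blocks
     * fall inside a hole of the file are skipped. */
    #include <stdio.h>
    #include <stddef.h>

    /* stand-in for the resolved inode of a shard; NULL means the shard was
     * never created because that block lies in a hole */
    typedef struct { int unused; } inode_t;

    static void
    wind_unlink_on_shard(size_t block_num)
    {
        /* placeholder for STACK_WIND-ing the unlink FOP on shard block_num */
        printf("unlink wound on shard %zu\n", block_num);
    }

    static void
    unlink_shards_sketch(inode_t **inode_list, size_t num_blocks)
    {
        for (size_t i = 0; i < num_blocks; i++) {
            if (inode_list[i] == NULL)
                continue;   /* hole: no shard on disk, nothing to unlink */
            wind_unlink_on_shard(i);
        }
    }

    int
    main(void)
    {
        inode_t first, last;
        inode_t *inode_list[] = { &first, NULL, &last };   /* shard 1 lies in a hole */
        unlink_shards_sketch(inode_list, 3);
        return 0;
    }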


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Anand Avati 2015-08-31 07:22:36 UTC
REVIEW: http://review.gluster.org/12059 (features/shard: Fix unlink failure due to non-existent shard(s)) posted (#1) for review on master by Krutika Dhananjay (kdhananj)

Comment 2 Anand Avati 2015-08-31 07:32:56 UTC
REVIEW: http://review.gluster.org/12059 (features/shard: Fix unlink failure due to non-existent shard(s)) posted (#2) for review on master by Krutika Dhananjay (kdhananj)

Comment 3 Anand Avati 2015-08-31 15:08:18 UTC
COMMIT: http://review.gluster.org/12059 committed in master by Vijay Bellur (vbellur) 
------
commit 7566c94633b602156755297493fad1d24d1ba52f
Author: Krutika Dhananjay <kdhananj>
Date:   Mon Aug 31 12:43:36 2015 +0530

    features/shard: Fix unlink failure due to non-existent shard(s)
    
    Unlink of a sharded file with holes was leading to EINVAL errors
    because it was being wound on non-existent shards (those blocks that
    fall in the hole region). loc->inode was NULL in these cases and
    dht_unlink used to fail the FOP with EINVAL for failure to fetch
    cached subvol for the inode.
    
    The fix involves winding unlink on only those shards whose corresponding
    inodes exist in memory.
    
    Change-Id: I993ff70cab4b22580c772a9c74fc19ac893a03fc
    BUG: 1258334
    Signed-off-by: Krutika Dhananjay <kdhananj>
    Reviewed-on: http://review.gluster.org/12059
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
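
As a hypothetical illustration (not part of the fix or of this report) of how a sharded file with holes arises: writing near the start of a file and again far beyond it leaves the region in between as a hole, so on a sharded volume no shard files are ever created for the blocks covered by the hole. This is the kind of sparse layout VM images typically have. The path, sizes and offsets below are made up for the example:

    /* Creates a sparse file whose middle region is a hole. On a sharded
     * volume, shards covering the hole never exist on the bricks, which is
     * why unlink previously hit EINVAL on them. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* hypothetical path, chosen only for illustration */
        int fd = open("/mnt/vmstore/sparse-image.raw",
                      O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return EXIT_FAILURE;
        }

        /* data at the start of the file -> the first shard exists */
        if (pwrite(fd, "head", 4, 0) != 4)
            perror("pwrite head");

        /* data far past the shard block size -> the last shard exists, but
         * the blocks in between remain a hole and their shards are never
         * created on the bricks */
        if (pwrite(fd, "tail", 4, 512L * 1024 * 1024) != 4)
            perror("pwrite tail");

        close(fd);
        return 0;
    }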

Comment 4 Nagaprasad Sathyanarayana 2015-10-25 14:59:36 UTC
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ has been fixed in a GlusterFS release and closed. Hence, this mainline BZ is being closed as well.

Comment 5 Niels de Vos 2016-06-16 13:34:27 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

