Bug 1258353 - Sharding - Unlink of VM images can sometimes fail with EINVAL
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: sharding
Version: 3.7.3
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Assigned To: bugs@gluster.org
Keywords: Triaged
Depends On: 1258334
Blocks: Gluster-HC-1 glusterfs-3.7.4
Reported: 2015-08-31 03:22 EDT by Krutika Dhananjay
Modified: 2015-09-09 05:41 EDT

Fixed In Version: glusterfs-3.7.4
Doc Type: Bug Fix
Clone Of: 1258334
Last Closed: 2015-09-09 05:41:03 EDT
Type: Bug

Attachments: None
Description Krutika Dhananjay 2015-08-31 03:22:16 EDT
+++ This bug was initially created as a clone of Bug #1258334 +++

Description of problem:

Thanks to Paul Cuzner for reporting this issue.
rm -rf on VM images in sharded volumes sometimes fails with EINVAL.

rhev-data-center-mnt-glusterSD-gprfc085-glfs.rhev-rhss.lab:vmdomain.log:[2015-08-30 22:27:32.842940] W [fuse-bridge.c:1292:fuse_unlink_cbk] 0-glusterfs-fuse: 22544605: UNLINK() /2ad90339-5c1b-4b0e-b728-3df651ecd025/images/_remouLgHrB/e39cb165-831a-46cb-88da-e26cc93d9399 => -1 (Invalid argument)
rhev-data-center-mnt-glusterSD-gprfc085-glfs.rhev-rhss.lab:vmdomain.log:[2015-08-30 22:27:59.238742] W [fuse-bridge.c:1292:fuse_unlink_cbk] 0-glusterfs-fuse: 22545653: UNLINK() /2ad90339-5c1b-4b0e-b728-3df651ecd025/images/_remotGxj4J/aaa115ea-7577-4358-856e-a0101b7b94a2 => -1 (Invalid argument)

It turns out the bug is in shard_unlink_shards_do(): the loop that winds unlinks on the individual shards must first check that the inode associated with each shard exists in memory (local->inode_list[]) before winding the unlink on it.
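
To make the check concrete, here is a minimal sketch (illustrative only, not the actual shard_unlink_shards_do() code): struct shard_local, unlink_shards() and wind_unlink_on_shard() are invented names for this example; only local->inode_list[] and the skip-on-NULL idea come from the bug itself.

    #include <stdio.h>

    struct inode { int dummy; };             /* stand-in for gluster's inode_t */

    struct shard_local {
        int            num_blocks;           /* number of shards the file spans */
        struct inode **inode_list;           /* inode_list[i] == NULL => block i is a hole */
    };

    /* hypothetical helper standing in for winding the unlink fop on one shard */
    static void
    wind_unlink_on_shard(struct inode *shard_inode, int block_num)
    {
        (void)shard_inode;
        printf("winding unlink on shard block %d\n", block_num);
    }

    static void
    unlink_shards(struct shard_local *local)
    {
        for (int i = 0; i < local->num_blocks; i++) {
            if (local->inode_list[i] == NULL)
                continue;                    /* shard never existed (hole): nothing to unlink */
            wind_unlink_on_shard(local->inode_list[i], i);
        }
    }

    int
    main(void)
    {
        struct inode  blk0, blk3;            /* only the first and last block exist */
        struct inode *list[] = { &blk0, NULL, NULL, &blk3 };
        struct shard_local local = { .num_blocks = 4, .inode_list = list };

        unlink_shards(&local);               /* winds unlink on blocks 0 and 3 only */
        return 0;
    }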


Comment 1 Anand Avati 2015-08-31 05:39:18 EDT
REVIEW: http://review.gluster.org/12061 (features/shard: Fix unlink failure due to non-existent shard(s)) posted (#1) for review on release-3.7 by Krutika Dhananjay (kdhananj@redhat.com)
Comment 2 Anand Avati 2015-08-31 11:09:50 EDT
COMMIT: http://review.gluster.org/12061 committed in release-3.7 by Kaushal M (kaushal@redhat.com) 
------
commit 300a69669aa6e9ebb16e5fc8326ac57c3e2d8937
Author: Krutika Dhananjay <kdhananj@redhat.com>
Date:   Mon Aug 31 12:43:36 2015 +0530

    features/shard: Fix unlink failure due to non-existent shard(s)
    
            Backport of: http://review.gluster.org/#/c/12059/
    
    Unlink of a sharded file with holes was leading to EINVAL errors
    because it was being wound on non-existent shards (those blocks that
    fall in the hole region). loc->inode was NULL in these cases and
    dht_unlink used to fail the FOP with EINVAL for failure to fetch
    cached subvol for the inode.
    
    The fix involves winding unlink on only those shards whose corresponding
    inodes exist in memory.
    
    Change-Id: I1e5d492a2e60491601da23f64a5d0089e536b305
    BUG: 1258353
    Signed-off-by: Krutika Dhananjay <kdhananj@redhat.com>
    Reviewed-on: http://review.gluster.org/12061
    Reviewed-by: Pranith Kumar Karampuri <pkarampu@redhat.com>
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
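
For context on where the EINVAL itself comes from, below is a simplified sketch of the failure path the commit message describes. get_cached_subvol() and unlink_via_dht() are assumed names for illustration, not the real dht functions: with loc->inode left NULL for a shard that falls inside a hole, there is no inode context to fetch a cached subvolume from, so the unlink is failed with EINVAL instead of being wound further.

    #include <errno.h>
    #include <stdio.h>

    struct inode;                            /* opaque stand-in for gluster's inode_t */
    struct subvol { const char *name; };     /* stand-in for a dht child subvolume */

    /* hypothetical lookup of the subvolume cached in the inode context */
    static struct subvol *
    get_cached_subvol(struct inode *inode)
    {
        static struct subvol cached = { "example-subvol" };
        if (inode == NULL)
            return NULL;                     /* no inode => nothing cached */
        return &cached;                      /* real code reads it from the inode ctx */
    }

    /* returns 0 on success, -errno on failure */
    static int
    unlink_via_dht(struct inode *inode)
    {
        struct subvol *cached = get_cached_subvol(inode);
        if (cached == NULL)
            return -EINVAL;                  /* surfaces as "Invalid argument" in the fuse log */
        /* ... otherwise the unlink would be wound to the cached subvolume ... */
        return 0;
    }

    int
    main(void)
    {
        /* a shard in a hole region has no in-memory inode, so before the fix
         * the unlink wound on it came back as EINVAL (-22) */
        printf("unlink on hole shard -> %d\n", unlink_via_dht(NULL));
        return 0;
    }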
Comment 3 Kaushal 2015-09-09 05:41:03 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.4, please open a new bug report.

glusterfs-3.7.4 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12496
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
