Bug 1258334 - Sharding - Unlink of VM images can sometimes fail with EINVAL
Product: GlusterFS
Classification: Community
Component: sharding
Hardware/OS: Unspecified
Priority: medium
Severity: high
Assigned To: Krutika Dhananjay
Keywords: Reopened, Triaged
Depends On:
Blocks: 1258353
Reported: 2015-08-31 02:25 EDT by Krutika Dhananjay
Modified: 2016-06-16 09:34 EDT
CC: 2 users

See Also:
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1258353
Last Closed: 2016-06-16 09:34:27 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Krutika Dhananjay 2015-08-31 02:25:57 EDT
Description of problem:

Thanks to Paul Cuzner for reporting this issue.
rm -rf on VM images in sharded volumes sometimes fails with EINVAL.

rhev-data-center-mnt-glusterSD-gprfc085-glfs.rhev-rhss.lab:vmdomain.log:[2015-08-30 22:27:32.842940] W [fuse-bridge.c:1292:fuse_unlink_cbk] 0-glusterfs-fuse: 22544605: UNLINK() /2ad90339-5c1b-4b0e-b728-3df651ecd025/images/_remouLgHrB/e39cb165-831a-46cb-88da-e26cc93d9399 => -1 (Invalid argument)
rhev-data-center-mnt-glusterSD-gprfc085-glfs.rhev-rhss.lab:vmdomain.log:[2015-08-30 22:27:59.238742] W [fuse-bridge.c:1292:fuse_unlink_cbk] 0-glusterfs-fuse: 22545653: UNLINK() /2ad90339-5c1b-4b0e-b728-3df651ecd025/images/_remotGxj4J/aaa115ea-7577-4358-856e-a0101b7b94a2 => -1 (Invalid argument)

It turns out the bug is in shard_unlink_shards_do(): the loop that winds unlinks on the individual shards must first check that the inode associated with each shard exists in memory (local->inode_list[]) before winding the unlink on it.
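
The fixed loop shape described above can be sketched as follows. This is a hypothetical, heavily simplified model: shard_unlink_shards_do() and inode_list come from the report, but wind_unlinks(), inode_t, and the counting logic are illustrative stand-ins, not the real GlusterFS code.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for an in-memory inode; the real gluster
 * inode_t is far richer. */
typedef struct inode {
    int dummy;
} inode_t;

/* Model of the corrected loop in shard_unlink_shards_do():
 * returns the number of shards on which an unlink would actually be
 * wound, skipping shards whose inode was never resolved (the holes). */
static int wind_unlinks(inode_t **inode_list, int shard_count)
{
    int wound = 0;

    for (int i = 0; i < shard_count; i++) {
        if (inode_list[i] == NULL)
            continue; /* hole: no shard exists on disk, winding an
                       * unlink here is what produced the EINVAL */
        wound++;      /* the real code would STACK_WIND the unlink */
    }
    return wound;
}
```

With the NULL check in place, only shards backed by an in-memory inode are unlinked, so sparse (holey) VM images no longer trigger the failure.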

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:

Actual results:

Expected results:

Additional info:
Comment 1 Anand Avati 2015-08-31 03:22:36 EDT
REVIEW: (features/shard: Fix unlink failure due to non-existent shard(s)) posted (#1) for review on master by Krutika Dhananjay
Comment 2 Anand Avati 2015-08-31 03:32:56 EDT
REVIEW: (features/shard: Fix unlink failure due to non-existent shard(s)) posted (#2) for review on master by Krutika Dhananjay
Comment 3 Anand Avati 2015-08-31 11:08:18 EDT
COMMIT: committed in master by Vijay Bellur
commit 7566c94633b602156755297493fad1d24d1ba52f
Author: Krutika Dhananjay <>
Date:   Mon Aug 31 12:43:36 2015 +0530

    features/shard: Fix unlink failure due to non-existent shard(s)

    Unlink of a sharded file with holes was leading to EINVAL errors
    because it was being wound on non-existent shards (those blocks that
    fall in the hole region). loc->inode was NULL in these cases, and
    dht_unlink used to fail the FOP with EINVAL for failure to fetch the
    cached subvol for the inode.

    The fix involves winding unlink only on those shards whose
    corresponding inodes exist in memory.

    Change-Id: I993ff70cab4b22580c772a9c74fc19ac893a03fc
    BUG: 1258334
    Signed-off-by: Krutika Dhananjay <>
    Reviewed-by: Pranith Kumar Karampuri <>
    Tested-by: NetBSD Build System <>
    Tested-by: Gluster Build System <>
    Reviewed-by: Vijay Bellur <>
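The failure side of the commit message can also be modeled in a few lines. This is a hypothetical sketch of why dht_unlink returned EINVAL, not the real DHT code: dht_unlink_model(), loc_t, and inode_t are illustrative names, but the behavior (no cached subvolume can be found for a NULL loc->inode, so the FOP fails with EINVAL) matches the description above.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Illustrative stand-ins for the gluster inode and loc structures. */
typedef struct inode {
    int dummy;
} inode_t;

typedef struct loc {
    inode_t *inode; /* NULL for shards in a hole region */
} loc_t;

/* Model of the dht_unlink failure mode: with loc->inode == NULL there
 * is no inode context from which to fetch a cached subvolume, so the
 * operation fails with EINVAL instead of being routed to a brick. */
static int dht_unlink_model(loc_t *loc)
{
    if (loc == NULL || loc->inode == NULL)
        return -EINVAL; /* cannot fetch cached subvol for a NULL inode */
    return 0;           /* the real code would forward the unlink */
}
```

Seen this way, the fix in shard_unlink_shards_do() simply stops handing DHT a loc with a NULL inode in the first place.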
Comment 4 Nagaprasad Sathyanarayana 2015-10-25 10:59:36 EDT
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ, fixed in that GlusterFS release, has been closed. Hence this mainline BZ is being closed as well.
Comment 5 Niels de Vos 2016-06-16 09:34:27 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

