Bug 1546717
| Field | Value |
| --- | --- |
| Summary | Removing directories from multiple clients throws ESTALE errors |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Reporter | Prasad Desala <tdesala> |
| Component | md-cache |
| Assignee | Raghavendra G <rgowdapp> |
| Status | CLOSED ERRATA |
| QA Contact | Prasad Desala <tdesala> |
| Severity | medium |
| Docs Contact | |
| Priority | unspecified |
| Version | rhgs-3.4 |
| CC | rallan, rgowdapp, rhinduja, rhs-bugs, sheggodu, storage-qa-internal, tdesala |
| Target Milestone | --- |
| Keywords | Regression |
| Target Release | RHGS 3.4.0 |
| Hardware | Unspecified |
| OS | Unspecified |
| Whiteboard | |
| Fixed In Version | glusterfs-3.12.2-9 |
| Doc Type | If docs needed, set a value |
| Doc Text | |
| Story Points | --- |
| Clone Of | |
| Clones | 1566303 (view as bug list) |
| Environment | |
| Last Closed | 2018-09-04 06:42:45 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| CRM | |
| Verified Versions | |
| Category | --- |
| oVirt Team | --- |
| RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- |
| Target Upstream Version | |
| Embargoed | |
| Bug Depends On | |
| Bug Blocks | 1503137, 1566303, 1571593 |
**Description** (Prasad Desala, 2018-02-19 12:03:53 UTC)
From the fuse-dump:

```
2018-03-01T11:25:53.579099719+05:30 "GLUSTER\xf5" RMDIR {Len:44 Opcode:11 Unique:3615 Nodeid:140513143433312 Uid:0 Gid:0 Pid:8667 Padding:0} 110
2018-03-01T11:25:53.762465001+05:30 "GLUSTER\xf5" {Len:16 Error:-116 Unique:3615}
2018-03-01T11:25:53.763381283+05:30 "GLUSTER\xf5" LOOKUP {Len:44 Opcode:1 Unique:3616 Nodeid:140513143433312 Uid:0 Gid:0 Pid:8667 Padding:0} 110
2018-03-01T11:25:53.763599918+05:30 "GLUSTER\xf5" {Len:144 Error:0 Unique:3616} {Nodeid:140513144416608 Generation:0 EntryValid:1 AttrValid:1 EntryValidNsec:0 AttrValidNsec:0 Attr:{Ino:13658219387318354837 Size:4096 Blocks:8 Atime:1519883211 Mtime:1519883211 Ctime:1519883211 Atimensec:351637895 Mtimensec:351637895 Ctimensec:403637612 Mode:16877 Nlink:2 Uid:0 Gid:0 Rdev:0 Blksize:131072 Padding:0}}
2018-03-01T11:25:53.763714221+05:30 "GLUSTER\xf5" RMDIR {Len:44 Opcode:11 Unique:3617 Nodeid:140513143433312 Uid:0 Gid:0 Pid:8667 Padding:0} 110
2018-03-01T11:25:53.933181928+05:30 "GLUSTER\xf5" {Len:16 Error:-116 Unique:3617}
```

Note that after the first RMDIR (unique:3615) failed with ESTALE (Error:-116), the LOOKUP on the same path (unique:3616) issued by the VFS retry logic returned success. Because of this, another RMDIR (unique:3617) was attempted, which again failed with ESTALE. The failure of the second rmdir forced the VFS to give up, failing the RMDIR with an ESTALE error.

Since the first RMDIR failed with ESTALE, the lookup (unique:3616) should have returned ENOENT, but it did not; I think this is the bug. Had the lookup returned ENOENT, the rmdir would have failed with ENOENT and the rm command would have ignored it.

I suspect the lookup succeeded because of a stale entry in md-cache. Note that this client's md-cache would not have witnessed the RMDIR performed from the other client, so most likely it keeps the dentry alive. I am going to repeat the test with md-cache turned off.

With md-cache turned off, the ESTALE errors are no longer seen and rm completes successfully.

The fix: md-cache should purge its cache for an inode whenever any fop on that inode returns an ESTALE error (a toy sketch of this idea appears at the end of this report).

Verified this BZ on glusterfs version 3.12.2-9.el7rhgs.x86_64. Followed the same steps as in the description on a data set having (a) deep directories without files and (b) deep directories with files. The rm -rf command did not throw any ESTALE errors. Hence, moving this BZ to Verified.

*** Bug 1577796 has been marked as a duplicate of this bug. ***

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
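To make the purge-on-ESTALE fix concrete, here is a minimal, self-contained sketch of the idea. It is a toy model, not the actual md-cache patch shipped in glusterfs-3.12.2-9, and every identifier in it (md_fop, md_invalidate, stale_rmdir) is illustrative rather than a real GlusterFS symbol. It demonstrates the behavior described above: an operation that returns ESTALE drops the cached metadata for that object, so the VFS retry's lookup misses the cache, reaches the backend, and gets ENOENT instead of a stale success.

```c
/* Toy model of the purge-on-ESTALE fix, NOT the actual glusterfs code.
 * A tiny metadata cache purges an entry whenever an operation on the
 * cached object fails with ESTALE. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 16

struct md_entry {
    char path[256];
    bool valid; /* entry holds usable cached metadata */
};

static struct md_entry cache[CACHE_SLOTS];

/* Find a cached entry for 'path', or NULL on a cache miss. */
static struct md_entry *md_lookup(const char *path)
{
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].valid && strcmp(cache[i].path, path) == 0)
            return &cache[i];
    return NULL;
}

/* Drop the cached entry so the next lookup goes to the backend. */
static void md_invalidate(const char *path)
{
    struct md_entry *e = md_lookup(path);
    if (e) {
        e->valid = false;
        printf("purged stale cache entry for %s\n", path);
    }
}

/* Wrapper around any backend operation: on ESTALE, purge the cache
 * entry before propagating the error. This mirrors the fix described
 * in this bug: md-cache should purge its cache if any fop on an inode
 * returns ESTALE. The backend returns 0 or a positive errno value. */
static int md_fop(const char *path, int (*backend_op)(const char *))
{
    int err = backend_op(path);
    if (err == ESTALE)
        md_invalidate(path);
    return err;
}

/* Stand-in backend whose handle has gone stale, e.g. because the
 * directory was already removed by another client. */
static int stale_rmdir(const char *path)
{
    (void)path;
    return ESTALE;
}

int main(void)
{
    /* Seed the cache as if an earlier LOOKUP had populated it. */
    snprintf(cache[0].path, sizeof(cache[0].path), "%s", "/mnt/vol/dir1");
    cache[0].valid = true;

    int err = md_fop("/mnt/vol/dir1", stale_rmdir);
    printf("rmdir -> %s\n", strerror(err));

    /* With the entry purged, a retried lookup now misses the cache
     * instead of returning a stale success. */
    printf("cache %s dir1\n",
           md_lookup("/mnt/vol/dir1") ? "still has" : "no longer has");
    return 0;
}
```

In this toy run the rmdir fails with "Stale file handle" exactly once, and the subsequent lookup misses the cache; mapped back to the trace above, the LOOKUP (unique:3616) would then return ENOENT rather than success, and rm -rf would ignore the already-removed directory instead of surfacing ESTALE.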