Bug 1387494
Summary: | Files not deleted from arbiter brick after deletion from the mount point. | | |
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Ravishankar N <ravishankar> |
Component: | arbiter | Assignee: | Ravishankar N <ravishankar> |
Status: | CLOSED WONTFIX | QA Contact: | Karan Sandha <ksandha> |
Severity: | medium | Docs Contact: | |
Priority: | medium | | |
Version: | rhgs-3.2 | CC: | amukherj, bugs, ksandha, mmuench, nchilaka, ravishankar, rcyriac, rhinduja, rhs-bugs, sarumuga, storage-qa-internal |
Target Milestone: | --- | Keywords: | ZStream |
Target Release: | --- | | |
Hardware: | Unspecified | | |
OS: | Unspecified | | |
Whiteboard: | | | |
Fixed In Version: | | Doc Type: | Known Issue |
Doc Text:

If the data bricks of an arbiter volume fill up, creation of new entries can still succeed on the arbiter brick even though it fails on the data bricks with ENOSPC and the application (client) receives an error on the mount point. The arbiter brick can therefore end up holding more entries than the data bricks. When an rm -rf is later performed from the client, if the readdir issued as part of the rm -rf is served by a data brick, only the entries present there are deleted, not the ones present only on the arbiter. The subsequent rmdir on the parent directory of those entries then fails on the arbiter with ENOTEMPTY, so the directory is never removed from the arbiter brick.

Workaround: If the deletion from the mount did not report an error but the bricks still contain the directories, remove the directory and its associated gfid symlink from the back end. If the directory contains files, each file and its gfid hardlink must be removed as well. A hedged sketch of this cleanup appears after the metadata fields below.
Story Points: | --- | | |
Clone Of: | 1335470 | Environment: | |
Last Closed: | 2018-11-19 05:33:10 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Bug Depends On: | 1335470 | | |
Bug Blocks: | 1351530 | | |
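As referenced in the Doc Text workaround above, the following is a minimal cleanup sketch. The brick path and directory name are assumptions for illustration; the .glusterfs gfid layout (two directory levels derived from the first four hex characters of the gfid) is the standard GlusterFS backend scheme, but verify the exact paths on your own cluster before deleting anything.

```sh
# Minimal cleanup sketch -- BRICK and DIR are assumptions for illustration.
# Run on the arbiter node against the brick's backend path, never via the mount.

BRICK=/bricks/arbiter/b3              # assumed arbiter brick path
DIR="$BRICK/leftover_dir"             # directory that rm -rf failed to remove

# Read the directory's gfid (trusted.gfid xattr, 16 bytes printed as hex).
HEX=$(getfattr -n trusted.gfid -e hex --only-values "$DIR" | sed 's/^0x//')
# Re-insert the dashes to get the canonical UUID form used under .glusterfs.
GFID=$(echo "$HEX" | sed 's/\(........\)\(....\)\(....\)\(....\)\(............\)/\1-\2-\3-\4-\5/')

# For every file inside the directory, remove the file together with its gfid
# hardlink (each file's hardlink lives at .glusterfs/<xx>/<yy>/<gfid>).
for f in "$DIR"/*; do
    FHEX=$(getfattr -n trusted.gfid -e hex --only-values "$f" | sed 's/^0x//')
    FGFID=$(echo "$FHEX" | sed 's/\(........\)\(....\)\(....\)\(....\)\(............\)/\1-\2-\3-\4-\5/')
    rm -f "$BRICK/.glusterfs/${FGFID:0:2}/${FGFID:2:2}/$FGFID" "$f"
done

# Remove the directory's gfid symlink, then the (now empty) directory itself.
rm -f "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
rmdir "$DIR"
```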
Description
Ravishankar N 2016-10-21 05:48:44 UTC
I have recreated the issue again and placed all the client and server logs in the bug folder: rhsqe-repo.lab.eng.blr.redhat.com:/var/www/html/sosreports/1335470/repro

Gluster version:

[root@dhcp47-141 tmp]# gluster --version
glusterfs 3.8.4 built on Oct 24 2016 11:13:47
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

RCA: The script creates 500 MB files and fills up the two data bricks midway. Although the subsequent mkdir/create operations fail on the mount with ENOTCONN, the arbiter brick still gets filled up with the directories and 0-byte files, so the arbiter ends up with more files and directories than the data bricks. Also, since the "cd .." in the script now gets ENOTCONN (the fop succeeded only on the arbiter), files/dirs are created on the arbiter in a haphazard order (i.e., not the order the script intended). Now, when brick2 is brought down and rm -rf is run from the mount, the readdir is served from brick1. The dentries present on b1 and b3 are deleted, but when the rmdir comes on the parent, it fails on the arbiter brick with ENOTEMPTY because of the extra files/dirs. Thus, at the end of the rm -rf, the dirs/files are still present on the arbiter.

Karan, I think there is a typo in the BZ description. It should read 'Files not deleted from the arbiter brick...'. Could you confirm and change it?

Based on the discussion at http://post-office.corp.redhat.com/archives/gluster-storage-release-team/2016-November/msg00084.html, resetting the flags and taking this BZ out of 3.2.0.

*** Bug 1455034 has been marked as a duplicate of this bug. ***

Given that this situation can be hit only when the data disks are full, and that there is neither data loss of any kind nor a false report of success to the application for these entry operations, this bug is not a priority right now and is being closed. Entry FOP consistency will still be undertaken as part of bug 1593242 and the upstream GitHub issue https://github.com/gluster/glusterfs/issues/502
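For reference, the failure sequence in the RCA can be expressed as a rough reproduction sketch. Everything below (host names, brick paths, the volume name testvol, file counts) is an assumption for illustration, not the reporter's actual script:

```sh
# Rough reproduction sketch of the RCA above -- all names/paths are assumptions.

# Replica-3 volume where the third brick is the (metadata-only) arbiter.
gluster volume create testvol replica 3 arbiter 1 \
    host1:/bricks/b1 host2:/bricks/b2 host3:/bricks/b3
gluster volume start testvol
mount -t glusterfs host1:/testvol /mnt/testvol

# Fill the data bricks midway with 500 MB files. Once they are full, entry
# creations fail on the mount but may still land on the arbiter brick.
cd /mnt/testvol
for i in $(seq 1 100); do
    mkdir "dir$i" && dd if=/dev/zero of="dir$i/file" bs=1M count=500
done

# Bring one data brick down so that readdir is served from the other one.
# (placeholder: kill the glusterfsd process serving host2:/bricks/b2)

# rm -rf deletes only the entries visible via brick1; the rmdir on each
# parent then fails on the arbiter with ENOTEMPTY, leaving stale entries.
rm -rf /mnt/testvol/*
ls /bricks/b3    # on host3: leftover dirs/files are still present
```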