| Summary: | Directories still present in the bricks after removing from mount point. | | |
|---|---|---|---|
| Product: | Red Hat Gluster Storage | Reporter: | Karan Sandha <ksandha> |
| Component: | arbiter | Assignee: | Ravishankar N <ravishankar> |
| Status: | CLOSED DUPLICATE | QA Contact: | Karan Sandha <ksandha> |
| Severity: | high | Docs Contact: | |
| Priority: | medium | Version: | rhgs-3.2 |
| CC: | amukherj, bmohanra, pkarampu, ravishankar, rhinduja, rhs-bugs, storage-qa-internal | Keywords: | ZStream |
| Target Milestone: | --- | Target Release: | --- |
| Hardware: | All | OS: | Linux |
| Whiteboard: | | Fixed In Version: | |
| Doc Type: | Known Issue | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-11-19 05:49:27 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Bug Depends On: | | Bug Blocks: | 1351530 |

Doc Text (Known Issue):

If some of the bricks of a replica or arbiter subvolume go down or get disconnected from the client while an 'rm -rf' is in progress, the directories may re-appear on the back end once the bricks come back up and self-heal completes. If the user then creates a directory with the same name from the mount, this existing directory may be healed into the other DHT subvolumes of the volume.

Workaround: If the deletion from the mount did not report an error but the bricks still contain the directories, remove each such directory and its associated gfid symlink from the back end. If a directory contains files, each file and its gfid hardlink must be removed as well.
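A minimal shell sketch of this workaround, run as root on each affected brick. The brick root (/bricks/b1) and the leftover directory name (explorer42) are placeholders, not values from this report, and the sketch assumes the standard .glusterfs back-end layout, where a directory's gfid entry is a symlink and a file's gfid entry is a hardlink:

```bash
BRICK=/bricks/b1          # placeholder brick root
DIR=explorer42            # placeholder name of a directory left behind after rm -rf

# Read the directory's gfid from its trusted.gfid xattr (printed hex-encoded as 0x...).
HEXGFID=$(getfattr -n trusted.gfid -e hex "$BRICK/$DIR" \
          | grep '^trusted.gfid=' | cut -d= -f2 | sed 's/^0x//')

# Re-format the 32 hex characters as the canonical UUID form used under .glusterfs/.
GFID=$(echo "$HEXGFID" | sed -E 's/(.{8})(.{4})(.{4})(.{4})(.{12})/\1-\2-\3-\4-\5/')

# For any files inside the directory, remove the file and its gfid hardlink under
# .glusterfs/ in the same way before removing the directory itself.

# Remove the directory's gfid symlink and then the directory.
rm -f "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
rm -rf "$BRICK/$DIR"
```

This should only be done after confirming from the mount point that the directory is no longer expected to exist.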
Description
Karan Sandha
2016-10-24 11:33:52 UTC
After testing a few scenarios post the discussion, creating a directory with the same name as one still present in the bricks might impact the directory structure. Hence moving this back to the 3.2 release. Thanks & regards, Karan Sandha

Was able to figure out the issue, with Karan's and Pranith's help, using simpler steps (a rough command-line sketch of these steps follows the list):
1. Create a 3-node distributed arbiter volume in a 2 x (2+1) configuration (i.e. bricks 1 to 6).
2. On the FUSE client, run mkdir explorer{1..10000}.
3. Start rm -rvf explorer*.
While step 3 is in progress:
4. Kill brick1. The rm -rvf still continues.
5. Kill brick2. The rm -rvf now fails for the current dirents of '/' with EROFS due to loss of quorum on replicate-0.
6. Force-start the volume.
7. Self-heal is triggered.
8. Let the rm -rvf from step 3 run to completion.
9. ls on the mount still shows some entries.
10. Run a second rm -rvf on the mount.
11. ls now shows no entries.
12. Once heal completes, the directories are still present in the bricks of replicate-0.
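A rough command-line sketch of the steps above; the volume name (testvol), hostnames (server1-3), brick paths (/bricks/bN) and mount point are illustrative assumptions, not values from the original report:

```bash
# Step 1: 3 nodes, 2 x (2+1) distributed arbiter volume (bricks 1 to 6).
gluster volume create testvol replica 3 arbiter 1 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3 \
    server1:/bricks/b4 server2:/bricks/b5 server3:/bricks/b6
gluster volume start testvol
mkdir -p /mnt/testvol
mount -t glusterfs server1:/testvol /mnt/testvol

# Steps 2-3: create the directories on the FUSE mount, then start removing them.
cd /mnt/testvol
mkdir explorer{1..10000}
rm -rvf explorer* &

# Steps 4-5: while the rm is running, kill the brick processes for brick1 and
# then brick2 (their PIDs are listed by `gluster volume status testvol`).

# Steps 6-7: force-start the volume to bring the killed bricks back up;
# self-heal is then triggered.
gluster volume start testvol force

# Steps 8-12: after the rm completes, compare the mount with the bricks.
ls /mnt/testvol                       # some entries are still listed
rm -rvf /mnt/testvol/explorer*        # second rm from the mount
ls /mnt/testvol                       # now shows no entries
ls /bricks/b1 /bricks/b2 /bricks/b3   # directories still present on replicate-0 bricks
```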
RCA:
What is happening is that when bricks are brought down during the rmdir, AFR sets the dirty xattr on the bricks that are still up. Later, when self-heal runs in step 7, the presence of the dirty xattr triggers a conservative merge on the parent directory, re-creating the entries from brick1 on bricks 2 and 3.
This BZ is another manifestation of BZ 1127178, where files re-appear due to a conservative merge.
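The dirty marker that drives this merge can be inspected directly on the back end. A hedged example is below; the brick path is a placeholder, and the attribute to look for is the trusted.afr.dirty xattr described above:

```bash
# Dump all extended attributes of the parent directory (here the brick root)
# on a brick that stayed up while the others were down. A set trusted.afr.dirty
# xattr is what makes self-heal fall back to a conservative merge of entries.
getfattr -d -m . -e hex /bricks/b3/
```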
Edited the doc text slightly for the Release Notes.