Bug 1374565
| Summary: | [Bitrot]: Recovery of a corrupted hardlink (and of the corresponding parent file) fails in a disperse volume | |||
|---|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Kotresh HR <khiremat> | |
| Component: | bitrot | Assignee: | Kotresh HR <khiremat> | |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | ||
| Severity: | medium | Docs Contact: | bugs <bugs> | |
| Priority: | unspecified | |||
| Version: | 3.8 | CC: | amukherj, aspandey, bmohanra, bugs, khiremat, pkarampu, rcyriac, rhinduja, rhs-bugs, rmekala, sanandpa | |
| Target Milestone: | --- | |||
| Target Release: | --- | |||
| Hardware: | Unspecified | |||
| OS: | Unspecified | |||
| Whiteboard: | ||||
| Fixed In Version: | glusterfs-3.8.4 | Doc Type: | If docs needed, set a value | |
| Doc Text: | Story Points: | --- | ||
| Clone Of: | 1374564 | |||
| Clones: | 1374567 (view as bug list) | Environment: | | |
| Last Closed: | 2016-09-16 18:28:44 UTC | Type: | Bug | |
| Regression: | --- | Mount Type: | --- | |
| Documentation: | --- | CRM: | ||
| Verified Versions: | | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
| Cloudforms Team: | --- | Target Upstream Version: | ||
| Embargoed: | ||||
| Bug Depends On: | 1341934, 1373520, 1374564 | |||
| Bug Blocks: | 1374567 | |||
Description
Kotresh HR
2016-09-09 04:57:38 UTC
REVIEW: http://review.gluster.org/15433 (feature/bitrot: Fix recovery of corrupted hardlink) posted (#1) for review on release-3.8 by Kotresh HR (khiremat)

COMMIT: http://review.gluster.org/15433 committed in release-3.8 by Raghavendra Bhat (raghavendra)

------

commit 22ea98a31f147bcd1e4643c2b77f503c63b03a4e
Author: Kotresh HR <khiremat>
Date: Tue Sep 6 18:28:42 2016 +0530

feature/bitrot: Fix recovery of corrupted hardlink

Problem:
When a file with hardlinks is corrupted in an EC (disperse) volume, the recovery steps described below were not working: only the name and metadata were healed, not the data.

Cause:
The bad-file marker in the inode context is not removed. Hence, when self-heal tries to open the file for data healing, the open fails with EIO.

Background:
Bitrot deletes the inode context during forget. Briefly, recovery involves the following steps:
1. Delete the entry marked with the bad-file xattr from the backend, along with all of its hardlinks, including the .glusterfs hardlink.
2. Access each hardlink of the file, including the original, from the mount.

Step 2 sends a lookup to the brick from which the files were deleted on the backend, and it returns ENOENT. On ENOENT, the server xlator forgets the inode if there are no dentries associated with it. But in the case of hardlinks, forget is not called because dentries (the other hardlink files) are still associated with the inode. Hence the bitrot stub does not delete its context, and data self-heal fails.

Fix:
Bitrot-stub should delete the inode context on getting ENOENT during lookup.

>Change-Id: Ice6adc18625799e7afd842ab33b3517c2be264c1
>BUG: 1373520
>Signed-off-by: Kotresh HR <khiremat>
>Reviewed-on: http://review.gluster.org/15408
>Smoke: Gluster Build System <jenkins.org>
>NetBSD-regression: NetBSD Build System <jenkins.org>
>CentOS-regression: Gluster Build System <jenkins.org>
>Reviewed-by: Raghavendra Bhat <raghavendra>

(cherry picked from commit b86a7de9b5ea9dcd0a630dbe09fce6d9ad0d8944)

Change-Id: Ice6adc18625799e7afd842ab33b3517c2be264c1
BUG: 1374565
Signed-off-by: Kotresh HR <khiremat>
Reviewed-on: http://review.gluster.org/15433
Smoke: Gluster Build System <jenkins.org>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
Reviewed-by: Raghavendra Bhat <raghavendra>

All 3.8.x bugs are now reported against version 3.8 (without .x). For more information, see http://www.gluster.org/pipermail/gluster-devel/2016-September/050859.html

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.4, please open a new bug report.

glusterfs-3.8.4 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/announce/2016-September/000060.html
[2] https://www.gluster.org/pipermail/gluster-users/
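For readers unfamiliar with the inode-context lifecycle the commit message describes, the following is a minimal, self-contained C sketch of the behaviour change. All names here (inode_t, bad_object, lookup_cbk, inode_forget) are illustrative stand-ins and not the actual GlusterFS xlator API; the sketch only models why the bad-object marker lingered while other hardlinks kept the inode alive, and how clearing it on ENOENT unblocks data self-heal.

```c
/* Simplified model of the bitrot-stub fix described above.
 * Names are illustrative, not the real GlusterFS code. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int  nlookup;     /* dentries (hardlinks) still referencing the inode */
    bool bad_object;  /* bitrot-stub context: file marked as corrupted    */
} inode_t;

/* Forget runs only once no dentries reference the inode; this is the
 * only place the stale bad-object marker used to be cleared. */
static void inode_forget(inode_t *inode)
{
    if (inode->nlookup == 0)
        inode->bad_object = false;
}

/* Lookup reply after the backend file was removed during recovery.
 * With the fix, an ENOENT reply also clears the bad-object marker,
 * even when other hardlinks keep the inode alive and forget never runs. */
static void lookup_cbk(inode_t *inode, int op_errno, bool with_fix)
{
    if (op_errno == ENOENT) {
        if (with_fix)
            inode->bad_object = false;  /* the essence of the fix */
        inode->nlookup--;               /* this name is gone from the brick */
        inode_forget(inode);            /* no-op while hardlinks remain */
    }
}

int main(void)
{
    /* Corrupted file with two hardlinks; recovery deleted it on the brick. */
    inode_t before = { .nlookup = 2, .bad_object = true };
    inode_t after  = { .nlookup = 2, .bad_object = true };

    lookup_cbk(&before, ENOENT, false);  /* pre-fix behaviour  */
    lookup_cbk(&after,  ENOENT, true);   /* post-fix behaviour */

    printf("without fix: bad_object=%d -> self-heal open fails with EIO\n",
           before.bad_object);
    printf("with fix:    bad_object=%d -> data self-heal can proceed\n",
           after.bad_object);
    return 0;
}
```

Run as-is, the model prints that the marker survives the pre-fix lookup (so a subsequent open would fail with EIO) and is cleared by the post-fix lookup, mirroring the failure and recovery described in the commit message.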