Bug 1073616
Summary: | Distributed volume rebalance errors due to hardlinks to .glusterfs/... | |
---|---|---|---
Product: | [Community] GlusterFS | Reporter: | Jeff Byers <jbyers>
Component: | distribute | Assignee: | bugs <bugs>
Status: | CLOSED EOL | QA Contact: |
Severity: | unspecified | Docs Contact: |
Priority: | unspecified | |
Version: | 3.4.2 | CC: | bugs, gluster-bugs, jbyers, nsathyan
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2015-10-07 13:49:43 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Jeff Byers
2014-03-06 19:44:25 UTC
Case 1:
-------
If rebalance is triggered by executing "gluster volume rebalance volume-name start", then a file that has hard links (other than the one link under .glusterfs) will not be migrated.

Case 2:
-------
If a brick is removed, however, files are migrated even when they have extra hard links.

Regarding case 1: any new file created on a gluster volume has exactly one hard link to the actual file under .glusterfs, and this .glusterfs hard link is present for all volume types (distributed and distributed-replicated volumes). Rebalance should not be blocked as long as there is just one link present inside the .glusterfs directory. On your system, however, there are two hard links under .glusterfs (a shell sketch for inspecting such links follows at the end of this report):

Link-1:
-------
60078978 /exports/nas-segment-0002/U5-4/.glusterfs/fd/9f/fd9f0601-3c28-49ad-86c4-569d4a6b63a0

Link-2:
-------
60078978 /exports/nas-segment-0002/U5-4/.glusterfs/ef/8e/ef8e9aae-5075-44aa-9344-d616971af197

Link-2 is the legitimate one, because the gfid of the file and the hard-link name are the same. The presence of Link-1 is suspicious. I will update Bugzilla if I can come up with cases that could lead to such a situation. If you can describe what operations were performed at the mount point, or share any script that ran against it, that would be helpful.

GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained; at the moment these are 3.6 and 3.5. This bug was filed against the 3.4 release and will not be fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. If updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs". If there is no response by the end of the month, this bug will get automatically closed.

GlusterFS 3.4.x has reached end-of-life. If this bug still exists in a later release, please reopen this bug and change the version, or open a new bug.
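
For readers debugging the same symptom, below is a minimal shell sketch (not part of the original report) for inspecting a file's hard links on a brick; the brick subdirectory and file name are hypothetical placeholders, so substitute your own. It prints the file's inode number and link count, reads its trusted.gfid xattr, and lists every path on the brick that shares the inode. Per the analysis above, a plain file should normally show exactly two paths: the file itself and its gfid-named link under .glusterfs.

```sh
# Hypothetical brick and file paths, for illustration only.
BRICK=/exports/nas-segment-0002/U5-4
FILE="$BRICK/some-dir/some-file"

# Inode number, hard-link count, and name of the file on the brick.
stat -c '%i %h %n' "$FILE"

# gfid xattr of the file (run as root on the brick server); the expected
# .glusterfs link is $BRICK/.glusterfs/<gfid[0:2]>/<gfid[2:4]>/<full-gfid>.
getfattr -n trusted.gfid -e hex "$FILE"

# Every path on the brick sharing the file's inode. Any extra entry under
# .glusterfs, like Link-1 above, is what makes rebalance skip the file.
find "$BRICK" -samefile "$FILE"
```

Note that this has to be run against the brick directory on the server, not the FUSE mount, since the .glusterfs tree exists only on the bricks.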