Bug 1032558 - Remove-brick with self-heal causes data loss

| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | shylesh <shmohan> |
|---|---|---|---|
| Component: | glusterfs | Assignee: | Pranith Kumar K <pkarampu> |
| Status: | CLOSED ERRATA | QA Contact: | shylesh <shmohan> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 2.1 | CC: | pkarampu, psriniva, spalai, vagarwal, vbellur |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 2.1.2 | | |
| Hardware: | x86_64 | OS: | Linux |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.4.0.50rhs | Doc Type: | Bug Fix |
| Doc Text: | Previously, when one of the bricks in a replica pair was offline, a few files were not migrated from the decommissioned bricks, resulting in some files missing. With this fix, data is migrated completely even when one of the bricks in the replica pair is offline. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| | 1032927 (view as bug list) | | |
| Last Closed: | 2014-02-25 08:05:10 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1032927 | | |
Description
shylesh, 2013-11-20 11:55:33 UTC

*** Bug 1031971 has been marked as a duplicate of this bug. ***

I was able to reproduce this bug in Big Bend.

```
[root@localhost mnt]# glusterfs --version
glusterfs 3.4.0.33rhs built on Sep 8 2013 13:20:25

[root@localhost mnt]# gluster volume info
Volume Name: test
Type: Replicate
Volume ID: d1166e18-a761-4ce3-8ef6-5a5ccfcd79ef
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.42.190:/brick2/test_brick1
Brick2: 10.70.43.118:/brick2/test_brick1
Brick3: 10.70.42.190:/brick2/test_brick2
Brick4: 10.70.43.118:/brick2/test_brick2
Options Reconfigured:
diagnostics.client-log-level: TRACE

[root@localhost mnt]# gluster volume remove-brick test 10.70.42.190:/brick2/test_brick2 10.70.43.118:/brick2/test_brick2 start
```

I rebooted node 10.70.43.118 after the remove-brick operation started. After remove-brick commit, I found some of the files still on the removed brick.

Result of `ls -R` on the removed brick 10.70.43.118:/brick2/test_brick2:

```
./3/5/2/2:
0  1  2  3  4  5  file.1  file.2  file.3  file.4
```

Result of `ls ./3/5/2/2` on the mount point:

```
[root@localhost mnt]# ls ./3/5/2/2/
0  1  2  3  4  5  file.0  file.5
```

(file.1 through file.4 are missing from the mount.)

Verified on glusterfs-3.4.0.52rhs-1.el6rhs.x86_64. Please review the text for technical accuracy.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html
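The verification the reporter performed, comparing the removed brick's contents against what the client mount shows, can be sketched as a small generic shell helper. This is a minimal sketch, not part of gluster: `unmigrated_files` is a hypothetical function name, and the brick/mount paths are illustrative arguments; any output indicates files that remove-brick left behind.

```shell
# Hedged sketch: report files present on a decommissioned brick but absent
# from the volume mount. Any output means data was not fully migrated.
# unmigrated_files is a hypothetical helper, not a gluster command.
unmigrated_files() {
  brick=$1
  mount=$2
  # Files on the brick, excluding gluster's internal .glusterfs metadata tree
  find "$brick" -path "$brick/.glusterfs" -prune -o -type f -print |
    sed "s|^$brick/||" | sort > /tmp/brick_list
  # Files visible through the client mount
  find "$mount" -type f | sed "s|^$mount/||" | sort > /tmp/mount_list
  # Lines unique to the brick list are files left behind by remove-brick
  comm -23 /tmp/brick_list /tmp/mount_list
}
```

For the listings above, running it against the removed brick and the mount would print file.1 through file.4, matching the missing files the reporter observed.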