Bug 1245202
Summary: | Glusterfs-afr: Remove brick process ends up with split-brain issue along with failures in rebalance. | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Triveni Rao <trao> |
Component: | replicate | Assignee: | Ravishankar N <ravishankar> |
Status: | CLOSED WORKSFORME | QA Contact: | Nag Pavan Chilakam <nchilaka> |
Severity: | urgent | Docs Contact: | |
Priority: | urgent | ||
Version: | rhgs-3.1 | CC: | anepatel, asriram, mlawrenc, mzywusko, nbalacha, nchilaka, ravishankar, rhs-bugs, rmekala, sanandpa, sankarshan, sashinde, sasundar, spalai, storage-qa-internal, tdesala |
Target Milestone: | --- | Keywords: | ZStream |
Target Release: | --- | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | AFR | ||
Fixed In Version: | | Doc Type: | Known Issue
Doc Text: |
When rebalance is run as part of the remove-brick command, some files may be reported as being in split-brain and consequently are not migrated, even though the files are not actually in split-brain.
Workaround:
Manually copy the files that did not migrate from the bricks into the Gluster volume via the mount (see the sketch after the summary table below).
|
Story Points: | --- |
Clone Of: | | Environment: |
Last Closed: | 2016-06-15 09:19:44 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1243542 | ||
Bug Blocks: | 1216951 |
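
The following is a minimal, illustrative sketch of the documented workaround, not a command sequence taken from this report. The volume mount point (`/mnt/testvol`) and brick path (`/bricks/brick1`) are hypothetical placeholders; in practice only the files that the remove-brick status reported as failed or skipped need to be copied, typically after the remove-brick has been committed and the brick is no longer part of the volume.

```sh
#!/bin/sh
# Hedged sketch of the Known Issue workaround. All names below are placeholders:
#   /bricks/brick1  - brick that still holds the files that failed to migrate
#   /mnt/testvol    - FUSE mount of the Gluster volume
BRICK=/bricks/brick1
MOUNT=/mnt/testvol

# Walk the brick, skipping Gluster's internal .glusterfs directory, and copy each
# left-behind regular file into the same relative path on the client mount so
# DHT places it on the remaining bricks.
cd "$BRICK" || exit 1
find . -path ./.glusterfs -prune -o -type f -print | while read -r f; do
    mkdir -p "$MOUNT/$(dirname "$f")"
    cp -a "$f" "$MOUNT/$f"
done
```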
Description
Triveni Rao
2015-07-21 13:15:12 UTC
Sosreport uploaded: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1245202/sosreport-casino-vm1.lab.eng.blr.redhat.com.001-20150721022924.tar

I have already seen this issue with RHEV-RHGS integration, where a remove-brick operation led to a split-brain issue, which in turn caused application VMs to go into the PAUSED state. I filed https://bugzilla.redhat.com/show_bug.cgi?id=1243542 for that issue, but it was not accepted as a blocker because there is no real data loss.

Moving the component to AFR, as this is reported as a split-brain issue. Similar to bz#1244197.

This seems to be more of a dht-rebalance case. Prasad, can you check whether this problem still happens and update with the latest results?

Re-tried this remove-brick, dht-rebalance scenario on a distributed-replicate volume (4x3) with the recent build:

# rpm -qa | grep gluster
glusterfs-client-xlators-3.12.2-31.el7rhgs.x86_64
glusterfs-debuginfo-3.12.2-31.el7rhgs.x86_64
glusterfs-cli-3.12.2-31.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.3.x86_64
glusterfs-libs-3.12.2-31.el7rhgs.x86_64
glusterfs-api-3.12.2-31.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-31.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch

Rebalance completed successfully and no split-brain was observed; the issue is no longer seen.
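
For reference, a hedged sketch of the remove-brick/rebalance flow that was re-tested above, using standard gluster CLI commands. The volume name (`testvol`) and brick paths (`serverN:/bricks/b1`) are hypothetical and would need to match one full replica set of the actual 4x3 distributed-replicate volume.

```sh
VOL=testvol
BRICKS="server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1"   # one replica set (placeholders)

# Start decommissioning the replica set; this kicks off the rebalance that migrates its data.
gluster volume remove-brick $VOL $BRICKS start

# Poll until the migration reports "completed"; failed or skipped files indicate the problem.
gluster volume remove-brick $VOL $BRICKS status

# Verify that no files are (falsely) reported as being in split-brain during the migration.
gluster volume heal $VOL info split-brain

# Commit only after the status shows completion with no failures or skipped files.
gluster volume remove-brick $VOL $BRICKS commit
```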