Bug 1008826
Summary: | [RFE] Dist-geo-rep : remove-brick commit(for brick(s) on master volume) should kill geo-rep worker process for the bricks getting removed. | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Rachana Patel <racpatel> |
Component: | geo-replication | Assignee: | Kotresh HR <khiremat> |
Status: | CLOSED ERRATA | QA Contact: | storage-qa-internal <storage-qa-internal> |
Severity: | high | Docs Contact: | |
Priority: | high | ||
Version: | 2.1 | CC: | aavati, avishwan, csaba, mzywusko, rhinduja |
Target Milestone: | --- | Keywords: | FutureFeature |
Target Release: | RHGS 3.1.0 | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | consistency | ||
Fixed In Version: | glusterfs-3.7.0-2.el6rhs | Doc Type: | Enhancement |
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2015-07-29 04:28:59 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1202842, 1223636 |
Description
Rachana Patel
2013-09-17 07:06:46 UTC
Verified with build glusterfs-3.7.0-2.el6rhs.x86_64. As mentioned in comment 3, the remove-brick steps have changed: commit is not allowed while a geo-rep session is active, and the correct error message is shown. To perform a commit, the geo-rep session must first be stopped, which kills all geo-rep worker processes. Hence the issue originally reported in this bug will no longer be seen. Moving the bug to verified state.

```
[root@georep1 scripts]# gluster volume remove-brick master 10.70.46.96:/rhs/brick2/b2 10.70.46.97:/rhs/brick2/b2 10.70.46.93:/rhs/brick2/b2 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: failed: geo-replication sessions are active for the volume master. Stop geo-replication sessions involved in this volume. Use 'volume geo-replication status' command for more info.
[root@georep1 scripts]#
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave stop
Stopping geo-replication session between master & 10.70.46.154::slave has been successful
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status detail

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED    ENTRY    DATA    META    FAILURES    CHECKPOINT TIME    CHECKPOINT COMPLETED    CHECKPOINT COMPLETION TIME
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Stopped    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     2015-05-27 20:43:15
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Stopped    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     2015-05-27 20:44:17
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Stopped    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Stopped    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Stopped    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Stopped    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A

[root@georep1 scripts]# gluster volume remove-brick master 10.70.46.96:/rhs/brick2/b2 10.70.46.97:/rhs/brick2/b2 10.70.46.93:/rhs/brick2/b2 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
[root@georep1 scripts]#
[root@georep1 scripts]# ps auxw | grep master | grep feedback | grep /rhs/brick
[root@georep1 scripts]# ps -eaf | grep gsync
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
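The verified workflow above (stop the geo-rep session, commit the remove-brick, then restart geo-rep) can also be scripted. The following bash sketch is illustrative only and is not part of the bug report: the volume, slave, and brick names are copied from the transcript above, while the `pgrep -f` check for leftover gsyncd workers, the `gluster --mode=script` non-interactive invocation, and the final geo-rep restart are assumptions about the environment rather than steps recorded during verification.

```bash
#!/bin/bash
# Sketch (not from the bug report) of the verified workflow:
# stop geo-replication before remove-brick commit, since commit is
# rejected while a geo-rep session is active.
set -euo pipefail

# Names taken from the transcript above; adjust for your environment.
MASTER_VOL="master"
SLAVE="10.70.46.154::slave"
BRICKS="10.70.46.96:/rhs/brick2/b2 10.70.46.97:/rhs/brick2/b2 10.70.46.93:/rhs/brick2/b2"

# Stop the geo-rep session; this also kills the gsyncd worker processes.
gluster volume geo-replication "$MASTER_VOL" "$SLAVE" stop

# Sanity check (assumption, not in the transcript): no gsyncd workers left.
if pgrep -f "gsyncd.*${MASTER_VOL}" > /dev/null; then
    echo "gsyncd workers still running for ${MASTER_VOL}; aborting" >&2
    exit 1
fi

# Commit the remove-brick now that geo-rep is stopped.
# --mode=script is assumed available to suppress the interactive y/n prompt.
gluster --mode=script volume remove-brick "$MASTER_VOL" $BRICKS commit

# Restart geo-replication for the remaining bricks (not shown in the
# transcript above, but the usual follow-up once the commit succeeds).
gluster volume geo-replication "$MASTER_VOL" "$SLAVE" start
```

Run on a node of the master volume's trusted storage pool; with geo-replication stopped first, the commit succeeds and `ps -eaf | grep gsync` shows no leftover worker processes, matching the verification output above.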