Bug 1229242
| Summary: | data tiering: force remove-brick is detaching the tier | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Nag Pavan Chilakam <nchilaka> |
| Component: | tier | Assignee: | Mohammed Rafi KC <rkavunga> |
| Status: | CLOSED ERRATA | QA Contact: | Nag Pavan Chilakam <nchilaka> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | rhgs-3.1 | CC: | bugs, gluster-bugs, hchiramm, josferna, rhs-bugs, rkavunga, storage-qa-internal, trao |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | RHGS 3.1.0 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1207238 | Environment: | |
| Last Closed: | 2015-07-29 04:58:35 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1207238 | | |
| Bug Blocks: | 1202842 | | |
Description
Nag Pavan Chilakam
2015-06-08 10:22:12 UTC
https://bugzilla.redhat.com/show_bug.cgi?id=1229242
-------------
data tiering: force remove-brick is detaching the tier

[root@rhsqa14-vm3 ~]# gluster v create test 10.70.47.159:/rhs/brick1/t0 10.70.46.2:/rhs/brick1/t0 10.70.47.159:/rhs/brick2/t0 10.70.46.2:/rhs/brick2/t0
volume create: test: success: please start the volume to access data
[root@rhsqa14-vm3 ~]# gluster v start test
volume start: test: success
[root@rhsqa14-vm3 ~]# gluster v info

Volume Name: ecvol
Type: Disperse
Volume ID: 140cf106-24e8-4c0b-8b87-75c4d361fdca
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/e0
Brick2: 10.70.46.2:/rhs/brick1/e0
Brick3: 10.70.47.159:/rhs/brick2/e0
Brick4: 10.70.46.2:/rhs/brick2/e0
Brick5: 10.70.47.159:/rhs/brick3/e0
Brick6: 10.70.46.2:/rhs/brick3/e0
Options Reconfigured:
performance.readdir-ahead: on

Volume Name: test
Type: Distribute
Volume ID: 0b2070bd-eca0-4f1a-bdab-207a3f0a95f7
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/t0
Brick2: 10.70.46.2:/rhs/brick1/t0
Brick3: 10.70.47.159:/rhs/brick2/t0
Brick4: 10.70.46.2:/rhs/brick2/t0
Options Reconfigured:
performance.readdir-ahead: on

[root@rhsqa14-vm3 ~]# gluster v attach-tier test 10.70.47.159:/rhs/brick3/t0 10.70.46.2:/rhs/brick3/t0
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: test: success: Rebalance on test has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: af7dd4b2-b4b7-4d72-9e12-847e3c231eea

[root@rhsqa14-vm3 ~]# gluster v info test

Volume Name: test
Type: Tier
Volume ID: 0b2070bd-eca0-4f1a-bdab-207a3f0a95f7
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.2:/rhs/brick3/t0
Brick2: 10.70.47.159:/rhs/brick3/t0
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 4
Brick3: 10.70.47.159:/rhs/brick1/t0
Brick4: 10.70.46.2:/rhs/brick1/t0
Brick5: 10.70.47.159:/rhs/brick2/t0
Brick6: 10.70.46.2:/rhs/brick2/t0
Options Reconfigured:
performance.readdir-ahead: on

[root@rhsqa14-vm3 ~]# gluster v remove-brick test 10.70.46.2:/rhs/brick3/t0 10.70.47.159:/rhs/brick3/t0 start
volume remove-brick start: failed: Removing brick from a Tier volume is not allowed
[root@rhsqa14-vm3 ~]# gluster v remove-brick test 10.70.46.2:/rhs/brick3/t0 10.70.47.159:/rhs/brick3/t0 force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: failed: Removing brick from a Tier volume is not allowed
[root@rhsqa14-vm3 ~]# gluster v remove-brick test 10.70.46.2:/rhs/brick3/t0 10.70.47.159:/rhs/brick3/t0 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: failed: Removing brick from a Tier volume is not allowed

[root@rhsqa14-vm3 ~]# gluster v info test

Volume Name: test
Type: Tier
Volume ID: 0b2070bd-eca0-4f1a-bdab-207a3f0a95f7
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.2:/rhs/brick3/t0
Brick2: 10.70.47.159:/rhs/brick3/t0
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 4
Brick3: 10.70.47.159:/rhs/brick1/t0
Brick4: 10.70.46.2:/rhs/brick1/t0
Brick5: 10.70.47.159:/rhs/brick2/t0
Brick6: 10.70.46.2:/rhs/brick2/t0
Options Reconfigured:
performance.readdir-ahead: on

Tried single-brick removal as well:

[root@rhsqa14-vm3 ~]# gluster v info test

Volume Name: test
Type: Tier
Volume ID: 0b2070bd-eca0-4f1a-bdab-207a3f0a95f7
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.2:/rhs/brick3/t0
Brick2: 10.70.47.159:/rhs/brick3/t0
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 4
Brick3: 10.70.47.159:/rhs/brick1/t0
Brick4: 10.70.46.2:/rhs/brick1/t0
Brick5: 10.70.47.159:/rhs/brick2/t0
Brick6: 10.70.46.2:/rhs/brick2/t0
Options Reconfigured:
performance.readdir-ahead: on

[root@rhsqa14-vm3 ~]# gluster v remove-brick test 10.70.46.2:/rhs/brick3/t0 start
volume remove-brick start: failed: Removing brick from a Tier volume is not allowed
[root@rhsqa14-vm3 ~]# gluster v remove-brick test 10.70.46.2:/rhs/brick3/t0 force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: failed: Removing brick from a Tier volume is not allowed
[root@rhsqa14-vm3 ~]# gluster v remove-brick test 10.70.46.2:/rhs/brick3/t0 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: failed: Removing brick from a Tier volume is not allowed

[root@rhsqa14-vm3 ~]# rpm -qa | grep gluster
glusterfs-3.7.1-3.el6rhs.x86_64
glusterfs-cli-3.7.1-3.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-3.el6rhs.x86_64
glusterfs-libs-3.7.1-3.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-3.el6rhs.x86_64
glusterfs-fuse-3.7.1-3.el6rhs.x86_64
glusterfs-server-3.7.1-3.el6rhs.x86_64
glusterfs-rdma-3.7.1-3.el6rhs.x86_64
glusterfs-api-3.7.1-3.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-3.el6rhs.x86_64
[root@rhsqa14-vm3 ~]#

This bug was verified with I/O in progress; attempts to remove a brick were not successful.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
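For reference, rejecting remove-brick on a tiered volume is the expected behaviour being verified above; the supported way to take the hot tier bricks out of the volume is the detach-tier workflow. Below is a minimal sketch of that workflow, assuming the detach-tier subcommand syntax of the glusterfs 3.7 series (start/status/commit); the volume name and brick paths are taken from the transcript above, and exact prompts or output may differ by build.

# Sketch (assumed glusterfs 3.7 detach-tier syntax): remove the hot tier
# from the tiered volume 'test' instead of using remove-brick.
gluster volume detach-tier test start    # start draining hot-tier data to the cold tier
gluster volume detach-tier test status   # repeat until the drain/rebalance completes
gluster volume detach-tier test commit   # detach the hot tier bricks from the volume
gluster volume info test                 # volume should report Type: Distribute again

Only after the commit completes should the hot tier bricks (10.70.46.2:/rhs/brick3/t0 and 10.70.47.159:/rhs/brick3/t0) be reused; remove-brick on a tiered volume is intentionally blocked, as the transcript shows.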