Bug 1271725
| Summary: | Data Tiering: Disallow attach tier on a volume where any rebalance process is in progress to avoid deadlock (like remove brick commit pending etc.) | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Mohammed Rafi KC <rkavunga> |
| Component: | tier | Assignee: | Mohammed Rafi KC <rkavunga> |
| Status: | CLOSED ERRATA | QA Contact: | Nag Pavan Chilakam <nchilaka> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | rhgs-3.1 | CC: | asrivast, dlambrig, nchilaka, rhs-bugs, rkavunga, sankarshan, sbhaloth, storage-qa-internal |
| Target Milestone: | --- | Keywords: | Triaged, ZStream |
| Target Release: | RHGS 3.1.2 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.7.5-0.3 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1258833 | Environment: | |
| Last Closed: | 2016-03-01 05:39:27 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1258833 | | |
| Bug Blocks: | 1260783, 1260923, 1261819 | | |
Description
Mohammed Rafi KC
2015-10-14 14:37:20 UTC
The following steps were used to verify the BZ:
1. Created a distribute volume with 3 bricks.
2. Executed remove-brick to remove one of the bricks.
3. Without committing the remove-brick, attempted to attach a tier.
Expected result:
Attach tier should not be allowed while a rebalance/remove-brick operation is in progress.
Actual result:
Attach tier failed while the remove-brick was uncommitted and rebalance was in progress, as expected.
Volume Name: distribute
Type: Distribute
Volume ID: 8d918c68-893d-4173-8a09-1e0baef90b7f
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.70.46.111:/rhs/brick3/b1
Brick2: 10.70.46.90:/rhs/brick3/b2
Brick3: 10.70.46.136:/bricks/brick3/b3
Options Reconfigured:
performance.readdir-ahead: on
[root@localhost ~]# gluster vol remove-brick distribute 10.70.46.136:/bricks/brick3/b3 start
volume remove-brick start: success
ID: ea216473-7dba-4439-94cf-734f13891f55
[root@localhost ~]# gluster vol remove-brick distribute 10.70.46.136:/bricks/brick3/b3 status
Node Rebalanced-files size scanned failures skipped status run time in secs
--------- ----------- ----------- ----------- ----------- ----------- ------------ --------------
10.70.46.136 0 0Bytes 0 0 0 completed 0.00
[root@localhost ~]# gluster vol attach-tier distribute 10.70.46.111:/rhs/brick4/hot1 10.70.46.90:/rhs/brick4/hot2
volume attach-tier: failed: An earlier remove-brick task exists for volume distribute. Either commit it or stop it before attaching a tier.
Tier command failed
[root@localhost ~]# gluster vol remove-brick distribute 10.70.46.136:/bricks/brick3/b3 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
[root@localhost ~]# gluster vol attach-tier distribute 10.70.46.111:/rhs/brick4/hot1 10.70.46.90:/rhs/brick4/hot2
volume attach-tier: success
Tiering Migration Functionality: distribute: success: Attach tier is successful on distribute. use tier status to check the status.
ID: abd37426-ea0f-49c4-b502-833568162eb5
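The guard behaviour seen in the transcript (attach-tier refused while a remove-brick task is pending) can be illustrated with a small parser for the remove-brick status output. This is illustrative only: the function and parsing logic are hypothetical, and glusterd tracks task state internally rather than re-parsing CLI output.

```python
def remove_brick_in_progress(status_output):
    """Return True if any node in `gluster vol remove-brick ... status`
    output reports a task that has not completed (illustrative sketch)."""
    for line in status_output.splitlines():
        fields = line.split()
        # Data rows start with a node address; skip header/separator lines.
        if fields and fields[0][0].isdigit():
            if "completed" not in fields:
                return True
    return False

sample = """\
Node Rebalanced-files size scanned failures skipped status run time in secs
--------- ----------- ---- ------- -------- ------- ------ ----------------
10.70.46.136 0 0Bytes 0 0 0 in progress 0.00
"""
# A pending task means attach-tier must be refused, matching the error above.
print(remove_brick_in_progress(sample))  # True while status is not "completed"
```

Once every node reports "completed" and the task is committed, the check passes and attach-tier succeeds, as the transcript shows.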
Marking the BZ as verified on build:
rpm -qa | grep glusterfs
glusterfs-3.7.5-7.el7rhgs.x86_64
glusterfs-api-3.7.5-7.el7rhgs.x86_64
glusterfs-server-3.7.5-7.el7rhgs.x86_64
glusterfs-rdma-3.7.5-7.el7rhgs.x86_64
glusterfs-fuse-3.7.5-7.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-7.el7rhgs.x86_64
samba-vfs-glusterfs-4.2.4-6.el7rhgs.x86_64
glusterfs-libs-3.7.5-7.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-7.el7rhgs.x86_64
glusterfs-cli-3.7.5-7.el7rhgs.x86_64
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0193.html