Bug 1229257
| Field | Value | Field | Value |
|---|---|---|---|
| Summary: | Incorrect vol info post detach on disperse volume | | |
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Nag Pavan Chilakam <nchilaka> |
| Component: | tier | Assignee: | Bug Updates Notification Mailing List <rhs-bugs> |
| Status: | CLOSED ERRATA | QA Contact: | Nag Pavan Chilakam <nchilaka> |
| Severity: | medium | Docs Contact: | |
| Priority: | urgent | | |
| Version: | rhgs-3.1 | CC: | annair, asrivast, bugs, dlambrig, josferna, nchilaka, rhs-bugs, rkavunga, storage-qa-internal, trao, vagarwal |
| Target Milestone: | --- | | |
| Target Release: | RHGS 3.1.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1214387 | Environment: | |
| Last Closed: | 2015-07-29 04:59:00 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1214387 | | |
| Bug Blocks: | 1202842 | | |
Description
Nag Pavan Chilakam
2015-06-08 10:36:35 UTC
```shell
[root@rhsqa14-vm3 ~]# gluster v create ecvol disperse 6 redundancy 2 10.70.47.159:/rhs/brick1/e0 10.70.46.2:/rhs/brick1/e0 10.70.47.159:/rhs/brick2/e0 10.70.46.2:/rhs/brick2/e0 10.70.47.159:/rhs/brick3/e0 10.70.46.2:/rhs/brick3/e0 force
volume create: ecvol: success: please start the volume to access data
[root@rhsqa14-vm3 ~]# gluster v info

Volume Name: ecvol
Type: Disperse
Volume ID: 140cf106-24e8-4c0b-8b87-75c4d361fdca
Status: Created
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/e0
Brick2: 10.70.46.2:/rhs/brick1/e0
Brick3: 10.70.47.159:/rhs/brick2/e0
Brick4: 10.70.46.2:/rhs/brick2/e0
Brick5: 10.70.47.159:/rhs/brick3/e0
Brick6: 10.70.46.2:/rhs/brick3/e0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]#
[root@rhsqa14-vm3 ~]# gluster v attach-tier ecvol replica 2 10.70.47.159:/rhs/brick4/e0 10.70.46.2:/rhs/brick4/e0
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: ecvol: failed: Volume ecvol needs to be started to perform rebalance
Failed to run tier start. Please execute tier start command explictly
Usage : gluster volume rebalance <volname> tier start
[root@rhsqa14-vm3 ~]# gluster v start ecvol
volume start: ecvol: success
[root@rhsqa14-vm3 ~]# gluster v attach-tier ecvol replica 2 10.70.47.159:/rhs/brick4/e0 10.70.46.2:/rhs/brick4/e0
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: failed: Volume ecvol is already a tier.
[root@rhsqa14-vm3 ~]# gluster v attach-tier ecvol replica 2 10.70.47.159:/rhs/brick4/e0 10.70.46.2:/rhs/brick4/e0 force
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: failed: Volume ecvol is already a tier.
```
```shell
[root@rhsqa14-vm3 ~]# gluster v info

Volume Name: ecvol
Type: Tier
Volume ID: 140cf106-24e8-4c0b-8b87-75c4d361fdca
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: 10.70.46.2:/rhs/brick4/e0
Brick2: 10.70.47.159:/rhs/brick4/e0
Cold Tier:
Cold Tier Type : Disperse
Number of Bricks: 1 x (4 + 2) = 6
Brick3: 10.70.47.159:/rhs/brick1/e0
Brick4: 10.70.46.2:/rhs/brick1/e0
Brick5: 10.70.47.159:/rhs/brick2/e0
Brick6: 10.70.46.2:/rhs/brick2/e0
Brick7: 10.70.47.159:/rhs/brick3/e0
Brick8: 10.70.46.2:/rhs/brick3/e0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]# gluster volume rebalance ecvol tier start
volume rebalance: ecvol: success: Rebalance on ecvol has been started successfully. Use rebalance status command to check status of the rebalance process. ID: d72d6a3b-b9b2-479c-b961-0bf500a98588
[root@rhsqa14-vm3 ~]#
[root@rhsqa14-vm3 ~]# gluster volume rebalance ecvol tier status
Node                 Promoted files       Demoted files        Status
---------            ---------            ---------            ---------
localhost            0                    0                    in progress
10.70.46.2           0                    0                    in progress
volume rebalance: ecvol: success:
[root@rhsqa14-vm3 ~]#
[root@rhsqa14-vm3 ~]# gluster v detach-tier ecvol start
volume detach-tier start: success
ID: 1a9a6afa-f81a-4372-b3d0-ccc43c874661
[root@rhsqa14-vm3 ~]#
[root@rhsqa14-vm3 ~]# gluster v detach-tier ecvol commit
volume detach-tier commit: success
Check the detached bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
```
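For regression testing, the tiered layout reported above can be checked programmatically instead of by eye. A minimal sketch in Python (the `parse_vol_info` helper is hypothetical, not part of gluster; it assumes the plain-text `gluster v info` format shown in this report):

```python
import re

def parse_vol_info(text):
    """Extract the tier-related fields from `gluster v info` output.

    Hypothetical helper: only pulls the fields needed to verify the
    hot/cold tier layout described in this bug report.
    """
    info = {"type": None, "bricks": None, "hot_type": None, "cold_type": None}
    m = re.search(r"^Type: (.+)$", text, re.MULTILINE)
    if m:
        info["type"] = m.group(1).strip()
    # First "Number of Bricks" line is the volume-wide total.
    m = re.search(r"^Number of Bricks: (.+)$", text, re.MULTILINE)
    if m:
        info["bricks"] = m.group(1).strip()
    m = re.search(r"Hot Tier Type : (\S+)", text)
    if m:
        info["hot_type"] = m.group(1)
    m = re.search(r"Cold Tier Type : (\S+)", text)
    if m:
        info["cold_type"] = m.group(1)
    return info

# Sample taken verbatim from the `gluster v info` output above.
sample = """Volume Name: ecvol
Type: Tier
Volume ID: 140cf106-24e8-4c0b-8b87-75c4d361fdca
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Cold Tier:
Cold Tier Type : Disperse
Number of Bricks: 1 x (4 + 2) = 6
"""

info = parse_vol_info(sample)
assert info["type"] == "Tier"
assert info["bricks"] == "8"
assert info["hot_type"] == "Replicate"
assert info["cold_type"] == "Disperse"
```

In practice the `sample` string would be replaced by the captured output of `gluster v info <volname>`.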
```shell
[root@rhsqa14-vm3 ~]# gluster v info

Volume Name: ecvol
Type: Disperse
Volume ID: 140cf106-24e8-4c0b-8b87-75c4d361fdca
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/e0
Brick2: 10.70.46.2:/rhs/brick1/e0
Brick3: 10.70.47.159:/rhs/brick2/e0
Brick4: 10.70.46.2:/rhs/brick2/e0
Brick5: 10.70.47.159:/rhs/brick3/e0
Brick6: 10.70.46.2:/rhs/brick3/e0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]#
[root@rhsqa14-vm3 ~]# rpm -qa | grep gluster
glusterfs-3.7.1-3.el6rhs.x86_64
glusterfs-cli-3.7.1-3.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-3.el6rhs.x86_64
glusterfs-libs-3.7.1-3.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-3.el6rhs.x86_64
glusterfs-fuse-3.7.1-3.el6rhs.x86_64
glusterfs-server-3.7.1-3.el6rhs.x86_64
glusterfs-rdma-3.7.1-3.el6rhs.x86_64
glusterfs-api-3.7.1-3.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-3.el6rhs.x86_64
```

NOTE: Volume info is displayed correctly after detach-tier. However, when I/O was running on the volume while detach-tier was executed, the I/O failed and exited on the mount point. This bug will be marked verified; a separate bug will be opened for the I/O failure.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-1495.html
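The verification above hinges on three facts: after detach-tier the volume reports its original `Disperse` type, the original brick layout, and the same Volume ID it had before attach-tier. A small Python sketch of that check (the `vol_field` helper is hypothetical; the sample strings are abridged from the `gluster v info` outputs in this report):

```python
import re

def vol_field(text, field):
    """Return the first value of `field` from `gluster v info` text.

    Hypothetical helper used only to illustrate the verification step.
    """
    m = re.search(r"^%s: (.+)$" % re.escape(field), text, re.MULTILINE)
    return m.group(1).strip() if m else None

# Output before attach-tier and after detach-tier, abridged from above.
pre_attach = """Volume Name: ecvol
Type: Disperse
Volume ID: 140cf106-24e8-4c0b-8b87-75c4d361fdca
Status: Created
Number of Bricks: 1 x (4 + 2) = 6
"""

post_detach = """Volume Name: ecvol
Type: Disperse
Volume ID: 140cf106-24e8-4c0b-8b87-75c4d361fdca
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
"""

# The bug is fixed when the detached volume reports its original layout:
assert vol_field(post_detach, "Type") == "Disperse"
assert vol_field(post_detach, "Number of Bricks") == "1 x (4 + 2) = 6"
assert vol_field(post_detach, "Volume ID") == vol_field(pre_attach, "Volume ID")
```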