Bug 1229251
| Summary: | Data Tiering: Need to change volume info details such as volume type and number of bricks when a tier is attached to an EC (disperse) volume | ||
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Nag Pavan Chilakam <nchilaka> |
| Component: | tier | Assignee: | Bug Updates Notification Mailing List <rhs-bugs> |
| Status: | CLOSED ERRATA | QA Contact: | Nag Pavan Chilakam <nchilaka> |
| Severity: | medium | Docs Contact: | |
| Priority: | urgent | ||
| Version: | rhgs-3.1 | CC: | annair, asrivast, bugs, josferna, rhs-bugs, rkavunga, storage-qa-internal, trao, vagarwal |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | RHGS 3.1.0 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | Bug Fix | |
| Doc Text: | Story Points: | --- | |
| Clone Of: | 1212019 | Environment: | |
| Last Closed: | 2015-07-29 04:58:50 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | 1212019 | ||
| Bug Blocks: | 1186580, 1202842 | ||
|
Description
Nag Pavan Chilakam, 2015-06-08 10:31:44 UTC
This bug has been verified with both types of EC volumes: pure disperse and distributed-disperse.

With a plain disperse volume:

```
[root@rhsqa14-vm3 ~]# gluster v create ec1 disperse-data 2 redundancy 1 10.70.47.159:/rhs/brick1/ec1 10.70.46.2:/rhs/brick1/ec1 10.70.47.159:/rhs/brick2/ec1 force
volume create: ec1: success: please start the volume to access data
[root@rhsqa14-vm3 ~]# gluster v start ec1
volume start: ec1: success
[root@rhsqa14-vm3 ~]# gluster v info

Volume Name: ec1
Type: Disperse
Volume ID: 088e6bb8-83e4-4304-85a8-79a4e292afd8
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/ec1
Brick2: 10.70.46.2:/rhs/brick1/ec1
Brick3: 10.70.47.159:/rhs/brick2/ec1
Options Reconfigured:
performance.readdir-ahead: on

Volume Name: ecvol
Type: Disperse
Volume ID: 140cf106-24e8-4c0b-8b87-75c4d361fdca
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/e0
Brick2: 10.70.46.2:/rhs/brick1/e0
Brick3: 10.70.47.159:/rhs/brick2/e0
Brick4: 10.70.46.2:/rhs/brick2/e0
Brick5: 10.70.47.159:/rhs/brick3/e0
Brick6: 10.70.46.2:/rhs/brick3/e0
Options Reconfigured:
performance.readdir-ahead: on

Volume Name: test
Type: Tier
Volume ID: 0b2070bd-eca0-4f1a-bdab-207a3f0a95f7
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.2:/rhs/brick3/t0
Brick2: 10.70.47.159:/rhs/brick3/t0
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 4
Brick3: 10.70.47.159:/rhs/brick1/t0
Brick4: 10.70.46.2:/rhs/brick1/t0
Brick5: 10.70.47.159:/rhs/brick2/t0
Brick6: 10.70.46.2:/rhs/brick2/t0
Options Reconfigured:
performance.readdir-ahead: on

[root@rhsqa14-vm3 ~]# gluster v info ec1

Volume Name: ec1
Type: Disperse
Volume ID: 088e6bb8-83e4-4304-85a8-79a4e292afd8
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/ec1
Brick2: 10.70.46.2:/rhs/brick1/ec1
Brick3: 10.70.47.159:/rhs/brick2/ec1
Options Reconfigured:
performance.readdir-ahead: on

[root@rhsqa14-vm3 ~]# gluster v ec1 status
unrecognized word: ec1 (position 1)
[root@rhsqa14-vm3 ~]# gluster v status ec1
Status of volume: ec1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.47.159:/rhs/brick1/ec1          49159     0          Y       27228
Brick 10.70.46.2:/rhs/brick1/ec1            49159     0          Y       16170
Brick 10.70.47.159:/rhs/brick2/ec1          49160     0          Y       27246
NFS Server on localhost                     2049      0          Y       27265
Self-heal Daemon on localhost               N/A       N/A        Y       27273
NFS Server on 10.70.46.2                    2049      0          Y       16189
Self-heal Daemon on 10.70.46.2              N/A       N/A        Y       16197

Task Status of Volume ec1
------------------------------------------------------------------------------
There are no active volume tasks

[root@rhsqa14-vm3 ~]# gluster v attach-tier ec1 10.70.47.159:/rhs/brick4/ec1 10.70.46.2:/rhs/brick4/ec1
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: ec1: success: Rebalance on ec1 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: b25fb766-f9bf-4df2-a1ff-3d43c2f4faa5
[root@rhsqa14-vm3 ~]# gluster v info ec1

Volume Name: ec1
Type: Tier
Volume ID: 088e6bb8-83e4-4304-85a8-79a4e292afd8
Status: Started
Number of Bricks: 5
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.2:/rhs/brick4/ec1
Brick2: 10.70.47.159:/rhs/brick4/ec1
Cold Tier:
Cold Tier Type : Disperse
Number of Bricks: 1 x (2 + 1) = 3
Brick3: 10.70.47.159:/rhs/brick1/ec1
Brick4: 10.70.46.2:/rhs/brick1/ec1
Brick5: 10.70.47.159:/rhs/brick2/ec1
Options Reconfigured:
performance.readdir-ahead: on
```

With a distributed-disperse volume:

```
[root@rhsqa14-vm3 ~]# gluster v create ec2 disperse-data 4 redundancy 2 10.70.47.159:/rhs/brick1/ec2 10.70.46.2:/rhs/brick1/ec2 10.70.47.159:/rhs/brick2/ec2 10.70.46.2:/rhs/brick2/ec2 10.70.47.159:/rhs/brick3/ec2 10.70.46.2:/rhs/brick3/ec2 force
volume create: ec2: success: please start the volume to access data
[root@rhsqa14-vm3 ~]# gluster v start ec2
volume start: ec2: success
[root@rhsqa14-vm3 ~]# gluster v info ec2

Volume Name: ec2
Type: Disperse
Volume ID: 6617f138-3f8b-4c21-99e2-bbdd25f98e70
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/ec2
Brick2: 10.70.46.2:/rhs/brick1/ec2
Brick3: 10.70.47.159:/rhs/brick2/ec2
Brick4: 10.70.46.2:/rhs/brick2/ec2
Brick5: 10.70.47.159:/rhs/brick3/ec2
Brick6: 10.70.46.2:/rhs/brick3/ec2
Options Reconfigured:
performance.readdir-ahead: on

[root@rhsqa14-vm3 ~]# gluster v add-brick ec2 10.70.47.159:/rhs/brick4/ec2 10.70.46.2:/rhs/brick4/ec2 10.70.47.159:/rhs/brick5/ec2 10.70.46.2:/rhs/brick5/ec2 10.70.47.159:/rhs/brick6/ec2 10.70.46.2:/rhs/brick6/ec2
volume add-brick: success
[root@rhsqa14-vm3 ~]# gluster v info ec2

Volume Name: ec2
Type: Distributed-Disperse
Volume ID: 6617f138-3f8b-4c21-99e2-bbdd25f98e70
Status: Started
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/ec2
Brick2: 10.70.46.2:/rhs/brick1/ec2
Brick3: 10.70.47.159:/rhs/brick2/ec2
Brick4: 10.70.46.2:/rhs/brick2/ec2
Brick5: 10.70.47.159:/rhs/brick3/ec2
Brick6: 10.70.46.2:/rhs/brick3/ec2
Brick7: 10.70.47.159:/rhs/brick4/ec2
Brick8: 10.70.46.2:/rhs/brick4/ec2
Brick9: 10.70.47.159:/rhs/brick5/ec2
Brick10: 10.70.46.2:/rhs/brick5/ec2
Brick11: 10.70.47.159:/rhs/brick6/ec2
Brick12: 10.70.46.2:/rhs/brick6/ec2
Options Reconfigured:
performance.readdir-ahead: on

[root@rhsqa14-vm3 ~]# gluster v attach-tier ec2 10.70.47.159:/rhs/brick6/ec2_0 10.70.46.2:/rhs/brick6/ec2_0
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: ec2: success: Rebalance on ec2 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: bb666fe1-475c-45a8-8256-b2a6ff9bffc6
[root@rhsqa14-vm3 ~]# gluster v info ec2

Volume Name: ec2
Type: Tier
Volume ID: 6617f138-3f8b-4c21-99e2-bbdd25f98e70
Status: Started
Number of Bricks: 14
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.2:/rhs/brick6/ec2_0
Brick2: 10.70.47.159:/rhs/brick6/ec2_0
Cold Tier:
Cold Tier Type : Distributed-Disperse
Number of Bricks: 2 x (4 + 2) = 12
Brick3: 10.70.47.159:/rhs/brick1/ec2
Brick4: 10.70.46.2:/rhs/brick1/ec2
Brick5: 10.70.47.159:/rhs/brick2/ec2
Brick6: 10.70.46.2:/rhs/brick2/ec2
Brick7: 10.70.47.159:/rhs/brick3/ec2
Brick8: 10.70.46.2:/rhs/brick3/ec2
Brick9: 10.70.47.159:/rhs/brick4/ec2
Brick10: 10.70.46.2:/rhs/brick4/ec2
Brick11: 10.70.47.159:/rhs/brick5/ec2
Brick12: 10.70.46.2:/rhs/brick5/ec2
Brick13: 10.70.47.159:/rhs/brick6/ec2
Brick14: 10.70.46.2:/rhs/brick6/ec2
Options Reconfigured:
performance.readdir-ahead: on

[root@rhsqa14-vm3 ~]# rpm -qa | grep gluster
glusterfs-3.7.1-3.el6rhs.x86_64
glusterfs-cli-3.7.1-3.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-3.el6rhs.x86_64
glusterfs-libs-3.7.1-3.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-3.el6rhs.x86_64
glusterfs-fuse-3.7.1-3.el6rhs.x86_64
glusterfs-server-3.7.1-3.el6rhs.x86_64
glusterfs-rdma-3.7.1-3.el6rhs.x86_64
glusterfs-api-3.7.1-3.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-3.el6rhs.x86_64
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
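For reference, the verification flow for the plain-disperse case condenses to the following sketch. All commands, node IPs, and brick paths are taken from the transcript above; it assumes a two-node RHGS 3.1 cluster with glusterd running on both peers, so it is not runnable outside such an environment.

```shell
# Create and start a plain disperse (EC) volume: 2 data bricks + 1 redundancy brick.
gluster v create ec1 disperse-data 2 redundancy 1 \
    10.70.47.159:/rhs/brick1/ec1 \
    10.70.46.2:/rhs/brick1/ec1 \
    10.70.47.159:/rhs/brick2/ec1 force
gluster v start ec1

# Attach a 2-brick hot tier. In this release the CLI warns that
# attach-tier is recommended for testing purposes only and prompts y/n.
gluster v attach-tier ec1 \
    10.70.47.159:/rhs/brick4/ec1 \
    10.70.46.2:/rhs/brick4/ec1

# Verify the fix: "gluster v info" should now report Type: Tier with
# Number of Bricks: 5, a Distribute hot tier (2 bricks), and a Disperse
# cold tier shown as 1 x (2 + 1) = 3, rather than the old flat listing.
gluster v info ec1
```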