Bug 1283957
| Summary: | Data Tiering: tier volume status shows as in-progress on all nodes of a cluster even if the node is not part of the volume | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Nag Pavan Chilakam <nchilaka> |
| Component: | tier | Assignee: | hari gowtham <hgowtham> |
| Status: | CLOSED ERRATA | QA Contact: | krishnaram Karthick <kramdoss> |
| Severity: | low | Docs Contact: | |
| Priority: | high | | |
| Version: | rhgs-3.1 | CC: | amukherj, asrivast, dlambrig, hgowtham, rhinduja, rhs-bugs, rkavunga, sankarshan, storage-qa-internal |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 3.1.3 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.7.9-4 | Doc Type: | Bug Fix |
| Doc Text: | If tiering is enabled for a volume, during volume restart the status of the tier daemon was incorrectly set to 'in progress' for all nodes. This meant that when status was requested for that volume, the tier daemon appeared to be running on all nodes, regardless of node type. A check has been added so that the tier daemon only runs, and only appears to be running, on tiered volumes, so the status displayed for the volumes is now correct. (A minimal sketch of such a check follows this table.) | | |
| Story Points: | --- | | |
| Clone Of: | | | |
| : | 1315666 (view as bug list) | Environment: | |
| Last Closed: | 2016-06-23 04:57:05 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1268895, 1299184, 1315666, 1316808, 1347509 | | |
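The Doc Text above describes the fix as an added check so that the tier daemon is only run, and only reported as running, where it actually belongs. Below is a minimal illustrative sketch of that kind of guard; the struct and helper names are hypothetical and this is not the code from the patches referenced later in this report.

```c
/* Minimal illustrative sketch of the kind of check described in the Doc Text.
 * This is NOT the actual glusterd code from the linked patches; the struct and
 * helper names are hypothetical. The idea: report the tier daemon
 * ("Task: Tier migration ... in progress") only for tiered volumes, and only
 * on nodes that actually participate in that volume. */
#include <stdbool.h>
#include <stdio.h>

struct volume_view {
    const char *name;
    bool        is_tiered;        /* volume has a hot tier attached */
    bool        node_has_brick;   /* this node hosts a brick of the volume */
};

/* Hypothetical guard evaluated before adding a tier-migration task entry
 * to this node's 'gluster v status' output. */
static bool
should_report_tier_status(const struct volume_view *vol)
{
    if (!vol->is_tiered)
        return false;             /* plain volumes have no tier daemon at all */
    if (!vol->node_has_brick)
        return false;             /* node is in the pool but not in this volume */
    return true;
}

int
main(void)
{
    /* Example: a pool node that hosts no brick of the tiered volume. */
    struct volume_view v = { "tier-test", true, false };

    printf("%s: %s\n", v.name,
           should_report_tier_status(&v) ? "Status: in progress"
                                         : "tier status not reported on this node");
    return 0;
}
```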
Description (Nag Pavan Chilakam, 2015-11-20 11:40:01 UTC)
Correction inline: When volume status or volume tier status is requested for a tiered volume, the status of all nodes in the trusted storage pool is listed as in progress, even when a node is not part of the tiered volume.

Reason: the tier daemon for every volume in the trusted storage pool runs on all nodes of the trusted storage pool, which is why this is seen.

The issue is still seen with build glusterfs-server-3.7.9-2.el7rhgs.x86_64. Node 'dhcp-47-90' isn't part of the volume, but tier migration is still shown as in progress on that node. Moving the bug to assigned. sosreports will be attached.

[root@dhcp47-90 yum.repos.d]# gluster v status
Status of volume: tier-test
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.47.9:/bricks/brick0/l1         49161     0          Y       28170
Cold Bricks:
Brick 10.70.47.90:/bricks/brick0/l1        49162     0          Y       8523
Brick 10.70.47.105:/bricks/brick0/l1       49162     0          Y       32168
NFS Server on localhost                    2049      0          Y       8543
NFS Server on 10.70.46.94                  2049      0          Y       1937
NFS Server on 10.70.47.9                   2049      0          Y       28190
NFS Server on 10.70.47.105                 2049      0          Y       32188

Task Status of Volume tier-test
------------------------------------------------------------------------------
Task                 : Tier migration
ID                   : d4654e28-88fa-40e7-965d-9525a2bbe67d
Status               : in progress

[root@dhcp47-105 yum.repos.d]# gluster v status
Status of volume: tier-test
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.47.9:/bricks/brick0/l1         49161     0          Y       28170
Cold Bricks:
Brick 10.70.47.90:/bricks/brick0/l1        49162     0          Y       8523
Brick 10.70.47.105:/bricks/brick0/l1       49162     0          Y       32168
NFS Server on localhost                    2049      0          Y       32188
NFS Server on 10.70.47.9                   2049      0          Y       28190
NFS Server on 10.70.46.94                  2049      0          Y       1937
NFS Server on dhcp47-90.lab.eng.blr.redhat.com    2049   0      Y       8543

Task Status of Volume tier-test
------------------------------------------------------------------------------
Task                 : Tier migration
ID                   : d4654e28-88fa-40e7-965d-9525a2bbe67d
Status               : in progress

[root@dhcp47-9 yum.repos.d]# gluster v status
Status of volume: tier-test
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.47.9:/bricks/brick0/l1         49161     0          Y       28170
Cold Bricks:
Brick 10.70.47.90:/bricks/brick0/l1        49162     0          Y       8523
Brick 10.70.47.105:/bricks/brick0/l1       49162     0          Y       32168
NFS Server on localhost                    2049      0          Y       28190
NFS Server on dhcp47-90.lab.eng.blr.redhat.com    2049   0      Y       8543
NFS Server on 10.70.46.94                  2049      0          Y       1937
NFS Server on 10.70.47.105                 2049      0          Y       32188

Task Status of Volume tier-test
------------------------------------------------------------------------------
Task                 : Tier migration
ID                   : d4654e28-88fa-40e7-965d-9525a2bbe67d
Status               : in progress

[root@dhcp46-94 yum.repos.d]# gluster v status
Status of volume: tier-test
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.47.9:/bricks/brick0/l1         49161     0          Y       28170
Cold Bricks:
Brick 10.70.47.90:/bricks/brick0/l1        49162     0          Y       8523
Brick 10.70.47.105:/bricks/brick0/l1       49162     0          Y       32168
NFS Server on localhost                    2049      0          Y       1937
NFS Server on dhcp47-90.lab.eng.blr.redhat.com    2049   0      Y       8543
NFS Server on 10.70.47.9                   2049      0          Y       28190
NFS Server on 10.70.47.105                 2049      0          Y       32188

Task Status of Volume tier-test
------------------------------------------------------------------------------
Task                 : Tier migration
ID                   : d4654e28-88fa-40e7-965d-9525a2bbe67d
Status               : in progress

[root@dhcp47-90 yum.repos.d]# gluster v tier tier-test status
Node                 Promoted files       Demoted files        Status
---------            ---------            ---------            ---------
localhost            0                    0                    in progress
10.70.47.105         0                    0                    in progress
10.70.47.9           0                    0                    in progress
10.70.46.94          0                    0                    in progress
Tiering Migration Functionality: tier-test: success

The fix works fine for volumes created on a system which already has the fix. However, when a tiered volume already exists on a system that is then upgraded to the build containing the fix, tier status continues to be shown on all nodes.

upstream master patch : http://review.gluster.org/#/c/14106/

patch on master : http://review.gluster.org/#/c/14106/
patch on 3.7 : http://review.gluster.org/#/c/14229/
patch on downstream : https://code.engineering.redhat.com/gerrit/#/c/73782/

Skipping the status belongs to another bug; this fix and that one do not go through the same code path. The issue mentioned above will be fixed under https://bugzilla.redhat.com/show_bug.cgi?id=1322695, so this bug is moved back to ON_QA.

The 'detach tier status' and 'tier status' commands skip updating the status of nodes which are down. The fix for bz#1322695 will address the issue in both these commands (a sketch of this behaviour follows at the end of this report).

Moving this bug to verified as the actual issue reported in this bz is addressed and verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240
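The closing comments note that the 'detach tier status' and 'tier status' commands skip updating the status of nodes that are down, with the remaining gap tracked in bz#1322695. The following is a minimal sketch of how such a listing could filter out non-participating and down nodes; every type, helper name, and the example pool state is hypothetical and not taken from glusterd.

```c
/* Minimal sketch only; not glusterd code. It illustrates the combined behaviour
 * described in the closing comments: when printing 'gluster v tier <vol> status',
 * skip nodes that are not part of the tiered volume (this bug) and skip nodes
 * that are down (bz#1322695). */
#include <stdbool.h>
#include <stdio.h>

struct node_status {
    const char *node;
    bool        is_up;        /* peer currently reachable */
    bool        in_volume;    /* node hosts a brick of the tiered volume */
    int         promoted;
    int         demoted;
};

static void
print_tier_status(const struct node_status *nodes, int count)
{
    printf("%-16s %-16s %-16s %s\n", "Node", "Promoted files", "Demoted files", "Status");
    for (int i = 0; i < count; i++) {
        if (!nodes[i].in_volume)
            continue;         /* this bug's fix: node is not part of the volume */
        if (!nodes[i].is_up)
            continue;         /* bz#1322695: do not report stale status for down nodes */
        printf("%-16s %-16d %-16d %s\n",
               nodes[i].node, nodes[i].promoted, nodes[i].demoted, "in progress");
    }
}

int
main(void)
{
    /* Illustrative pool state, not the state captured in the report above. */
    struct node_status pool[] = {
        { "10.70.47.9",   true,  true,  0, 0 },
        { "10.70.47.90",  true,  true,  0, 0 },
        { "10.70.47.105", false, true,  0, 0 },   /* down: skipped */
        { "10.70.46.94",  true,  false, 0, 0 },   /* not in the volume: skipped */
    };

    print_tier_status(pool, (int)(sizeof(pool) / sizeof(pool[0])));
    return 0;
}
```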