+++ This bug was initially created as a clone of Bug #1272318 +++

Description of problem:
All the common services of a tiered volume, such as Quota and NFS, are tagged inside the <coldBricks> tag. It would be good if the gluster volume status XML output for a tiered volume had the same tag structure as the XML output for a non-tiered volume.

Version-Release number of selected component (if applicable):
glusterfs-3.7.5-0.18

How reproducible:
Always

Steps to Reproduce:
1. Create a tiered volume
2. Run gluster volume status --xml

Actual results:
[root@node31 upstream]# gluster volume status tiervol --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>tiervol</volName>
        <nodeCount>10</nodeCount>
        <hotBricks>
          <node>
            <hostname>10.70.46.140</hostname>
            <path>/bricks/brick2/tiervol_tier1</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>49163</port>
            <ports>
              <tcp>49163</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19078</pid>
          </node>
          <node>
            <hostname>10.70.46.174</hostname>
            <path>/bricks/brick2/tiervol_tier0</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>49167</port>
            <ports>
              <tcp>49167</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23585</pid>
          </node>
        </hotBricks>
        <coldBricks>
          <node>
            <hostname>10.70.46.174</hostname>
            <path>/bricks/brick2/tiervol_brick0</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>49165</port>
            <ports>
              <tcp>49165</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23400</pid>
          </node>
          <node>
            <hostname>10.70.46.140</hostname>
            <path>/bricks/brick1/tiervol_brick1</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>49161</port>
            <ports>
              <tcp>49161</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18928</pid>
          </node>
          <node>
            <hostname>10.70.46.174</hostname>
            <path>/bricks/brick3/tiervol_brick2</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>49166</port>
            <ports>
              <tcp>49166</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23418</pid>
          </node>
          <node>
            <hostname>10.70.46.140</hostname>
            <path>/bricks/brick2/tiervol_brick3</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>49162</port>
            <ports>
              <tcp>49162</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18946</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>localhost</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>2049</port>
            <ports>
              <tcp>2049</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23605</pid>
          </node>
          <node>
            <hostname>Quota Daemon</hostname>
            <path>localhost</path>
            <peerid>9ff4f219-ae5f-49b8-9255-b6e986053d8d</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>23811</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>10.70.46.140</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>2049</port>
            <ports>
              <tcp>2049</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19099</pid>
          </node>
          <node>
            <hostname>Quota Daemon</hostname>
            <path>10.70.46.140</path>
            <peerid>8e684b4a-9be7-45e8-8a19-506ec7184eb6</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19260</pid>
          </node>
        </coldBricks>
        <tasks>
          <task>
            <type>Tier migration</type>
            <id>ebdb671e-8371-4507-8be2-96c5db0a49ba</id>
            <status>1</status>
            <statusStr>in progress</statusStr>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>

Expected results:

Additional info:
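To make the parsing impact concrete, here is a minimal Python sketch (not part of the original report) that walks the <coldBricks> section of the XML above. With the current layout it prints the NFS Server and Quota Daemon entries alongside the real cold bricks, which is exactly what trips up a consumer expecting the non-tiered layout. The file name tiervol-status.xml is an assumption; point it at the captured output of gluster volume status tiervol --xml.

# Sketch: list everything the XML files under <coldBricks>. With the reported
# layout, service daemons (NFS Server, Quota Daemon) show up here alongside
# the actual cold-tier bricks. Input file name is hypothetical.
import xml.etree.ElementTree as ET

root = ET.parse('tiervol-status.xml').getroot()

for volume in root.findall('./volStatus/volumes/volume'):
    print('volume:', volume.findtext('volName'))
    for node in volume.findall('./coldBricks/node'):
        hostname = node.findtext('hostname')
        path = node.findtext('path')
        # "NFS Server" and "Quota Daemon" rows are printed here too,
        # even though they are not bricks.
        print('  coldBricks entry:', hostname, path)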
Upstream patch : http://review.gluster.org/#/c/13101/
3.7 patch : http://review.gluster.org/#/c/13757/
Targeting this BZ for 3.2.0.
Upstream mainline : http://review.gluster.org/13101
Upstream 3.8 : Available as part of branching from mainline

The fix is available in rhgs-3.2.0 as part of the rebase to GlusterFS 3.8.4.
Created attachment 1209155 [details] gluster-vol-status.xml
Verified the fix in build glusterfs-server-3.8.4-2: the hot bricks, cold bricks, and process tags are kept separate. Marking the bug as verified.
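As a quick sanity check of that layout (a hedged sketch, not part of the verification run), the attached gluster-vol-status.xml can be scanned to confirm that the service daemons no longer appear under the brick sections. The tag that now carries the process entries is not reproduced here, so only the negative condition is asserted.

# Sketch: assert that NFS Server / Quota Daemon entries are no longer nested
# inside <hotBricks> or <coldBricks> in the attached status XML.
import xml.etree.ElementTree as ET

SERVICES = {'NFS Server', 'Quota Daemon'}

root = ET.parse('gluster-vol-status.xml').getroot()
for section in ('hotBricks', 'coldBricks'):
    for node in root.findall('.//%s/node' % section):
        assert node.findtext('hostname') not in SERVICES, \
            'service entry still listed under <%s>' % section
print('OK: brick sections contain only bricks')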
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2017-0486.html