Description of problem:
Tiering-related information is not displayed in the gluster volume status XML output. It would be good if this information were included in the XML output for automation purposes.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Create a volume.
2. Attach tier bricks.
3. Execute "gluster volume status --xml"

Actual results:
Tiering-related information is not displayed in the gluster volume status XML output.

Expected results:
Tiering-related information should be displayed in the gluster volume status XML output.

Additional info:

[root@node31 ~]# gluster volume status
Status of volume: testvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick 10.70.46.51:/bricks/brick0/testvol_ti
er1                                         49159     0          Y       6272
Brick 10.70.47.76:/bricks/brick1/testvol_ti
er0                                         49168     0          Y       20069
Cold Bricks:
Brick 10.70.47.76:/bricks/brick0/testvol_br
ick0                                        49167     0          Y       19975
NFS Server on localhost                     2049      0          Y       20090
NFS Server on 10.70.46.51                   2049      0          Y       6293

Task Status of Volume testvol
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : bc9c2ca3-0d8e-4096-8fbb-25c61323218b
Status               : in progress

[root@node31 ~]# gluster volume status --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr>(null)</opErrstr>
  <volStatus>
    <volumes>
      <volume>
        <volName>testvol</volName>
        <nodeCount>5</nodeCount>
        <node>
          <hostname>10.70.46.51</hostname>
          <path>/bricks/brick0/testvol_tier1</path>
          <peerid>9d77138d-ce50-4fdd-9dad-6c4efbd391e7</peerid>
          <status>1</status>
          <port>49159</port>
          <ports>
            <tcp>49159</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>6272</pid>
        </node>
        <node>
          <hostname>10.70.47.76</hostname>
          <path>/bricks/brick1/testvol_tier0</path>
          <peerid>261b213b-a9f6-4fb6-8313-11e7eba47258</peerid>
          <status>1</status>
          <port>49168</port>
          <ports>
            <tcp>49168</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>20069</pid>
        </node>
        <node>
          <hostname>10.70.47.76</hostname>
          <path>/bricks/brick0/testvol_brick0</path>
          <peerid>261b213b-a9f6-4fb6-8313-11e7eba47258</peerid>
          <status>1</status>
          <port>49167</port>
          <ports>
            <tcp>49167</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>19975</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>localhost</path>
          <peerid>261b213b-a9f6-4fb6-8313-11e7eba47258</peerid>
          <status>1</status>
          <port>2049</port>
          <ports>
            <tcp>2049</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>20090</pid>
        </node>
        <node>
          <hostname>NFS Server</hostname>
          <path>10.70.46.51</path>
          <peerid>9d77138d-ce50-4fdd-9dad-6c4efbd391e7</peerid>
          <status>1</status>
          <port>2049</port>
          <ports>
            <tcp>2049</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>6293</pid>
        </node>
        <tasks>
          <task>
            <type>Rebalance</type>
            <id>bc9c2ca3-0d8e-4096-8fbb-25c61323218b</id>
            <status>1</status>
            <statusStr>in progress</statusStr>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
[root@node31 ~]#
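Note that the plain-text output groups bricks under "Hot Bricks:" and "Cold Bricks:", but the XML output gives an automation script no way to tell which bricks belong to which tier. As a minimal sketch of the kind of check the automation needs (Python standard library only; the <hotBricks>/<coldBricks> element names are an assumption about how the grouping could be exposed, not the actual output format):

import subprocess
import xml.etree.ElementTree as ET

# Run the status command and parse its XML output.
out = subprocess.check_output(["gluster", "volume", "status", "--xml"])
root = ET.fromstring(out)

for volume in root.findall("./volStatus/volumes/volume"):
    name = volume.findtext("volName")
    hot = volume.find("hotBricks")    # assumed element name, not in current output
    cold = volume.find("coldBricks")  # assumed element name, not in current output
    if hot is None or cold is None:
        print("%s: XML output has no hot/cold tier grouping" % name)
    else:
        print("%s: %d hot brick(s), %d cold brick(s)"
              % (name, len(hot.findall("node")), len(cold.findall("node"))))

With the current output this only reports that the tier grouping is missing, which is exactly what blocks the automation.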
Hi Dan, we need this fixed with the highest priority so that we can continue with automation; otherwise our automation may be blocked.
*** This bug has been marked as a duplicate of bug 1258338 ***
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.5, please open a new bug report. glusterfs-3.7.5 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://www.gluster.org/pipermail/gluster-users/2015-October/023968.html [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user