Bug 1268810
| Summary: | gluster v status --xml for a replicated hot tier volume | | |
| --- | --- | --- | --- |
| Product: | [Community] GlusterFS | Reporter: | hari gowtham <hgowtham> |
| Component: | tiering | Assignee: | hari gowtham <hgowtham> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | bugs <bugs> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | mainline | CC: | bugs, nchilaka, smohan |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.8rc2 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1271659 (view as bug list) | Environment: | |
| Last Closed: | 2016-06-16 13:39:34 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1271659 | | |
Description
hari gowtham 2015-10-05 10:50:39 UTC
Gluster status --xml for tier vol is working; moving to verified.

```
[root@zod ~]# rpm -qa|grep gluster
glusterfs-libs-3.7.5-5.el7rhgs.x86_64
glusterfs-fuse-3.7.5-5.el7rhgs.x86_64
glusterfs-3.7.5-5.el7rhgs.x86_64
glusterfs-server-3.7.5-5.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-5.el7rhgs.x86_64
glusterfs-cli-3.7.5-5.el7rhgs.x86_64
glusterfs-api-3.7.5-5.el7rhgs.x86_64
glusterfs-debuginfo-3.7.5-5.el7rhgs.x86_64

[root@zod ~]# gluster v status quota_one --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>quota_one</volName>
        <nodeCount>14</nodeCount>
        <hotBricks>
          <node>
            <hostname>yarrow</hostname>
            <path>/dummy/brick101/quota_one_hot</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>49185</port>
            <ports>
              <tcp>49185</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18811</pid>
          </node>
          <node>
            <hostname>zod</hostname>
            <path>/dummy/brick101/quota_one_hot</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>49185</port>
            <ports>
              <tcp>49185</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20257</pid>
          </node>
          <node>
            <hostname>yarrow</hostname>
            <path>/dummy/brick100/quota_one_hot</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>49184</port>
            <ports>
              <tcp>49184</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18854</pid>
          </node>
          <node>
            <hostname>zod</hostname>
            <path>/dummy/brick100/quota_one_hot</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>49184</port>
            <ports>
              <tcp>49184</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20275</pid>
          </node>
        </hotBricks>
        <coldBricks>
          <node>
            <hostname>zod</hostname>
            <path>/rhs/brick1/quota_one</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>49182</port>
            <ports>
              <tcp>49182</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20293</pid>
          </node>
          <node>
            <hostname>yarrow</hostname>
            <path>/rhs/brick1/quota_one</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>49182</port>
            <ports>
              <tcp>49182</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18883</pid>
          </node>
          <node>
            <hostname>zod</hostname>
            <path>/rhs/brick2/quota_one</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>49183</port>
            <ports>
              <tcp>49183</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20311</pid>
          </node>
          <node>
            <hostname>yarrow</hostname>
            <path>/rhs/brick2/quota_one</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>49183</port>
            <ports>
              <tcp>49183</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18901</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>localhost</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>0</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>-1</pid>
          </node>
          <node>
            <hostname>Self-heal Daemon</hostname>
            <path>localhost</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20347</pid>
          </node>
          <node>
            <hostname>Quota Daemon</hostname>
            <path>localhost</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20356</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>10.70.34.43</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>0</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>-1</pid>
          </node>
          <node>
            <hostname>Self-heal Daemon</hostname>
            <path>10.70.34.43</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19003</pid>
          </node>
          <node>
            <hostname>Quota Daemon</hostname>
            <path>10.70.34.43</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19012</pid>
          </node>
        </coldBricks>
        <tasks>
          <task>
            <type>Tier migration</type>
            <id>eae47ea7-aea5-4220-8f1d-c6cfc145875d</id>
            <status>1</status>
            <statusStr>in progress</statusStr>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
[root@zod ~]#
```

This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
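For anyone scripting against this output, the <hotBricks>/<coldBricks> sections and the tier-migration task shown in the verified output above can be read with any XML parser. The snippet below is a minimal sketch, not part of the original report: it assumes the CLI output has been saved to a file named status.xml (a placeholder name) and uses only element names that appear in the output above.

```python
# Minimal sketch: summarise a saved `gluster v status <volname> --xml` output.
# "status.xml" is an assumed placeholder for the captured CLI output.
import xml.etree.ElementTree as ET

root = ET.parse("status.xml").getroot()
volume = root.find("./volStatus/volumes/volume")

# Per-brick (and per-daemon) status for the hot and cold tiers.
for tier in ("hotBricks", "coldBricks"):
    section = volume.find(tier)
    if section is None:
        continue
    print(tier)
    for entry in section.findall("node"):
        host = entry.findtext("hostname")
        path = entry.findtext("path")
        online = entry.findtext("status") == "1"  # "1" means the process is up
        print(f"  {host}:{path} online={online}")

# State of any tier/rebalance tasks, e.g. "Tier migration in progress".
for task in volume.findall("./tasks/task"):
    print(task.findtext("type"), task.findtext("statusStr"))
```

The same check could be pointed at live output (for example by writing `gluster v status <volname> --xml` to a temporary file first); nothing here depends on the volume beyond the hot/cold brick split that this bug report verifies.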