Bug 1271659
Summary: | gluster v status --xml for a replicated hot tier volume | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | hari gowtham <hgowtham>
Component: | tier | Assignee: | hari gowtham <hgowtham>
Status: | CLOSED ERRATA | QA Contact: | Nag Pavan Chilakam <nchilaka>
Severity: | unspecified | Docs Contact: |
Priority: | unspecified | |
Version: | rhgs-3.1 | CC: | rcyriac, rhs-bugs, sankarshan, smohan, storage-qa-internal
Target Milestone: | --- | Keywords: | Triaged, ZStream
Target Release: | RHGS 3.1.2 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | glusterfs-3.7.5-0.3 | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | 1268810 | Environment: |
Last Closed: | 2016-03-01 05:38:57 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1268810 | |
Bug Blocks: | 1260783, 1260923 | |
Description
hari gowtham 2015-10-14 12:49:34 UTC
Gluster status --xml for tier vol is working; moving to verified.

```
[root@zod ~]# rpm -qa | grep gluster
glusterfs-libs-3.7.5-5.el7rhgs.x86_64
glusterfs-fuse-3.7.5-5.el7rhgs.x86_64
glusterfs-3.7.5-5.el7rhgs.x86_64
glusterfs-server-3.7.5-5.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-5.el7rhgs.x86_64
glusterfs-cli-3.7.5-5.el7rhgs.x86_64
glusterfs-api-3.7.5-5.el7rhgs.x86_64
glusterfs-debuginfo-3.7.5-5.el7rhgs.x86_64
```

```
[root@zod ~]# gluster v status quota_one --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>quota_one</volName>
        <nodeCount>14</nodeCount>
        <hotBricks>
          <node>
            <hostname>yarrow</hostname>
            <path>/dummy/brick101/quota_one_hot</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>49185</port>
            <ports>
              <tcp>49185</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18811</pid>
          </node>
          <node>
            <hostname>zod</hostname>
            <path>/dummy/brick101/quota_one_hot</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>49185</port>
            <ports>
              <tcp>49185</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20257</pid>
          </node>
          <node>
            <hostname>yarrow</hostname>
            <path>/dummy/brick100/quota_one_hot</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>49184</port>
            <ports>
              <tcp>49184</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18854</pid>
          </node>
          <node>
            <hostname>zod</hostname>
            <path>/dummy/brick100/quota_one_hot</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>49184</port>
            <ports>
              <tcp>49184</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20275</pid>
          </node>
        </hotBricks>
        <coldBricks>
          <node>
            <hostname>zod</hostname>
            <path>/rhs/brick1/quota_one</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>49182</port>
            <ports>
              <tcp>49182</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20293</pid>
          </node>
          <node>
            <hostname>yarrow</hostname>
            <path>/rhs/brick1/quota_one</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>49182</port>
            <ports>
              <tcp>49182</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18883</pid>
          </node>
          <node>
            <hostname>zod</hostname>
            <path>/rhs/brick2/quota_one</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>49183</port>
            <ports>
              <tcp>49183</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20311</pid>
          </node>
          <node>
            <hostname>yarrow</hostname>
            <path>/rhs/brick2/quota_one</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>49183</port>
            <ports>
              <tcp>49183</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>18901</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>localhost</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>0</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>-1</pid>
          </node>
          <node>
            <hostname>Self-heal Daemon</hostname>
            <path>localhost</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20347</pid>
          </node>
          <node>
            <hostname>Quota Daemon</hostname>
            <path>localhost</path>
            <peerid>ad002db4-bdc0-43e3-aae7-c209012140b0</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>20356</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>10.70.34.43</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>0</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>-1</pid>
          </node>
          <node>
            <hostname>Self-heal Daemon</hostname>
            <path>10.70.34.43</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19003</pid>
          </node>
          <node>
            <hostname>Quota Daemon</hostname>
            <path>10.70.34.43</path>
            <peerid>236f7068-8b99-4aa0-a0b5-40b76146cdf4</peerid>
            <status>1</status>
            <port>N/A</port>
            <ports>
              <tcp>N/A</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>19012</pid>
          </node>
        </coldBricks>
        <tasks>
          <task>
            <type>Tier migration</type>
            <id>eae47ea7-aea5-4220-8f1d-c6cfc145875d</id>
            <status>1</status>
            <statusStr>in progress</statusStr>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
[root@zod ~]#
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html
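For anyone consuming this output programmatically, below is a minimal sketch of how the tier-aware `hotBricks`/`coldBricks` structure might be parsed with Python's standard `xml.etree.ElementTree`. The file name `status.xml` is a hypothetical placeholder for the captured CLI output above, not something the gluster CLI produces itself.

```python
# Minimal sketch: extract per-tier brick status and task state from
# `gluster v status <vol> --xml` output. Assumes the XML shown above
# was saved to "status.xml" (hypothetical name).
import xml.etree.ElementTree as ET

root = ET.parse("status.xml").getroot()  # <cliOutput>

for volume in root.iter("volume"):
    print("volume:", volume.findtext("volName"))
    for tier in ("hotBricks", "coldBricks"):
        section = volume.find(tier)
        if section is None:
            continue  # non-tiered volumes do not carry these sections
        for node in section.iter("node"):
            host = node.findtext("hostname")
            path = node.findtext("path")
            online = node.findtext("status") == "1"
            print(f"  [{tier}] {host}:{path} -> "
                  f"{'online' if online else 'offline'}")
    # The tier migration task appears under <tasks>.
    for task in volume.iter("task"):
        print("  task:", task.findtext("type"),
              "-", task.findtext("statusStr"))
```

Note that in the output above the NFS Server, Self-heal Daemon, and Quota Daemon entries also appear under `<coldBricks>`, so a consumer interested only in bricks would additionally need to filter on the `<path>` field (brick paths versus hostnames like `localhost`).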