Bug 1024228
| Summary: | adding host uuids to volume status command xml output | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Dusmant <dpati> |
| Component: | glusterfs | Assignee: | Bala.FA <barumuga> |
| Status: | CLOSED ERRATA | QA Contact: | Prasanth <pprakash> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 2.1 | CC: | barumuga, dpati, dtsang, gluster-bugs, kmayilsa, knarra, mmahoney, pprakash, psriniva, sabose, sankarshan, sdharane, ssampat, vbellur |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 2.1.2 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.4.0.44.1u2rhs-1.el6rhs | Doc Type: | Bug Fix |
| Doc Text: | Previously, the XML output of the volume status command did not contain host UUIDs for bricks or for services such as NFS and SHD. Host UUIDs had to be found manually by reading the output of the 'gluster peer status' command and matching it against the volume status output. With this fix, the respective host and brick UUIDs are added to the brick, NFS, and SHD entries of the status XML output. | Story Points: | --- |
| Clone Of: | 955548 | Environment: | |
| Last Closed: | 2014-02-25 07:57:07 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | 955548 | ||
| Bug Blocks: | | | |
Description
Dusmant
2013-10-29 07:31:58 UTC
Without this fix, the filtering feature of RHSC is not going to work properly. The requirement is that the NFS/SHD services carry a UUID along with the hostname.

Upstream patch (under review): http://review.gluster.org/6162
Downstream patch: https://code.engineering.redhat.com/gerrit/15759

Verified. The full command output follows the quick spot-check sketch below.
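As a quick spot-check that the new field is present (a minimal sketch, not part of this bug's formal verification steps; it assumes the gluster CLI is on the PATH and uses the volume name repvol from the output below, with a plain grep so no XML tooling is needed):

```
# Print every <peerid> element that this fix adds to the status XML.
gluster volume status repvol --xml | grep -o '<peerid>[^<]*</peerid>'
```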
```
[root@vm10 ~]# gluster volume status repvol --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
<opRet>0</opRet>
<opErrno>115</opErrno>
<opErrstr/>
<volStatus>
<volumes>
<volume>
<volName>repvol</volName>
<nodeCount>6</nodeCount>
<node>
<hostname>vm10.lab.eng.blr.redhat.com</hostname>
<path>/home/1</path>
<peerid>ea241638-9cff-43bd-a29a-7b1c2e446bb0</peerid>
<status>1</status>
<port>49153</port>
<pid>30110</pid>
</node>
<node>
<hostname>vm11.lab.eng.blr.redhat.com</hostname>
<path>/home/1</path>
<peerid>b553c447-a6ca-40c0-92f5-94b6e2cd1b6f</peerid>
<status>1</status>
<port>49153</port>
<pid>17754</pid>
</node>
<node>
<hostname>NFS Server</hostname>
<path>localhost</path>
<peerid>ea241638-9cff-43bd-a29a-7b1c2e446bb0</peerid>
<status>1</status>
<port>2049</port>
<pid>30123</pid>
</node>
<node>
<hostname>Self-heal Daemon</hostname>
<path>localhost</path>
<peerid>ea241638-9cff-43bd-a29a-7b1c2e446bb0</peerid>
<status>1</status>
<port>N/A</port>
<pid>30131</pid>
</node>
<node>
<hostname>NFS Server</hostname>
<path>vm11.lab.eng.blr.redhat.com</path>
<peerid>b553c447-a6ca-40c0-92f5-94b6e2cd1b6f</peerid>
<status>1</status>
<port>2049</port>
<pid>17766</pid>
</node>
<node>
<hostname>Self-heal Daemon</hostname>
<path>vm11.lab.eng.blr.redhat.com</path>
<peerid>b553c447-a6ca-40c0-92f5-94b6e2cd1b6f</peerid>
<status>1</status>
<port>N/A</port>
<pid>17773</pid>
</node>
<tasks/>
</volume>
</volumes>
</volStatus>
</cliOutput>
```
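For the RHSC-style use case, each brick or service entry can now be paired with its peer UUID directly from this output. A rough shell sketch (an illustration only, not the RHSC implementation; it assumes standard grep/sed/paste utilities and relies on the hostname/path/peerid element order shown above):

```
# Pair each node's hostname and path with its peer UUID.
gluster volume status repvol --xml \
  | grep -oE '<(hostname|path|peerid)>[^<]*' \
  | sed -e 's/<hostname>/host=/' -e 's,<path>,path=,' -e 's/<peerid>/peer=/' \
  | paste - - -
```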
Bala, can you please verify that the edited doc text is technically accurate?

The doc text looks good to me.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html