Bug 1335361 - Fix brick information output in heal info's xml output
Summary: Fix brick information output in heal info's xml output
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-12 04:48 UTC by Ravishankar N
Modified: 2016-05-12 05:06 UTC (History)
1 user

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2016-05-12 05:06:25 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Ravishankar N 2016-05-12 04:48:51 UTC
Description of problem:
Reported by Ramesh Nachimuthu (rnachimu_AT_redhat.com)
When a brick is down, `gluster volume heal volname info` prints the name of the brick, but `gluster volume heal volname info --xml` does not.

[09:54] <RameshN> [root@ovirt-node-1 vdsm]# gluster v heal v1 info  --xml
[09:54] <RameshN> <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
[09:54] <RameshN> <cliOutput>
[09:54] <RameshN>   <healInfo>
[09:54] <RameshN>     <bricks>
[09:54] <RameshN>       <brick hostUuid="(null)">
[09:54] <RameshN>         <name>information not available</name>
[09:54] <RameshN>         <status>Transport endpoint is not connected</status>
[09:54] <RameshN>         <numberOfEntries>-</numberOfEntries>
[09:54] <RameshN>       </brick>
[09:54] <RameshN>       <brick hostUuid="f0ef3d05-3ef3-411a-af2c-628b0a14278b">
[09:54] <RameshN>         <name>ovirt-node-1.test.com:/brick-2</name>
[09:54] <RameshN>         <status>Connected</status>
[09:54] <RameshN>         <numberOfEntries>0</numberOfEntries>
[09:54] <RameshN>       </brick>
[09:54] <RameshN>       <brick hostUuid="f0ef3d05-3ef3-411a-af2c-628b0a14278b">
[09:54] <RameshN>         <name>ovirt-node-1.test.com:/brick-3</name>
[09:54] <RameshN>         <status>Connected</status>
[09:55] <RameshN>         <numberOfEntries>0</numberOfEntries>
[09:55] <RameshN>       </brick>
[09:55] <RameshN>     </bricks>
[09:55] <RameshN>   </healInfo>
[09:55] <RameshN>   <opRet>0</opRet>
[09:55] <RameshN>   <opErrno>0</opErrno>
[09:55] <RameshN>   <opErrstr/>
[09:55] <RameshN> </cliOutput>
[09:55] <RameshN> [root@ovirt-node-1 vdsm]# gluster v heal v1 info
[09:55] <RameshN> Brick ovirt-node-1.test.com:/brick-1
[09:55] <RameshN> Status: Transport endpoint is not connected
[09:55] <RameshN> Number of entries: -
[09:55] <RameshN> Brick ovirt-node-1.test.com:/brick-2
[09:55] <RameshN> Status: Connected
[09:55] <RameshN> Number of entries: 0
[09:55] <RameshN> Brick ovirt-node-1.test.com:/brick-3
[09:55] <RameshN> Status: Connected
[09:55] <RameshN> Number of entries: 0
[09:55] <RameshN> [root@ovirt-node-1 vdsm]

Comment 1 Ravishankar N 2016-05-12 05:06:25 UTC
[10:23] <itisravi> RameshN: Is it okay that hostUUID is null when brick is down?
[10:24] <RameshN> I am ignoring the brick which is not up
[10:24] <itisravi> RameshN: because we can get that only when brick is down.
[10:24] <itisravi> RameshN: sorry I mean brick is up.
[10:24] <RameshN> why not brick name
[10:24] <RameshN> ok
[10:25] <itisravi> brick name we have locally.
[10:25] <itisravi> so we can print.
[10:26] <RameshN> ok. I need hostUuid to map. But anyway its not an issue, we have the brick list with hostuuid, and we check heal status for each brick, we can ignore if the brick is not up. Anyway we won't be able to get the entries 
[10:29] <itisravi> RameshN: umm so you want the brick name to be printed or not?
[10:30] <itisravi> RameshN: I just raised a bug :) https://bugzilla.redhat.com/show_bug.cgi?id=1335361
[10:30] <itisravi> wondering If I need to close it then.
[10:31] <RameshN> We can't use just brick name. If possible we need both hostUuid  and brick name, otherwise leave it. We can work with workaround :-)
[10:35] <itisravi> RameshN: ok, closing it then.
[10:35] <RameshN> ok

Closing it based on comments above.
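The workaround discussed above (consumers keep their own brick list with host UUIDs and skip bricks that are not up) can be sketched as follows. This is a minimal illustration, not vdsm's actual code; the function name `connected_bricks` and the trimmed sample XML are assumptions based on the output pasted in the description.

```python
import xml.etree.ElementTree as ET

# Trimmed heal-info XML as shown in the bug report: the first brick is
# down, so its hostUuid is "(null)" and its <name> carries no information.
HEAL_INFO_XML = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <healInfo>
    <bricks>
      <brick hostUuid="(null)">
        <name>information not available</name>
        <status>Transport endpoint is not connected</status>
        <numberOfEntries>-</numberOfEntries>
      </brick>
      <brick hostUuid="f0ef3d05-3ef3-411a-af2c-628b0a14278b">
        <name>ovirt-node-1.test.com:/brick-2</name>
        <status>Connected</status>
        <numberOfEntries>0</numberOfEntries>
      </brick>
    </bricks>
  </healInfo>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
</cliOutput>"""

def connected_bricks(xml_text):
    """Return (hostUuid, brick name, entry count) for reachable bricks,
    ignoring bricks reported with hostUuid "(null)" (i.e. brick down)."""
    root = ET.fromstring(xml_text)
    bricks = []
    for brick in root.iter("brick"):
        uuid = brick.get("hostUuid")
        if uuid == "(null)":
            # Brick is down: name and entry count are unavailable in the
            # XML output, so skip it and rely on the locally known brick
            # list keyed by hostUuid instead.
            continue
        bricks.append((uuid,
                       brick.findtext("name"),
                       brick.findtext("numberOfEntries")))
    return bricks

for uuid, name, entries in connected_bricks(HEAL_INFO_XML):
    print(uuid, name, entries)
```

Only the up brick survives the filter, which matches Ramesh's approach of mapping heal status onto a separately maintained brick list by hostUuid.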

