+++ This bug was initially created as a clone of Bug #1366222 +++
+++ This bug was initially created as a clone of Bug #1366128 +++

Description of problem:
=======================
When bricks are offline and we request the heal info output for all bricks in XML format, the offline brick names are not available; the 'name' tag instead contains the message 'information not available'. However, the plain "heal info" command output displays the brick names even for offline bricks. It would be good to have the same information in the XML output as well.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.9-10.el7rhgs.x86_64

How reproducible:
=================
1/1

Steps to Reproduce:
===================
1. Create a replicated volume and start it. Create a mount and create files from the mount.
2. Bring down a brick. Modify the files.
3. Execute "gluster volume heal <volname> info --xml".

Actual results:
===============
Offline brick names are not shown in the XML output.

Expected results:
=================
Brick names should be shown in the XML output as well.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-08-11 02:41:28 EDT ---

This bug is automatically being proposed for the current release of Red Hat Gluster Storage 3 under active development, by setting the release flag 'rhgs-3.2.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.
--- Additional comment from on 2016-08-11 02:42:00 EDT ---

Output of 'heal info --xml' command:
====================================
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <healInfo>
    <bricks>
      <brick hostUuid="1af55777-086f-4dd7-bd2f-54981eeab596">
        <name>rhsauto030.lab.eng.blr.redhat.com:/bricks/brick0/hosdu_brick0</name>
        <file gfid="2fc97d4b-05c5-417d-84e7-392455938af6">/file1</file>
        <file gfid="00000000-0000-0000-0000-000000000001">/</file>
        <file gfid="e4c3c22d-4c21-402a-8101-bf4a0f30021a">/file2</file>
        <file gfid="d893cdd2-c37f-4b11-93aa-cff1c35d724e">/file3</file>
        <file gfid="a507c02f-3e68-4568-bf49-accc4ca57d36">/file4</file>
        <file gfid="c8fd2784-d20b-45d1-9a79-bfd0c2ea7d6a">/file5</file>
        <file gfid="232449a1-b55b-4cd7-8964-7294b8f058dc">/file6</file>
        <file gfid="146732dd-aed2-45ea-9da1-3046ae131046">/file7</file>
        <file gfid="4f54483a-9f8e-41ee-b9dc-71f4bb54ff2c">/file8</file>
        <file gfid="a35e17c6-899a-4dee-8c69-2d8db47994c9">/file9</file>
        <file gfid="ce8b6800-b491-4d90-963a-fd0e2e0d740d">/file10</file>
        <status>Connected</status>
        <numberOfEntries>11</numberOfEntries>
      </brick>
      <brick hostUuid="-">
        <name>information not available</name>
        <status>Transport endpoint is not connected</status>
        <numberOfEntries>-</numberOfEntries>
      </brick>
    </bricks>
  </healInfo>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
</cliOutput>

Output of 'heal info' command:
==============================
Brick rhsauto030.lab.eng.blr.redhat.com:/bricks/brick0/hosdu_brick0
/file1
/
/file2
/file3
/file4
/file5
/file6
/file7
/file8
/file9
/file10
Status: Connected
Number of entries: 11

Brick rhsauto031.lab.eng.blr.redhat.com:/bricks/brick0/hosdu_brick1
Status: Transport endpoint is not connected
Number of entries: -

--- Additional comment from Vijay Bellur on 2016-08-11 06:15:37 EDT ---

REVIEW: http://review.gluster.org/15146 (glfsheal: print brick name and path even when brick is down) posted (#1) for review on master by Ravishankar N (ravishankar)
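For tooling that consumes this output, the practical impact of the bug is visible when the XML is parsed: the offline brick surfaces only as the placeholder string, with no way to recover its host:path. A minimal Python sketch, using an abridged copy of the XML output above, shows this:

```python
import xml.etree.ElementTree as ET

# Abridged 'heal info --xml' output from this bug report. With the bug
# present, the offline brick's <name> holds placeholder text instead of
# the brick's host:path.
xml_output = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <healInfo>
    <bricks>
      <brick hostUuid="1af55777-086f-4dd7-bd2f-54981eeab596">
        <name>rhsauto030.lab.eng.blr.redhat.com:/bricks/brick0/hosdu_brick0</name>
        <status>Connected</status>
        <numberOfEntries>11</numberOfEntries>
      </brick>
      <brick hostUuid="-">
        <name>information not available</name>
        <status>Transport endpoint is not connected</status>
        <numberOfEntries>-</numberOfEntries>
      </brick>
    </bricks>
  </healInfo>
  <opRet>0</opRet>
</cliOutput>"""

root = ET.fromstring(xml_output)
for brick in root.iter("brick"):
    # The offline brick prints "information not available" as its name,
    # so a monitoring script cannot tell WHICH brick is down.
    print(brick.findtext("name"), "->", brick.findtext("status"))
```

With the fix described in the review link above, the second `<name>` would instead carry the brick's host:path, matching the plain `heal info` output.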
--- Additional comment from Vijay Bellur on 2016-08-12 02:25:33 EDT ---

COMMIT: http://review.gluster.org/15146 committed in master by Pranith Kumar Karampuri (pkarampu)

------

commit 5ef32c57f327e1dd4e9d227b9c8fd4b6f6fb4970
Author: Ravishankar N <ravishankar>
Date:   Thu Aug 11 10:10:25 2016 +0000

    glfsheal: print brick name and path even when brick is down

    The xml variant of heal info command does not display brick name
    when the brick is down due to a failure to fetch the hostUUID. But
    the non xml variant does. So fixed the xml variant to print the
    remote_host and remote_subvol even when the brick is down.

    Change-Id: I16347eb4455b9bcc7a9b0127f8783140b6016578
    BUG: 1366222
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/15146
    Reviewed-by: Anuradha Talur <atalur>
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
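The commit message describes the behavioral change rather than the code. As a rough illustration only (the actual fix lives in glfsheal's C sources; the function and parameter names below are hypothetical), the fallback it describes amounts to:

```python
def brick_display_name(host_uuid, remote_host, remote_subvol):
    # Hypothetical sketch of the fallback described in the commit message.
    # Pre-fix: when the hostUUID could not be fetched (brick down, uuid "-"),
    # the XML <name> was left as "information not available".
    # Post-fix: the name is always built from remote_host and remote_subvol,
    # regardless of whether the hostUUID is available.
    return f"{remote_host}:{remote_subvol}"

# The offline brick from this bug report would then be reported as:
print(brick_display_name("-", "rhsauto031.lab.eng.blr.redhat.com",
                         "/bricks/brick0/hosdu_brick1"))
# -> rhsauto031.lab.eng.blr.redhat.com:/bricks/brick0/hosdu_brick1
```

This matches the plain `heal info` output shown earlier, where the brick name is printed even though the status is "Transport endpoint is not connected".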
REVIEW: http://review.gluster.org/15156 (glfsheal: print brick name and path even when brick is down) posted (#1) for review on release-3.8 by Ravishankar N (ravishankar)
COMMIT: http://review.gluster.org/15156 committed in release-3.8 by Pranith Kumar Karampuri (pkarampu)

------

commit a027be1229eb68ce1bbf8b4092673c80bf4a18fc
Author: Ravishankar N <ravishankar>
Date:   Thu Aug 11 10:10:25 2016 +0000

    glfsheal: print brick name and path even when brick is down

    Backport of http://review.gluster.org/#/c/15146/

    The xml variant of heal info command does not display brick name
    when the brick is down due to a failure to fetch the hostUUID. But
    the non xml variant does. So fixed the xml variant to print the
    remote_host and remote_subvol even when the brick is down.

    Change-Id: I16347eb4455b9bcc7a9b0127f8783140b6016578
    BUG: 1366489
    Signed-off-by: Ravishankar N <ravishankar>
    (cherry picked from commit 5ef32c57f327e1dd4e9d227b9c8fd4b6f6fb4970)
    Reviewed-on: http://review.gluster.org/15156
    Reviewed-by: Anuradha Talur <atalur>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.3, please open a new bug report.

glusterfs-3.8.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/announce/2016-August/000059.html
[2] https://www.gluster.org/pipermail/gluster-users/