Bug 1012296

Summary: host UUID xml tag is required in rebalance/remove-brick status xml output
Product: [Community] GlusterFS
Component: cli
Version: mainline
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Reporter: Bala.FA <barumuga>
Assignee: Bala.FA <barumuga>
CC: avishwan, barumuga, dpati, gluster-bugs, vbellur
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-3.5.0
Doc Type: Bug Fix
Clone Of: 1010975
Bug Depends On: 1010975
Type: Bug
Last Closed: 2014-04-17 11:48:47 UTC

Comment 1 Anand Avati 2013-09-26 09:18:13 UTC
REVIEW: http://review.gluster.org/6005 (cli: add node uuid in rebalance and remove brick status xml output) posted (#1) for review on master by Bala FA (barumuga)

Comment 2 Anand Avati 2013-09-30 08:33:57 UTC
REVIEW: http://review.gluster.org/6005 (cli: add node uuid in rebalance and remove brick status xml output) posted (#2) for review on master by Bala FA (barumuga)

Comment 3 Anand Avati 2013-10-03 11:16:57 UTC
REVIEW: http://review.gluster.org/6032 (cli: add node uuid in rebalance and remove brick status xml output) posted (#1) for review on release-3.4 by Bala FA (barumuga)

Comment 4 Anand Avati 2013-10-04 05:26:05 UTC
COMMIT: http://review.gluster.org/6005 committed in master by Anand Avati (avati) 
------
commit d9db4a8ff300012eee87f31d73e303862d2de9b6
Author: Bala.FA <barumuga>
Date:   Thu Sep 26 08:09:35 2013 +0530

    cli: add node uuid in rebalance and remove brick status xml output
    
    This patch adds the node UUID to the rebalance/remove-brick status XML
    output.  The output will look like:
    
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cliOutput>
      <opRet>0</opRet>
      <opErrno>0</opErrno>
      <opErrstr/>
      <volRebalance>
        <op>3</op>
        <nodeCount>1</nodeCount>
        <node>
          <nodeName>localhost</nodeName>
     ==>> <id>883626f8-4d29-4d02-8c5d-c9f48c5b2445</id>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <status>3</status>
          <statusStr>completed</statusStr>
        </node>
        <aggregate>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <status>3</status>
          <statusStr>completed</statusStr>
        </aggregate>
      </volRebalance>
    </cliOutput>
    
    Change-Id: I5a1d4f9043b33b9e88150647a243ddb16154e843
    BUG: 1012296
    Signed-off-by: Bala.FA <barumuga>
    Reviewed-on: http://review.gluster.org/6005
    Reviewed-by: Kaushal M <kaushal>
    Tested-by: Gluster Build System <jenkins.com>

Comment 5 Anand Avati 2013-10-24 20:25:44 UTC
COMMIT: http://review.gluster.org/6032 committed in release-3.4 by Vijay Bellur (vbellur) 
------
commit 437d51f42813299435c297e9c0a1312dcaf0a6f4
Author: Bala.FA <barumuga>
Date:   Thu Sep 26 08:09:35 2013 +0530

    cli: add node uuid in rebalance and remove brick status xml output
    
    This patch adds the node UUID to the rebalance/remove-brick status XML
    output.  The output will look like:
    
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cliOutput>
      <opRet>0</opRet>
      <opErrno>0</opErrno>
      <opErrstr/>
      <volRebalance>
        <op>3</op>
        <nodeCount>1</nodeCount>
        <node>
          <nodeName>localhost</nodeName>
     ==>> <id>883626f8-4d29-4d02-8c5d-c9f48c5b2445</id>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <status>3</status>
          <statusStr>completed</statusStr>
        </node>
        <aggregate>
          <files>0</files>
          <size>0</size>
          <lookups>0</lookups>
          <failures>0</failures>
          <status>3</status>
          <statusStr>completed</statusStr>
        </aggregate>
      </volRebalance>
    </cliOutput>
    
    Change-Id: Ie2eb6e8d024605326d1a710b7c40ee30139f0f22
    BUG: 1012296
    Signed-off-by: Bala.FA <barumuga>
    Reviewed-on: http://review.gluster.org/6032
    Reviewed-by: Kaushal M <kaushal>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>

Comment 6 Anand Avati 2013-11-26 19:51:41 UTC
COMMIT: http://review.gluster.org/6331 committed in release-3.4 by Anand Avati (avati) 
------
commit e412da34e927737efae711740191c59749214e9a
Author: Bala.FA <barumuga>
Date:   Thu Nov 21 17:16:39 2013 +0530

    cli: use proper copy to set node-name
    
    Previously, node-name was set to point directly at node-uuid, which
    could cause a memory leak.  This is fixed by storing an independent
    copy of node-uuid instead.
    
    BUG: 1012296
    Change-Id: I4a7123771e2d8c31c5db4f78d022a9f4fbfc2667
    Signed-off-by: Bala.FA <barumuga>
    Reviewed-on: http://review.gluster.org/6331
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Anand Avati <avati>

Comment 7 Anand Avati 2013-11-26 19:55:56 UTC
COMMIT: http://review.gluster.org/6330 committed in master by Anand Avati (avati) 
------
commit 8690388bc7b3fe92c5dfc43a7173d5f05137e9cd
Author: Bala.FA <barumuga>
Date:   Thu Nov 21 17:16:39 2013 +0530

    cli: use proper copy to set node-name
    
    Previously, node-name was set to point directly at node-uuid, which
    could cause a memory leak.  This is fixed by storing an independent
    copy of node-uuid instead.
    
    BUG: 1012296
    Change-Id: I3b638ec289d5b167c6e752ef1ba41f41efacb9da
    Signed-off-by: Bala.FA <barumuga>
    Reviewed-on: http://review.gluster.org/6330
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Anand Avati <avati>

Comment 8 Niels de Vos 2014-04-17 11:48:47 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed in glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user