+++ This bug was initially created as a clone of Bug #1324300 +++

Description of problem:
=======================
The hostname is not populated in the error message when the remove-brick commit operation fails.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.9-1

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Create a two-node cluster, create a 2x2 volume and start it.
2. Fuse mount the volume and write enough data.
3. Start removing one replica brick set // this triggers a rebalance.
4. Try to commit the remove-brick operation on both nodes in sequence and observe the error messages displayed on each node.

Actual results:
===============
The hostname is missing in the error message.

Expected results:
=================
The hostname should be populated in the error message when the operation fails.

Additional info:

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-04-06 00:35:53 EDT ---

This bug is automatically being proposed for the current z-stream release of Red Hat Gluster Storage 3 by setting the release flag 'rhgs-3.1.z' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Byreddy on 2016-04-06 00:38:40 EDT ---

Console log:
============
[root@dhcp43-157 ~]# gluster volume status
Status of volume: Dis-Rep
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.43.157:/bricks/brick0/brs0      49183     0          Y       25570
Brick 10.70.43.188:/bricks/brick0/brs1      49202     0          Y       6785
Brick 10.70.43.157:/bricks/brick1/brs2      49184     0          Y       25589
Brick 10.70.43.188:/bricks/brick1/brs3      49203     0          Y       6804
NFS Server on localhost                     2049      0          Y       25611
Self-heal Daemon on localhost               N/A       N/A        Y       25616
NFS Server on 10.70.43.188                  2049      0          Y       6826
Self-heal Daemon on 10.70.43.188            N/A       N/A        Y       6831

Task Status of Volume Dis-Rep
------------------------------------------------------------------------------
There are no active volume tasks

[root@dhcp43-157 ~]# gluster volume remove-brick Dis-Rep replica 2 10.70.43.157:/bricks/brick1/brs2 10.70.43.188:/bricks/brick1/brs3 start
volume remove-brick start: success
ID: 31ba351f-94e4-4120-b574-7183604bb6c5

[root@dhcp43-157 ~]# gluster volume remove-brick Dis-Rep replica 2 10.70.43.157:/bricks/brick1/brs2 10.70.43.188:/bricks/brick1/brs3 status
        Node  Rebalanced-files     size  scanned  failures  skipped       status  run time in h:m:s
   ---------  ----------------  -------  -------  --------  -------  -----------  -----------------
   localhost                22  307.9KB      310         0        0  in progress              0:0:9
10.70.43.188                 0   0Bytes        0         0        0  in progress              0:0:8

[root@dhcp43-157 ~]# gluster volume remove-brick Dis-Rep replica 2 10.70.43.157:/bricks/brick1/brs2 10.70.43.188:/bricks/brick1/brs3 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: failed: use 'force' option as migration is in progress
[root@dhcp43-157 ~]#

On the other peer node:
=======================
[root@dhcp43-188 ~]# gluster volume remove-brick Dis-Rep replica 2 10.70.43.157:/bricks/brick1/brs2 10.70.43.188:/bricks/brick1/brs3 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: failed: Staging failed on dhcp43-157.lab.eng.blr.redhat.com.
Error: use 'force' option as migration is in progress
[root@dhcp43-188 ~]#

--- Additional comment from Atin Mukherjee on 2016-04-06 01:13:56 EDT ---

This doesn't impact any functionality apart from the non-uniformity of the error message reported back to the CLI, hence setting both the severity and priority to low. We'll work on an upstream patch. Considering the severity, can we move this bug to 3.2, Byreddy?

--- Additional comment from Byreddy on 2016-04-07 00:19:23 EDT ---

(In reply to Atin Mukherjee from comment #3)
> This doesn't impact any functionality apart from the non-uniformity of the
> error message reported back to the CLI, hence setting both the severity and
> priority to low. We'll work on an upstream patch. Considering the severity,
> can we move this bug to 3.2, Byreddy?

This is not a regression and there is no functionality loss here; only the CLI error message is incomplete, so we can move this bug to 3.2.
REVIEW: http://review.gluster.org/13923 (glusterd: populate hostname in error message) posted (#1) for review on master by Atin Mukherjee (amukherj)
REVIEW: http://review.gluster.org/13923 (glusterd: populate hostname in error message) posted (#2) for review on master by Atin Mukherjee (amukherj)
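For context, "populate hostname in error message" means including the rejecting peer's hostname in the error string glusterd hands back to the CLI, the way the second node's output above already does ("Staging failed on <host>. Error: <reason>"), while the first node gets only the bare reason. The following is a minimal, self-contained C sketch of that idea. It is not the patch posted at review.gluster.org/13923; the struct, the helper name build_op_errstr, and the fallback behaviour are assumptions made purely for illustration and do not reflect glusterd's real internals.

/*
 * Illustrative sketch only -- not the change from review.gluster.org/13923.
 * Shows the general idea of prefixing an op error string with the peer's
 * hostname before it is returned to the CLI. The struct layout, helper name
 * and message format are assumptions for demonstration purposes.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the per-peer information a management daemon would track. */
typedef struct {
        char hostname[256];
} demo_peerinfo_t;

/*
 * Build the error string returned to the CLI. When the failure comes from a
 * remote peer, prepend "Staging failed on <hostname>." so the operator can
 * tell which node rejected the operation; otherwise report the reason as-is.
 */
static char *
build_op_errstr(const demo_peerinfo_t *peer, const char *reason)
{
        char buf[1024];

        if (peer && peer->hostname[0]) {
                snprintf(buf, sizeof(buf),
                         "Staging failed on %s. Error: %s",
                         peer->hostname, reason);
        } else {
                /* No peer information available: hostname-less message,
                 * which is the behaviour reported in this bug. */
                snprintf(buf, sizeof(buf), "%s", reason);
        }

        return strdup(buf);
}

int
main(void)
{
        demo_peerinfo_t peer = {
                .hostname = "dhcp43-157.lab.eng.blr.redhat.com"
        };
        char *msg;

        /* Remote-peer failure: hostname is included. */
        msg = build_op_errstr(&peer,
                              "use 'force' option as migration is in progress");
        printf("%s\n", msg);
        free(msg);

        /* Local/unknown-peer failure: no hostname in the message. */
        msg = build_op_errstr(NULL,
                              "use 'force' option as migration is in progress");
        printf("%s\n", msg);
        free(msg);

        return 0;
}

Compiled standalone, this prints the hostname-carrying message first and the hostname-less variant (the inconsistency reported here) second.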
This isn't a straightforward fix. Considering there is no functionality impact, we have decided not to fix this for now, given the other priorities.