+++ This bug was initially created as a clone of Bug #1104112 +++

Description of problem:
In a geo-rep mount-broker setup, the status output does not show which user on the slave the geo-rep relationship is established with.

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
# gluster v geo master geoaccount@10.70.43.170::slave status

MASTER NODE               MASTER VOL    MASTER BRICK                 SLAVE                  STATUS     CHECKPOINT STATUS    CRAWL STATUS
---------------------------------------------------------------------------------------------------------------------------------------------
redlake.blr.redhat.com    master        /bricks/brick1/master_b1     10.70.42.172::slave    Active     N/A                  Changelog Crawl
redlake.blr.redhat.com    master        /bricks/brick2/master_b5     10.70.42.172::slave    Active     N/A                  Changelog Crawl
redlake.blr.redhat.com    master        /bricks/brick3/master_b9     10.70.42.172::slave    Active     N/A                  Changelog Crawl
redeye.blr.redhat.com     master        /bricks/brick1/master_b4     10.70.42.208::slave    Passive    N/A                  N/A
redeye.blr.redhat.com     master        /bricks/brick2/master_b8     10.70.42.208::slave    Passive    N/A                  N/A
redeye.blr.redhat.com     master        /bricks/brick3/master_b12    10.70.42.208::slave    Passive    N/A                  N/A
redcell.blr.redhat.com    master        /bricks/brick1/master_b3     10.70.43.170::slave    Active     N/A                  Changelog Crawl
redcell.blr.redhat.com    master        /bricks/brick2/master_b7     10.70.43.170::slave    Active     N/A                  Changelog Crawl
redcell.blr.redhat.com    master        /bricks/brick3/master_b11    10.70.43.170::slave    Active     N/A                  Changelog Crawl
redcloak.blr.redhat.com   master        /bricks/brick1/master_b2     10.70.42.240::slave    Passive    N/A                  N/A
redcloak.blr.redhat.com   master        /bricks/brick2/master_b6     10.70.42.240::slave    Passive    N/A                  N/A
redcloak.blr.redhat.com   master        /bricks/brick3/master_b10    10.70.42.240::slave    Passive    N/A                  N/A
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

In the above example, the geo-rep relationship is established with the user geoaccount on the slave, but the status output does not show it.

Version-Release number of selected component (if applicable):
glusterfs-3.6.0.10-1.el6rhs

How reproducible:
Happens every time

Steps to Reproduce:
1. Create and start a geo-rep mount-broker setup using the following steps.
2. Create a new group on the slave nodes. For example, geogroup.
3. Create an unprivileged account on the slave nodes. For example, geoaccount. Make it a member of geogroup on all the slave nodes.
4. Create a new directory on all the slave nodes, owned by root and with permissions 0711. Ensure that the location where this directory is created is writable only by root, but that geoaccount is able to access it. For example, create a mountbroker-root directory at /var/mountbroker-root.
5. Add the following options to the glusterd volfile on the slave nodes (found at /etc/glusterfs/glusterd.vol), assuming the name of the slave volume is slavevol:
       option mountbroker-root /var/mountbroker-root
       option mountbroker-geo-replication.geoaccount slavevol
       option geo-replication-log-group geogroup
       option rpc-auth-allow-insecure on
6. Restart glusterd on all the slave nodes, then set up passwordless ssh from one of the master nodes to the user on one of the slave nodes, e.g. to geoaccount. (A consolidated sketch of these slave-side steps appears under Additional info below.)
7. Create the geo-rep relationship between master and slave for that user, e.g.:
       gluster volume geo-rep MASTERNODE geoaccount@SLAVENODE::slavevol create push-pem
8. On the slave node that was used to create the relationship, run /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh as root with the user name as argument, e.g.:
       # /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount
9. Start geo-rep with the slave user, e.g.:
       gluster volume geo-rep MASTERNODE geoaccount@SLAVENODE::slavevol start
10. Check the status:
       gluster volume geo-rep MASTERNODE geoaccount@SLAVENODE::slavevol status

Actual results:
The status output does not show which slave user the geo-rep session is established with.

Expected results:
The status output should show the slave user it has connected with.

Additional info:
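For reference, the slave-side preparation in steps 2 through 6 amounts to roughly the following, run as root on every slave node. This is only a sketch using the example names geogroup, geoaccount and /var/mountbroker-root from the steps above:

    # create the group and the unprivileged user that will own the session on the slave
    groupadd geogroup
    useradd -G geogroup geoaccount

    # mountbroker root: owned by root, mode 0711 so geoaccount can traverse but not write
    mkdir -p /var/mountbroker-root
    chmod 0711 /var/mountbroker-root

    # after appending the four mountbroker options to /etc/glusterfs/glusterd.vol:
    service glusterd restart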
REVIEW: http://review.gluster.org/9409 (geo-rep: Adding Slave user field to georep status) posted (#2) for review on master by Aravinda VK (avishwan)
REVIEW: http://review.gluster.org/9409 (geo-rep: Adding Slave user field to georep status) posted (#3) for review on master by Aravinda VK (avishwan)
REVIEW: http://review.gluster.org/9409 (geo-rep: Adding Slave user field to georep status) posted (#4) for review on master by Aravinda VK (avishwan)
REVIEW: http://review.gluster.org/9409 (geo-rep: Adding Slave user field to georep status) posted (#163) for review on master by Vijay Bellur (vbellur)
COMMIT: http://review.gluster.org/9409 committed in master by Vijay Bellur (vbellur)
------
commit 52765ad94f233e4b4e57515e49f5b7c9e7c08b43
Author: Aravinda VK <avishwan>
Date:   Wed Jan 7 21:56:58 2015 +0530

    geo-rep: Adding Slave user field to georep status

    New column introduced in Status output, "SLAVE USER".
    Slave user is not "root" in a non-root Geo-replication setup.

    Added additional tag in XML output: <slave_user>

    BUG: 1180459
    Change-Id: Ia48a5a8eb892ce883b9ec114be7bb2d46eff8535
    Signed-off-by: Aravinda VK <avishwan>
    Reviewed-on: http://review.gluster.org/9409
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Kotresh HR <khiremat>
    Reviewed-by: Avra Sengupta <asengupt>
    Reviewed-by: Venky Shankar <vshankar>
    Tested-by: Venky Shankar <vshankar>
    Reviewed-by: Vijay Bellur <vbellur>
    Tested-by: Vijay Bellur <vbellur>
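With a build containing this patch, the slave user should appear in the status output, and the XML form of the status should carry the new <slave_user> tag mentioned in the commit message. As a rough check, using the example session from this report (the exact output layout may differ):

    # gluster volume geo-replication master geoaccount@10.70.43.170::slave status --xml | grep slave_user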
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user