Description of problem:
In a geo-rep mount-broker setup, status shows "Config Corrupted" when status is requested without the master and slave URLs.

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
# gluster v geo stat

MASTER NODE                MASTER VOL    MASTER BRICK                 SLAVE                  STATUS              CHECKPOINT STATUS    CRAWL STATUS
--------------------------------------------------------------------------------------------------------------------------------------------------
redlake.blr.redhat.com     master        /bricks/brick1/master_b1     10.70.42.172::slave    Config Corrupted    N/A                  N/A
redlake.blr.redhat.com     master        /bricks/brick2/master_b5     10.70.42.172::slave    Config Corrupted    N/A                  N/A
redlake.blr.redhat.com     master        /bricks/brick3/master_b9     10.70.42.172::slave    Config Corrupted    N/A                  N/A
redeye.blr.redhat.com      master        /bricks/brick1/master_b4     10.70.42.208::slave    Config Corrupted    N/A                  N/A
redeye.blr.redhat.com      master        /bricks/brick2/master_b8     10.70.42.208::slave    Config Corrupted    N/A                  N/A
redeye.blr.redhat.com      master        /bricks/brick3/master_b12    10.70.42.208::slave    Config Corrupted    N/A                  N/A
redcloak.blr.redhat.com    master        /bricks/brick1/master_b2     10.70.42.240::slave    Config Corrupted    N/A                  N/A
redcloak.blr.redhat.com    master        /bricks/brick2/master_b6     10.70.42.240::slave    Config Corrupted    N/A                  N/A
redcloak.blr.redhat.com    master        /bricks/brick3/master_b10    10.70.42.240::slave    Config Corrupted    N/A                  N/A
redcell.blr.redhat.com     master        /bricks/brick1/master_b3     10.70.43.170::slave    Config Corrupted    N/A                  N/A
redcell.blr.redhat.com     master        /bricks/brick2/master_b7     10.70.43.170::slave    Config Corrupted    N/A                  N/A
redcell.blr.redhat.com     master        /bricks/brick3/master_b11    10.70.43.170::slave    Config Corrupted    N/A                  N/A
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Version-Release number of selected component (if applicable):
glusterfs-3.6.0.10-1.el6rhs

How reproducible:
Happens every time.

Steps to Reproduce:
1. Create and start a geo-rep mount-broker setup using the following steps (a consolidated command sketch is included under Additional info below).
2. Create a new group on the slave nodes, for example geogroup.
3. Create an unprivileged account on the slave nodes, for example geoaccount, and make it a member of geogroup on all the slave nodes.
4. Create a new directory on all the slave nodes, owned by root and with permissions 0711. Ensure that the location where this directory is created is writable only by root but that geoaccount can access it. For example, create a mountbroker-root directory at /var/mountbroker-root.
5. Add the following options to the glusterd volfile on the slave nodes (found in /etc/glusterfs/glusterd.vol), assuming the slave volume is named slavevol:
   option mountbroker-root /var/mountbroker-root
   option mountbroker-geo-replication.geoaccount slavevol
   option geo-replication-log-group geogroup
   option rpc-auth-allow-insecure on
6. Restart glusterd on all the slave nodes. Set up passwordless ssh from one of the master nodes to the user on one of the slave nodes, for example geoaccount.
7. Create the geo-rep relationship between master and slave for that user, for example:
   gluster volume geo-rep MASTERNODE geoaccount@SLAVENODE::slavevol create push-pem
8. On the slave node used to create the relationship, run /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh as root with the user name as argument, for example:
   # /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount
9. Start geo-rep with the slave user, for example:
   gluster volume geo-rep MASTERNODE geoaccount@SLAVENODE::slavevol start
10. Run "gluster volume geo-rep status" without specifying the master and slave URLs.

Actual results:
Status shows "Config Corrupted" when status is requested without the master and slave URLs.

Expected results:
It should show the proper status even when the master and slave URLs are not specified.

Additional info:
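For reference, a minimal consolidated sketch of the setup commands implied by the steps above. It assumes the hypothetical names used there (group geogroup, user geoaccount, master volume master, slave volume slavevol); SLAVENODE is a placeholder hostname, and the exact commands may differ on a real cluster.

   # --- on all slave nodes ---
   groupadd geogroup
   useradd -G geogroup geoaccount
   mkdir /var/mountbroker-root
   chmod 0711 /var/mountbroker-root
   # append to the glusterd volfile /etc/glusterfs/glusterd.vol:
   #   option mountbroker-root /var/mountbroker-root
   #   option mountbroker-geo-replication.geoaccount slavevol
   #   option geo-replication-log-group geogroup
   #   option rpc-auth-allow-insecure on
   service glusterd restart

   # --- on one master node (passwordless ssh to the unprivileged slave user) ---
   ssh-copy-id geoaccount@SLAVENODE
   gluster volume geo-replication master geoaccount@SLAVENODE::slavevol create push-pem

   # --- on the slave node used above, as root ---
   /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount

   # --- back on the master node ---
   gluster volume geo-replication master geoaccount@SLAVENODE::slavevol start
   gluster volume geo-replication status    # status without master/slave URLs hits the bug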
Looks like this affects stopping the volume after stopping the mount-broker geo-rep:

# gluster v stop master
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: master: failed: geo-replication Unable to get the status of active geo-replication session for the volume 'master'. Please check the log file for more info. Use 'force' option to ignore and stop the volume.
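Per the last line of that error message, the volume can presumably still be stopped by adding the force option until the status lookup is fixed, for example:

# gluster v stop master force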
*** Bug 1104129 has been marked as a duplicate of this bug. ***
Fix at https://code.engineering.redhat.com/gerrit/26825
Verified on the build glusterfs-3.6.0.25-1.
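A quick verification sketch on the fixed build (output omitted since it depends on the session layout): request status without specifying the master and slave URLs and confirm the STATUS column no longer shows "Config Corrupted", then check that stopping the volume no longer fails with the geo-replication status error.

# gluster volume geo-replication status
# gluster v stop master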
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html