Bug 1104121

Summary: Dist-geo-rep: In geo-rep mount-broker setup, status shows "Config Corrupted" when status is requested without master and slave url.
Product: [Red Hat Storage] Red Hat Gluster Storage Reporter: Vijaykumar Koppad <vkoppad>
Component: geo-replication Assignee: Avra Sengupta <asengupt>
Status: CLOSED ERRATA QA Contact: Bhaskar Bandari <bbandari>
Severity: medium Docs Contact:
Priority: unspecified    
Version: rhgs-3.0 CC: aavati, asengupt, avishwan, bbandari, csaba, david.macdonald, fharshav, nlevinki, nsathyan, sauchter, ssamanta
Target Milestone: ---   
Target Release: RHGS 3.0.0   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: glusterfs-3.6.0.25-1 Doc Type: Bug Fix
Doc Text:
While setting up mount-broker geo-replication, if the entire slave URL is not provided, the status will show "Config Corrupted". Workaround: Provide the entire slave URL while setting up mount-broker geo-replication.
Story Points: ---
Clone Of:
Clones: 1104649 (view as bug list) Environment:
Last Closed: 2014-09-22 19:40:17 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1087818, 1104649    

Description Vijaykumar Koppad 2014-06-03 10:37:24 UTC
Description of problem: In geo-rep mount-broker setup, status shows "Config Corrupted" when status is requested without master and slave url.

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
# gluster v geo stat

MASTER NODE                MASTER VOL    MASTER BRICK                 SLAVE                  STATUS              CHECKPOINT STATUS    CRAWL STATUS
---------------------------------------------------------------------------------------------------------------------------------------------------
redlake.blr.redhat.com     master        /bricks/brick1/master_b1     10.70.42.172::slave    Config Corrupted    N/A                  N/A
redlake.blr.redhat.com     master        /bricks/brick2/master_b5     10.70.42.172::slave    Config Corrupted    N/A                  N/A
redlake.blr.redhat.com     master        /bricks/brick3/master_b9     10.70.42.172::slave    Config Corrupted    N/A                  N/A
redeye.blr.redhat.com      master        /bricks/brick1/master_b4     10.70.42.208::slave    Config Corrupted    N/A                  N/A
redeye.blr.redhat.com      master        /bricks/brick2/master_b8     10.70.42.208::slave    Config Corrupted    N/A                  N/A
redeye.blr.redhat.com      master        /bricks/brick3/master_b12    10.70.42.208::slave    Config Corrupted    N/A                  N/A
redcloak.blr.redhat.com    master        /bricks/brick1/master_b2     10.70.42.240::slave    Config Corrupted    N/A                  N/A
redcloak.blr.redhat.com    master        /bricks/brick2/master_b6     10.70.42.240::slave    Config Corrupted    N/A                  N/A
redcloak.blr.redhat.com    master        /bricks/brick3/master_b10    10.70.42.240::slave    Config Corrupted    N/A                  N/A
redcell.blr.redhat.com     master        /bricks/brick1/master_b3     10.70.43.170::slave    Config Corrupted    N/A                  N/A
redcell.blr.redhat.com     master        /bricks/brick2/master_b7     10.70.43.170::slave    Config Corrupted    N/A                  N/A
redcell.blr.redhat.com     master        /bricks/brick3/master_b11    10.70.43.170::slave    Config Corrupted    N/A                  N/A
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
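
For reference, "gluster v geo stat" above expands to "gluster volume geo-replication status" (the gluster CLI accepts unambiguous word prefixes). A minimal sketch of the two invocations, assuming the master volume "master" and the slave session shown in the output above; the session-qualified form reflects the workaround noted in the Doc Text (an inference, since the status output drops the geoaccount@ user prefix):

    # failing form: no master volume or slave URL given
    gluster volume geo-replication status

    # session-qualified form: master volume plus the full slave URL,
    # exactly as the session was created (user, host, slave volume)
    gluster volume geo-replication master geoaccount@10.70.42.172::slave status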


Version-Release number of selected component (if applicable): glusterfs-3.6.0.10-1.el6rhs


How reproducible: Happens every time.


Steps to Reproduce:
1. Create and start a geo-rep mount-broker setup using the following steps (a consolidated shell sketch follows this list):
2. Create a new group on the slave nodes. For example, geogroup
3. Create an unprivileged account on the slave nodes. For example, geoaccount. Make it a member of geogroup on all the slave nodes.
4. Create a new directory on all the slave nodes owned by root and with permissions 0711. Ensure that the location where this directory is created is writable only by root but geoaccount is able to access it. For example, create a mountbroker-root directory at /var/mountbroker-root.
5. Add the following options to the glusterd volfile (/etc/glusterfs/glusterd.vol) on the slave nodes, assuming the slave volume is named slavevol:

    option mountbroker-root /var/mountbroker-root
    option mountbroker-geo-replication.geoaccount slavevol
    option geo-replication-log-group geogroup
    option rpc-auth-allow-insecure on
6. Restart glusterd on all the slave nodes.
7. Set up passwordless ssh from one of the master nodes to the user on one of the slave nodes, for example to geoaccount.
8. Create the geo-rep relationship between master and slave as that user,
for example: gluster volume geo-rep MASTERVOL geoaccount@SLAVENODE::slavevol create push-pem
9. On the slave node used to create the relationship, run /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh as root with the
user name as the argument. Example: # /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount
10. Start the geo-rep session as the slave user,
for example: gluster volume geo-rep MASTERVOL geoaccount@SLAVENODE::slavevol start
11. Run "gluster volume geo-rep status" (without the master volume and slave URL).
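
A minimal consolidated shell sketch of the steps above, assuming SLAVENODE as a placeholder for a slave host, the names geoaccount, geogroup, master and slavevol from this report, and ssh-keygen/ssh-copy-id as one common (not mandated) way to set up passwordless ssh:

    # --- Steps 2-4: on every slave node, as root ---
    groupadd geogroup
    useradd -m -G geogroup geoaccount
    mkdir /var/mountbroker-root
    chmod 0711 /var/mountbroker-root

    # --- Step 5: append the four "option ..." lines shown above to
    # --- /etc/glusterfs/glusterd.vol on every slave node, then:

    # --- Step 6: restart glusterd on all slave nodes ---
    service glusterd restart

    # --- Step 7: passwordless ssh from one master node to geoaccount ---
    ssh-keygen
    ssh-copy-id geoaccount@SLAVENODE

    # --- Steps 8-10: create the session, set the pem keys, start it ---
    gluster volume geo-replication master geoaccount@SLAVENODE::slavevol create push-pem
    # on the slave node used above, as root:
    /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount
    gluster volume geo-replication master geoaccount@SLAVENODE::slavevol start

    # --- Step 11: the invocation that triggers "Config Corrupted" ---
    gluster volume geo-replication status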


Actual results: Status shows "Config Corrupted" when status is requested without the master and slave URL.


Expected results: It should show the proper status even when the master and slave URL are not specified.


Additional info:

Comment 2 Vijaykumar Koppad 2014-06-04 09:33:45 UTC
Looks like this also affects stopping the volume after stopping the mount-broker geo-rep session:

# gluster v stop master
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: master: failed: geo-replication Unable to get the status of active geo-replication session for the volume 'master'.
Please check the log file for more info. Use 'force' option to ignore and stop the volume.
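
As the quoted error text itself suggests, the volume can still be stopped in the interim by adding the force option (a sketch; this bypasses the failing session status check):

    gluster volume stop master force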

Comment 3 Kotresh HR 2014-06-12 11:51:20 UTC
*** Bug 1104129 has been marked as a duplicate of this bug. ***

Comment 4 Avra Sengupta 2014-06-13 06:13:11 UTC
Fix at https://code.engineering.redhat.com/gerrit/26825

Comment 7 Vijaykumar Koppad 2014-07-23 10:13:10 UTC
Verified on the build glusterfs-3.6.0.25-1.

Comment 11 errata-xmlrpc 2014-09-22 19:40:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html