Bug 1104121 - Dist-geo-rep: In geo-rep mount-broker setup, status shows "Config Corrupted" when status is requested without master and slave url.
Summary: Dist-geo-rep: In geo-rep mount-broker setup, status shows "Config Corrupted" ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.0.0
Assignee: Avra Sengupta
QA Contact: Bhaskar Bandari
URL:
Whiteboard:
Duplicates: 1104129
Depends On:
Blocks: 1087818 1104649
 
Reported: 2014-06-03 10:37 UTC by Vijaykumar Koppad
Modified: 2018-12-09 17:55 UTC
CC List: 11 users

Fixed In Version: glusterfs-3.6.0.25-1
Doc Type: Bug Fix
Doc Text:
While setting up mount-broker geo-replication, if the entire slave URL is not provided, the status will show "Config Corrupted". Workaround: Provide the entire slave URL while setting up mount-broker geo-replication.
Clone Of:
Clones: 1104649
Environment:
Last Closed: 2014-09-22 19:40:17 UTC
Embargoed:




Links:
  Red Hat Knowledge Base (Solution) 1164213
  Red Hat Product Errata RHEA-2014:1278 (SHIPPED_LIVE): Red Hat Storage Server 3.0 bug fix and enhancement update, 2014-09-22 23:26:55 UTC

Description Vijaykumar Koppad 2014-06-03 10:37:24 UTC
Description of problem: In a geo-rep mount-broker setup, status shows "Config Corrupted" when status is requested without the master and slave URLs.

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
# gluster v geo stat

MASTER NODE                MASTER VOL    MASTER BRICK                 SLAVE                  STATUS              CHECKPOINT STATUS    CRAWL STATUS
---------------------------------------------------------------------------------------------------------------------------------------------------
redlake.blr.redhat.com     master        /bricks/brick1/master_b1     10.70.42.172::slave    Config Corrupted    N/A                  N/A
redlake.blr.redhat.com     master        /bricks/brick2/master_b5     10.70.42.172::slave    Config Corrupted    N/A                  N/A
redlake.blr.redhat.com     master        /bricks/brick3/master_b9     10.70.42.172::slave    Config Corrupted    N/A                  N/A
redeye.blr.redhat.com      master        /bricks/brick1/master_b4     10.70.42.208::slave    Config Corrupted    N/A                  N/A
redeye.blr.redhat.com      master        /bricks/brick2/master_b8     10.70.42.208::slave    Config Corrupted    N/A                  N/A
redeye.blr.redhat.com      master        /bricks/brick3/master_b12    10.70.42.208::slave    Config Corrupted    N/A                  N/A
redcloak.blr.redhat.com    master        /bricks/brick1/master_b2     10.70.42.240::slave    Config Corrupted    N/A                  N/A
redcloak.blr.redhat.com    master        /bricks/brick2/master_b6     10.70.42.240::slave    Config Corrupted    N/A                  N/A
redcloak.blr.redhat.com    master        /bricks/brick3/master_b10    10.70.42.240::slave    Config Corrupted    N/A                  N/A
redcell.blr.redhat.com     master        /bricks/brick1/master_b3     10.70.43.170::slave    Config Corrupted    N/A                  N/A
redcell.blr.redhat.com     master        /bricks/brick2/master_b7     10.70.43.170::slave    Config Corrupted    N/A                  N/A
redcell.blr.redhat.com     master        /bricks/brick3/master_b11    10.70.43.170::slave    Config Corrupted    N/A                  N/A
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
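By contrast, naming the session explicitly is expected to return a proper status. A sketch of that invocation, assuming the master volume "master" and the mount-broker slave session from the output above (the geoaccount user name is taken from the reproduction steps below):

# gluster volume geo-rep master geoaccount@10.70.42.172::slave status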


Version-Release number of selected component (if applicable): glusterfs-3.6.0.10-1.el6rhs


How reproducible: Happens every time.


Steps to Reproduce:
1. Create and start a geo-rep mount-broker setup using the following steps (a consolidated sketch of the slave-side preparation follows this list).
2. Create a new group on the slave nodes. For example, geogroup.
3. Create an unprivileged account on the slave nodes. For example, geoaccount. Make it a member of geogroup on all the slave nodes.
4. Create a new directory on all the slave nodes, owned by root and with permissions 0711. Ensure that the location where this directory is created is writable only by root, but that geoaccount is able to access it. For example, create a mountbroker-root directory at /var/mountbroker-root.
5. Add the following options to the glusterd volfile on the slave nodes (found at /etc/glusterfs/glusterd.vol), assuming the slave volume is named slavevol:

    option mountbroker-root /var/mountbroker-root
    option mountbroker-geo-replication.geoaccount slavevol
    option geo-replication-log-group geogroup
    option rpc-auth-allow-insecure on
6. Restart glusterd on all the slave nodes.
7. Set up passwordless SSH from one of the master nodes to the unprivileged user on one of the slave nodes. For example, to geoaccount.
8. Create the geo-rep relationship between the master and slave for that user.
For example: gluster volume geo-rep MASTERVOL geoaccount@SLAVENODE::slavevol create push-pem
9. On the slave node that was used to create the relationship, run /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh as root with the user name as the argument. Ex: # /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount
10. Start the geo-rep session with the slave user.
Ex: gluster volume geo-rep MASTERVOL geoaccount@SLAVENODE::slavevol start
11. Run "gluster volume geo-rep status"
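For reference, the slave-side preparation in steps 2 through 6 can be scripted roughly as follows. This is a sketch only: the group, user, path, and volume names are the examples from the steps above, and the glusterd.vol options must be placed inside the "volume management" block by hand, since blindly appending them would produce an invalid volfile.

    # Run as root on every slave node.
    groupadd geogroup                         # step 2: slave-side group
    useradd -m -G geogroup geoaccount         # step 3: unprivileged account
    mkdir /var/mountbroker-root               # step 4: mountbroker root dir
    chmod 0711 /var/mountbroker-root          #         root-only writable
    # Step 5: manually add the four "option ..." lines shown above inside
    # the "volume management" block of /etc/glusterfs/glusterd.vol.
    service glusterd restart                  # step 6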


Actual results: Status shows "Config Corrupted" when status is requested without the master and slave URLs.


Expected results: It should show the proper status even when the master and slave URLs are not specified.


Additional info:

Comment 2 Vijaykumar Koppad 2014-06-04 09:33:45 UTC
Looks like this also affects stopping the volume after stopping the mount-broker geo-rep session:

# gluster v stop master
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: master: failed: geo-replication Unable to get the status of active geo-replication session for the volume 'master'.
Please check the log file for more info. Use 'force' option to ignore and stop the volume.
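As the error text itself suggests, the volume can still be stopped by appending force, which skips the geo-rep status check (a workaround only; the failing status lookup is what this bug is about):

# gluster v stop master force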

Comment 3 Kotresh HR 2014-06-12 11:51:20 UTC
*** Bug 1104129 has been marked as a duplicate of this bug. ***

Comment 4 Avra Sengupta 2014-06-13 06:13:11 UTC
Fix at https://code.engineering.redhat.com/gerrit/26825

Comment 7 Vijaykumar Koppad 2014-07-23 10:13:10 UTC
Verified on the build glusterfs-3.6.0.25-1.

Comment 11 errata-xmlrpc 2014-09-22 19:40:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html

