Bug 1180459 - Dist-geo-rep : In geo-rep mount-broker setup, the status doesn't show which user the geo-rep relationship is established with on the slave.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: Aravinda VK
QA Contact:
URL:
Whiteboard:
Depends On: 1104112
Blocks: 1087818
 
Reported: 2015-01-09 09:35 UTC by Aravinda VK
Modified: 2015-05-14 17:35 UTC
CC List: 14 users

Fixed In Version: glusterfs-3.7.0beta1
Clone Of: 1104112
Environment:
Last Closed: 2015-05-14 17:26:21 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Aravinda VK 2015-01-09 09:35:37 UTC
+++ This bug was initially created as a clone of Bug #1104112 +++

Description of problem: In a geo-rep mount-broker setup, the status output does not show which slave user the geo-rep relationship is established with.

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
# gluster v geo master geoaccount@10.70.43.170::slave status
 
MASTER NODE                MASTER VOL    MASTER BRICK                 SLAVE                  STATUS     CHECKPOINT STATUS    CRAWL STATUS           
---------------------------------------------------------------------------------------------------------------------------------------------
redlake.blr.redhat.com     master        /bricks/brick1/master_b1     10.70.42.172::slave    Active     N/A                  Changelog Crawl        
redlake.blr.redhat.com     master        /bricks/brick2/master_b5     10.70.42.172::slave    Active     N/A                  Changelog Crawl        
redlake.blr.redhat.com     master        /bricks/brick3/master_b9     10.70.42.172::slave    Active     N/A                  Changelog Crawl        
redeye.blr.redhat.com      master        /bricks/brick1/master_b4     10.70.42.208::slave    Passive    N/A                  N/A                    
redeye.blr.redhat.com      master        /bricks/brick2/master_b8     10.70.42.208::slave    Passive    N/A                  N/A                    
redeye.blr.redhat.com      master        /bricks/brick3/master_b12    10.70.42.208::slave    Passive    N/A                  N/A                    
redcell.blr.redhat.com     master        /bricks/brick1/master_b3     10.70.43.170::slave    Active     N/A                  Changelog Crawl        
redcell.blr.redhat.com     master        /bricks/brick2/master_b7     10.70.43.170::slave    Active     N/A                  Changelog Crawl        
redcell.blr.redhat.com     master        /bricks/brick3/master_b11    10.70.43.170::slave    Active     N/A                  Changelog Crawl        
redcloak.blr.redhat.com    master        /bricks/brick1/master_b2     10.70.42.240::slave    Passive    N/A                  N/A                    
redcloak.blr.redhat.com    master        /bricks/brick2/master_b6     10.70.42.240::slave    Passive    N/A                  N/A                    
redcloak.blr.redhat.com    master        /bricks/brick3/master_b10    10.70.42.240::slave    Passive    N/A                  N/A          
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

In the above example, the geo-rep relationship is established with the user geoaccount on the slave, but the status output does not show this. 


Version-Release number of selected component (if applicable): glusterfs-3.6.0.10-1.el6rhs
 

How reproducible: Happens every time


Steps to Reproduce:
1. Create and start a geo-rep mount-broker setup, using the following steps (a consolidated command sketch follows after step 10),
2. Create a new group on the slave nodes. For example, geogroup
3. Create an unprivileged account on the slave nodes. For example, geoaccount. Make it a member of geogroup on all the slave nodes.
4. Create a new directory on all the slave nodes, owned by root and with permissions 0711. Ensure that the location where this directory is created is writable only by root, but that geoaccount is able to access it. For example, create a mountbroker-root directory at /var/mountbroker-root.
5. Add the following options to the glusterd volfile on the slave nodes (found in /etc/glusterfs/glusterd.vol), assuming the slave volume is named slavevol:

    option mountbroker-root /var/mountbroker-root
    option mountbroker-geo-replication.geoaccount slavevol
    option geo-replication-log-group geogroup
    option rpc-auth-allow-insecure on
6. Restart glusterd on all the slave nodes.
Set up password-less SSH from one of the master nodes to the unprivileged user on one of the slave nodes, for example to geoaccount.
7. Create the geo-rep relationship between master and slave as that user,
for ex: gluster volume geo-rep MASTERVOL geoaccount@SLAVENODE::slavevol create push-pem
8. On the slave node used to create the relationship, run /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh as root with the
user name as the argument. Ex: # /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount
9. Start the geo-rep session as the slave user.
Ex: gluster volume geo-rep MASTERVOL geoaccount@SLAVENODE::slavevol start

10. Check the status: "gluster volume geo-rep MASTERVOL geoaccount@SLAVENODE::slavevol status"
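
For reference, a condensed, hypothetical sketch of steps 2-10 above. The host name slave1, the master volume name mastervol and the slave volume name slavevol are placeholders; the mountbroker options themselves go into /etc/glusterfs/glusterd.vol exactly as listed in step 5.

    # On all slave nodes, as root: create the unprivileged group and user
    groupadd geogroup
    useradd -G geogroup geoaccount

    # On all slave nodes: mountbroker root, writable only by root,
    # mode 0711 so geoaccount can traverse it
    mkdir -p /var/mountbroker-root
    chmod 0711 /var/mountbroker-root

    # After adding the mountbroker options from step 5 to
    # /etc/glusterfs/glusterd.vol, restart glusterd on all slave nodes
    service glusterd restart

    # On one master node: password-less SSH to the slave user
    ssh-copy-id geoaccount@slave1

    # Create the session as geoaccount and push the pem keys
    gluster volume geo-replication mastervol geoaccount@slave1::slavevol create push-pem

    # On slave1, as root: distribute the pem keys for geoaccount
    /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh geoaccount

    # Back on the master node: start the session and check the status
    gluster volume geo-replication mastervol geoaccount@slave1::slavevol start
    gluster volume geo-replication mastervol geoaccount@slave1::slavevol status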


Actual results: The status output does not show which slave user the geo-rep session is established with. 


Expected results: The status output should show which slave user the session is connected as. 


Additional info:

Comment 1 Anand Avati 2015-01-09 09:39:37 UTC
REVIEW: http://review.gluster.org/9409 (geo-rep: Adding Slave user field to georep status) posted (#2) for review on master by Aravinda VK (avishwan)

Comment 2 Anand Avati 2015-01-21 17:36:38 UTC
REVIEW: http://review.gluster.org/9409 (geo-rep: Adding Slave user field to georep status) posted (#3) for review on master by Aravinda VK (avishwan)

Comment 3 Anand Avati 2015-01-28 08:49:39 UTC
REVIEW: http://review.gluster.org/9409 (geo-rep: Adding Slave user field to georep status) posted (#4) for review on master by Aravinda VK (avishwan)

Comment 4 Anand Avati 2015-02-02 20:19:00 UTC
REVIEW: http://review.gluster.org/9409 (geo-rep: Adding Slave user field to georep status) posted (#163) for review on master by Vijay Bellur (vbellur)

Comment 5 Anand Avati 2015-02-02 20:19:10 UTC
COMMIT: http://review.gluster.org/9409 committed in master by Vijay Bellur (vbellur) 
------
commit 52765ad94f233e4b4e57515e49f5b7c9e7c08b43
Author: Aravinda VK <avishwan>
Date:   Wed Jan 7 21:56:58 2015 +0530

    geo-rep: Adding Slave user field to georep status
    
    New column introduced in Status output, "SLAVE USER",
    Slave user is not "root" in non root Geo-replication setup.
    
    Added additional tag in XML output <slave_user>
    
    BUG: 1180459
    Change-Id: Ia48a5a8eb892ce883b9ec114be7bb2d46eff8535
    Signed-off-by: Aravinda VK <avishwan>
    Reviewed-on: http://review.gluster.org/9409
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Kotresh HR <khiremat>
    Reviewed-by: Avra Sengupta <asengupt>
    Reviewed-by: Venky Shankar <vshankar>
    Tested-by: Venky Shankar <vshankar>
    Reviewed-by: Vijay Bellur <vbellur>
    Tested-by: Vijay Bellur <vbellur>
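
With this change, the status output carries a new SLAVE USER column and the XML output a <slave_user> tag, as stated in the commit message. A hypothetical rendering, reusing the node names from the description (column order, spacing and the trailing columns are illustrative only):

    # gluster volume geo-replication master geoaccount@10.70.43.170::slave status

    MASTER NODE               MASTER VOL    MASTER BRICK                SLAVE USER    SLAVE                  STATUS     ...
    -----------------------------------------------------------------------------------------------------------------------
    redlake.blr.redhat.com    master        /bricks/brick1/master_b1    geoaccount    10.70.42.172::slave    Active     ...

    # gluster volume geo-replication master geoaccount@10.70.43.170::slave status --xml
    ...
    <slave_user>geoaccount</slave_user>
    ...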

Comment 6 Niels de Vos 2015-05-14 17:26:21 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user


