Bug 1235360 - [geo-rep]: Mountbroker setup goes to Faulty with ssh 'Permission Denied' Errors
Summary: [geo-rep]: Mountbroker setup goes to Faulty with ssh 'Permission Denied' Errors
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Kotresh HR
QA Contact:
URL:
Whiteboard:
Depends On: 1235359
Blocks:
 
Reported: 2015-06-24 15:17 UTC by Kotresh HR
Modified: 2015-07-30 09:48 UTC (History)
2 users

Fixed In Version: glusterfs-3.7.3
Clone Of: 1235359
Environment:
Last Closed: 2015-07-30 09:48:56 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kotresh HR 2015-06-24 15:17:18 UTC
+++ This bug was initially created as a clone of Bug #1235359 +++

Description of problem:
In a Geo-Rep mountbroker setup, workers go to Faulty with ssh 'Permission Denied'
errors.

Version-Release number of selected component (if applicable):
mainline

How reproducible:


Steps to Reproduce:
1. Setup master volume and start
2. Setup slave volume and start
3. Follow mountbroker setup documentation and create geo-rep session.
4. Start geo-rep session
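
The steps above can be sketched as follows. All names are illustrative (volumes "mastervol"/"slavevol", unprivileged slave user "geoaccount", hosts "master1"/"slave1"); they follow the mountbroker documentation for this release line, not anything mandated by the bug itself.

```shell
# On the slave cluster: register the mountbroker root directory and the
# unprivileged user/volume pair (illustrative paths and names).
gluster system:: execute mountbroker opt mountbroker-root /var/mountbroker-root
gluster system:: execute mountbroker user geoaccount slavevol

# On the master cluster: generate the common ssh keys, then create and
# start the geo-rep session against the unprivileged slave user.
gluster system:: execute gsec_create
gluster volume geo-replication mastervol geoaccount@slave1::slavevol create push-pem
gluster volume geo-replication mastervol geoaccount@slave1::slavevol start
```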

Actual results:
Session goes to Faulty

Expected results:
Session should not go to faulty

Additional info:

Comment 1 Anand Avati 2015-06-24 15:18:58 UTC
REVIEW: http://review.gluster.org/11384 (geo-rep: Fix ssh issue in geo-rep) posted (#1) for review on release-3.7 by Kotresh HR (khiremat)

Comment 2 Anand Avati 2015-06-26 12:46:12 UTC
COMMIT: http://review.gluster.org/11384 committed in release-3.7 by Venky Shankar (vshankar) 
------
commit ded677270c13e10c60793e7f0f29b2538beccfab
Author: Kotresh HR <khiremat>
Date:   Wed Jun 24 20:00:11 2015 +0530

    geo-rep: Fix ssh issue in geo-rep
    
    In a geo-rep mountbroker setup, workers fail with
    'Permission Denied' even though the public keys
    are shared to all the slave nodes. The issue
    is the selinux context not being set for .ssh
    and .ssh/authorized_keys. Doing restorecon on
    these entries to set the default selinux security
    context fixes the issue.
    
    BUG: 1235360
    Reviewed-On: http://review.gluster.org/11383
    Change-Id: Idd78e4b183bbeb196d00ee3d521c56d0985ca5d9
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/11384
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Venky Shankar <vshankar>
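
The effect of the fix can be sketched as the commands below, run on each slave node as root. The user name "geoaccount" is an illustrative mountbroker account, not part of the patch; restorecon resets the files to the SELinux policy default context, which sshd requires before it will trust authorized_keys.

```shell
# Restore the default SELinux context on the unprivileged geo-rep user's
# ssh directory and key file (labels were wrong, so sshd denied access).
restorecon -v /home/geoaccount/.ssh
restorecon -v /home/geoaccount/.ssh/authorized_keys

# Inspect the restored labels (expected type: ssh_home_t under the
# targeted policy).
ls -Z /home/geoaccount/.ssh
```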

Comment 3 Kaushal 2015-07-30 09:48:56 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.3, please open a new bug report.

glusterfs-3.7.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12078
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

