Bug 1230269 - [SELinux]: [geo-rep]: RHEL7.1 can not initialize the geo-rep session between master and slave volume, Permission Denied
Summary: [SELinux]: [geo-rep]: RHEL7.1 can not initialize the geo-rep session between ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Kotresh HR
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On: 1230369 1232755
Blocks: 1202842 1212796 1223636
 
Reported: 2015-06-10 14:11 UTC by Rahul Hinduja
Modified: 2015-07-29 09:39 UTC
CC List: 11 users

Fixed In Version: glusterfs-3.7.1-5
Doc Type: Bug Fix
Doc Text:
Previously, initializing a geo-replication session between a master cluster and a slave cluster failed when SELinux was in enforcing mode. This update modifies how the ssh-keygen service handles the /var/lib/glusterd/geo-replication/secret.pem file, and the geo-replication session is now initialized successfully.
Clone Of:
: 1230369 (view as bug list)
Environment:
Last Closed: 2015-07-29 05:01:25 UTC
Embargoed:




Links
System ID: Red Hat Product Errata RHSA-2015:1495
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Important: Red Hat Gluster Storage 3.1 update
Last Updated: 2015-07-29 08:26:26 UTC

Description Rahul Hinduja 2015-06-10 14:11:21 UTC
Description of problem:
=======================

On RHEL 7.1, the CLI command to start the geo-rep session succeeds, but the status always shows "Created", as follows:

[root@rhsqe-vm01 ~]# gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave start
Starting geo-replication session between master & rhsqe-vm03.lab.eng.blr.redhat.com::slave has been successful
[root@rhsqe-vm01 ~]# gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave status
 
MASTER NODE                          MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                                       SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED          
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
rhsqe-vm01.lab.eng.blr.redhat.com    master        /rhs/brick1/b1    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
rhsqe-vm01.lab.eng.blr.redhat.com    master        /rhs/brick2/b2    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
rhsqe-vm02.lab.eng.blr.redhat.com    master        /rhs/brick1/b1    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
rhsqe-vm02.lab.eng.blr.redhat.com    master        /rhs/brick2/b2    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
[root@rhsqe-vm01 ~]# date
Wed Jun 10 19:51:08 IST 2015
[root@rhsqe-vm01 ~]# date
Wed Jun 10 19:51:34 IST 2015
[root@rhsqe-vm01 ~]# gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave status
 
MASTER NODE                          MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                                       SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED          
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
rhsqe-vm01.lab.eng.blr.redhat.com    master        /rhs/brick1/b1    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
rhsqe-vm01.lab.eng.blr.redhat.com    master        /rhs/brick2/b2    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
rhsqe-vm02.lab.eng.blr.redhat.com    master        /rhs/brick1/b1    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
rhsqe-vm02.lab.eng.blr.redhat.com    master        /rhs/brick2/b2    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
[root@rhsqe-vm01 ~]#

audit.log shows many denials during this operation:
====================================================

am_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=LOGIN msg=audit(1433946301.575:546): pid=19747 uid=0 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 old-auid=4294967295 auid=0 old-ses=4294967295 ses=10 res=1
type=USER_START msg=audit(1433946301.616:547): pid=19747 uid=0 auid=0 ses=10 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_loginuid,pam_keyinit,pam_limits,pam_systemd acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_REFR msg=audit(1433946301.617:548): pid=19747 uid=0 auid=0 ses=10 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_DISP msg=audit(1433946301.641:549): pid=19747 uid=0 auid=0 ses=10 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=USER_END msg=audit(1433946301.647:550): pid=19747 uid=0 auid=0 ses=10 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_loginuid,pam_keyinit,pam_limits,pam_systemd acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=AVC msg=audit(1433946302.683:551): avc:  denied  { getattr } for  pid=13365 comm="glusterd" path="/dev/random" dev="devtmpfs" ino=1032 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:random_device_t:s0 tclass=chr_file
type=SYSCALL msg=audit(1433946302.683:551): arch=c000003e syscall=4 success=no exit=-13 a0=7f7bdad82ba4 a1=7f7bb8412120 a2=7f7bb8412120 a3=d items=0 ppid=1 pid=13365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1433946304.400:552): avc:  denied  { create } for  pid=19816 comm="python" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:glusterd_t:s0 tclass=rawip_socket
type=SYSCALL msg=audit(1433946304.400:552): arch=c000003e syscall=41 success=no exit=-13 a0=2 a1=3 a2=1 a3=0 items=0 ppid=1 pid=19816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="python" exe="/usr/bin/python2.7" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1433946309.068:553): avc:  denied  { getattr } for  pid=13364 comm="glusterd" path="/dev/random" dev="devtmpfs" ino=1032 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:random_device_t:s0 tclass=chr_file
type=SYSCALL msg=audit(1433946309.068:553): arch=c000003e syscall=4 success=no exit=-13 a0=7f7bdad82ba4 a1=7f7bb8210030 a2=7f7bb8210030 a3=a items=0 ppid=1 pid=13364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1433946319.259:554): avc:  denied  { getattr } for  pid=13365 comm="glusterd" path="/dev/random" dev="devtmpfs" ino=1032 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:random_device_t:s0 tclass=chr_file
type=SYSCALL msg=audit(1433946319.259:554): arch=c000003e syscall=4 success=no exit=-13 a0=7f7bdad82ba4 a1=7f7bb8210140 a2=7f7bb8210140 a3=a items=0 ppid=1 pid=13365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1433946319.272:555): avc:  denied  { getattr } for  pid=13364 comm="glusterd" path="/dev/random" dev="devtmpfs" ino=1032 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:random_device_t:s0 tclass=chr_file
type=SYSCALL msg=audit(1433946319.272:555): arch=c000003e syscall=4 success=no exit=-13 a0=7f7bdad82ba4 a1=7f7bb8210140 a2=7f7bb8210140 a3=a items=0 ppid=1 pid=13364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1433946319.294:556): avc:  denied  { getattr } for  pid=13365 comm="glusterd" path="/dev/random" dev="devtmpfs" ino=1032 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:random_device_t:s0 tclass=chr_file
type=SYSCALL msg=audit(1433946319.294:556): arch=c000003e syscall=4 success=no exit=-13 a0=7f7bdad82ba4 a1=7f7bb8210140 a2=7f7bb8210140 a3=a items=0 ppid=1 pid=13365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)


[root@rhsqe-vm01 ~]# cat /var/log/audit/audit.log |audit2allow 


#============= glusterd_t ==============
allow glusterd_t fsadm_exec_t:file execute;
allow glusterd_t glusterd_var_lib_t:file execute;

#!!!! This avc can be allowed using the boolean 'authlogin_nsswitch_use_ldap'
allow glusterd_t random_device_t:chr_file getattr;
allow glusterd_t self:rawip_socket create;
allow glusterd_t ssh_keygen_exec_t:file execute;
[root@rhsqe-vm01 ~]# 
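
For reference, denials like these can be collected into a temporary local policy module with the standard audit2allow/semodule tooling. This is only a diagnostic workaround, not the fix, and the module name below is just an example:

# Build a local module from the glusterd_t denials and load it; remove it again
# with "semodule -r glusterd_georep_local" once a fixed selinux-policy build is installed.
grep glusterd_t /var/log/audit/audit.log | audit2allow -M glusterd_georep_local
semodule -i glusterd_georep_local.pp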


[root@rhsqe-vm01 ~]# rpm -qa | grep gluster
glusterfs-client-xlators-3.7.1-1.el7rhgs.x86_64
glusterfs-cli-3.7.1-1.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-1.el7rhgs.x86_64
glusterfs-rdma-3.7.1-1.el7rhgs.x86_64
vdsm-gluster-4.16.16-1.3.el7rhgs.noarch
glusterfs-libs-3.7.1-1.el7rhgs.x86_64
glusterfs-fuse-3.7.1-1.el7rhgs.x86_64
glusterfs-server-3.7.1-1.el7rhgs.x86_64
glusterfs-api-3.7.1-1.el7rhgs.x86_64
glusterfs-3.7.1-1.el7rhgs.x86_64
glusterfs-debuginfo-3.7.1-1.el7rhgs.x86_64


[root@rhsqe-vm01 ~]# rpm -qa | grep selinux
selinux-policy-3.13.1-25.el7.noarch
libselinux-utils-2.2.2-6.el7.x86_64
libselinux-python-2.2.2-6.el7.x86_64
selinux-policy-targeted-3.13.1-25.el7.noarch
libselinux-2.2.2-6.el7.x86_64
[root@rhsqe-vm01 ~]# 
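
A quick way to confirm that SELinux, rather than file permissions, is what blocks the workers is to cycle the session once in permissive mode. This is purely a check (assuming nothing else changed on the nodes), not a recommended workaround:

getenforce                 # expected: Enforcing
setenforce 0               # temporarily permissive
gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave stop
gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave start
gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave status   # workers should move past "Created"
setenforce 1               # back to enforcing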

Steps to Reproduce:
====================
1. Create Master and Slave cluster
2. Create/Start Master and Slave volume
3. Create and Start meta volume
4. Create geo-rep session between master and slave volume
5. Start the geo-rep session (see the command sketch below).
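
Command-level sketch of steps 4-5, using the volume and host names from the output above; the use_meta_volume setting is an assumption based on the usual meta-volume setup and may differ from the exact commands run here:

gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave create push-pem
gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave config use_meta_volume true
gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave start
gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave status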

Actual results:
===============

The start command succeeds, but the status shows only "Created".

Additional info:
================

Will attach sosreports and the audit.log file.

Comment 2 Rahul Hinduja 2015-06-10 14:25:50 UTC
The geo-rep logs show "Permission denied":

error: [Errno 13] Permission denied
[2015-06-10 19:47:19.11940] I [syncdutils(monitor):220:finalize] <top>: exiting.
[2015-06-10 19:50:32.499176] I [monitor(monitor):362:distribute] <top>: slave bricks: [{'host': 'rhsqe-vm03.lab.eng.blr.redhat.com', 'dir': '/rhs/brick1/b1'}, {'host': 'rhsqe-vm04.lab.eng.blr.redhat.com', 'dir': '/rhs/brick1/b1'}, {'host': 'rhsqe-vm03.lab.eng.blr.redhat.com', 'dir': '/rhs/brick2/b2'}, {'host': 'rhsqe-vm04.lab.eng.blr.redhat.com', 'dir': '/rhs/brick2/b2'}]
[2015-06-10 19:50:32.503000] E [syncdutils(monitor):276:log_raise_exception] <top>: FAIL: 
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 165, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 647, in main_i
    return monitor(*rscs)
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 395, in monitor
    return Monitor().multiplex(*distribute(*resources))
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 382, in distribute
    if is_host_local(brick['host'])]
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 402, in is_host_local
    s = socket.socket(ai[0], socket.SOCK_RAW, socket.IPPROTO_ICMP)
  File "/usr/lib64/python2.7/socket.py", line 187, in __init__
    _sock = _realsocket(family, type, proto)
error: [Errno 13] Permission denied
[2015-06-10 19:50:32.505733] I [syncdutils(monitor):220:finalize] <top>: exiting.
[2015-06-10 19:55:04.398649] I [monitor(monitor):362:distribute] <top>: slave bricks: [{'host': 'rhsqe-vm03.lab.eng.blr.redhat.com', 'dir': '/rhs/brick1/b1'}, {'host': 'rhsqe-vm04.lab.eng.blr.redhat.com', 'dir': '/rhs/brick1/b1'}, {'host': 'rhsqe-vm03.lab.eng.blr.redhat.com', 'dir': '/rhs/brick2/b2'}, {'host': 'rhsqe-vm04.lab.eng.blr.redhat.com', 'dir': '/rhs/brick2/b2'}]
[2015-06-10 19:55:04.401369] E [syncdutils(monitor):276:log_raise_exception] <top>: FAIL: 
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 165, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 647, in main_i
    return monitor(*rscs)
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 395, in monitor
    return Monitor().multiplex(*distribute(*resources))
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 382, in distribute
    if is_host_local(brick['host'])]
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 402, in is_host_local
    s = socket.socket(ai[0], socket.SOCK_RAW, socket.IPPROTO_ICMP)
  File "/usr/lib64/python2.7/socket.py", line 187, in __init__
    _sock = _realsocket(family, type, proto)
error: [Errno 13] Permission denied
[2015-06-10 19:55:04.403840] I [syncdutils(monitor):220:finalize] <top>: exiting.
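
The traceback pins the failure to is_host_local() in syncdutils.py, which opens a raw ICMP socket while checking whether a brick host is local; under the glusterd_t domain that requires the rawip_socket create permission denied in the AVC from the description. One way to correlate the Python EPERM with the audit records, using standard auditd tooling (not part of the original logs):

ausearch -m avc -c python      # the rawip_socket "create" denial hit by the monitor
ausearch -m avc -c glusterd    # the /dev/random "getattr" denials from glusterd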

Comment 6 Miroslav Grepl 2015-06-15 16:15:06 UTC
Should be fixed in selinux-policy-targeted-3.13.1-27.el7.noarch
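
A possible check that a node actually carries at least that build; the sesearch query is only an assumption about one rule the updated policy might add (the exact policy change is not shown in this bug) and requires setools-console:

rpm -q selinux-policy-targeted
# expect selinux-policy-targeted-3.13.1-27.el7 or later
sesearch --allow -s glusterd_t -t ssh_keygen_exec_t -c file -p execute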

Comment 10 Rahul Hinduja 2015-07-06 08:42:30 UTC
Verified with the build: 

gluster: glusterfs-libs-3.7.1-7.el7rhgs.x86_64
selinux: selinux-policy-3.13.1-30.el7.noarch

Able to successfully create and start the geo-rep session with SELinux in enforcing mode.

No permission errors in the geo-rep logs:

[root@rhsqe-vm01 scripts]# grep -i "permission" /var/log/glusterfs/geo-replication/master/ssh%3A%2F%2Froot%4010.70.41.103%3Agluster%3A%2F%2F127.0.0.1%3Aslave.log 
[root@rhsqe-vm01 scripts]# 

[root@rhsqe-vm01 scripts]# cat /var/log/audit/audit.log|audit2allow

#============= glusterd_t ==============
allow glusterd_t showmount_exec_t:file execute;
[root@rhsqe-vm01 scripts]# 

No python or geo-rep AVCs were logged. The showmount_exec AVC is known and tracked separately. Moving the bug to the verified state.
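
Supporting checks that can be run alongside the greps above, using standard tooling (these are not part of the original verification notes):

getenforce                      # should report Enforcing
ausearch -m avc -ts today       # expect only the known showmount-related denial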

Comment 12 errata-xmlrpc 2015-07-29 05:01:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

