Bug 1230369

Summary: [SELinux]: [geo-rep]: SELinux policy updates required in RHEL-7.1 for geo-rep
Product: Red Hat Enterprise Linux 7 Reporter: Prasanth <pprakash>
Component: selinux-policy    Assignee: Miroslav Grepl <mgrepl>
Status: CLOSED ERRATA QA Contact: Milos Malik <mmalik>
Severity: urgent Docs Contact:
Priority: unspecified    
Version: 7.1    CC: chrisw, csaba, jherrman, jkurik, lvrabec, mgrepl, mmalik, nlevinki, plautrba, pprakash, pvrabec, rhinduja, rhs-bugs, ssekidde, storage-qa-internal
Target Milestone: rc    Keywords: ZStream
Target Release: ---   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: selinux-policy-3.13.1-29.el7 Doc Type: Bug Fix
Doc Text:
Previously, initializing a geo-replication session between a master cluster and a slave cluster failed when SELinux was in enforcing mode. This update modifies how the ssh-keygen service handles the /var/lib/glusterd/geo-replication/secret.pem file, and the geo-replication session is now initialized successfully.
Story Points: ---
Clone Of: 1230269
: 1232755 (view as bug list) Environment:
Last Closed: 2015-11-19 10:36:27 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1212796, 1223636, 1230269, 1232755    

Description Prasanth 2015-06-10 17:50:13 UTC
+++ This bug was initially created as a clone of Bug #1230269 +++

Description of problem:
=======================

On RHEL 7.1, the CLI command to start the geo-rep session is successful, but the status always shows "Created", as follows:

[root@rhsqe-vm01 ~]# gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave start
Starting geo-replication session between master & rhsqe-vm03.lab.eng.blr.redhat.com::slave has been successful
[root@rhsqe-vm01 ~]# gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave status
 
MASTER NODE                          MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                                       SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED          
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
rhsqe-vm01.lab.eng.blr.redhat.com    master        /rhs/brick1/b1    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
rhsqe-vm01.lab.eng.blr.redhat.com    master        /rhs/brick2/b2    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
rhsqe-vm02.lab.eng.blr.redhat.com    master        /rhs/brick1/b1    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
rhsqe-vm02.lab.eng.blr.redhat.com    master        /rhs/brick2/b2    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
[root@rhsqe-vm01 ~]# date
Wed Jun 10 19:51:08 IST 2015
[root@rhsqe-vm01 ~]# date
Wed Jun 10 19:51:34 IST 2015
[root@rhsqe-vm01 ~]# gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave status
 
MASTER NODE                          MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                                       SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED          
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
rhsqe-vm01.lab.eng.blr.redhat.com    master        /rhs/brick1/b1    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
rhsqe-vm01.lab.eng.blr.redhat.com    master        /rhs/brick2/b2    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
rhsqe-vm02.lab.eng.blr.redhat.com    master        /rhs/brick1/b1    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
rhsqe-vm02.lab.eng.blr.redhat.com    master        /rhs/brick2/b2    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
[root@rhsqe-vm01 ~]#
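Since the denials below point at SELinux, a quick way to confirm that enforcement is the trigger is to check the current mode and, on a test system only, retry the start in permissive mode (a hedged troubleshooting sketch using standard SELinux tooling, not a recommendation to leave enforcement off):

# Current SELinux mode on the master nodes
getenforce

# Test systems only: switch to permissive, retry the geo-rep start,
# then re-enable enforcing mode. If the workers leave the "Created"
# state in permissive mode, the failure is policy related.
setenforce 0
gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave start force
setenforce 1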

audit.log shows many denials during this operation:
===================================================

type=LOGIN msg=audit(1433946301.575:546): pid=19747 uid=0 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 old-auid=4294967295 auid=0 old-ses=4294967295 ses=10 res=1
type=USER_START msg=audit(1433946301.616:547): pid=19747 uid=0 auid=0 ses=10 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_loginuid,pam_keyinit,pam_limits,pam_systemd acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_REFR msg=audit(1433946301.617:548): pid=19747 uid=0 auid=0 ses=10 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_DISP msg=audit(1433946301.641:549): pid=19747 uid=0 auid=0 ses=10 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=USER_END msg=audit(1433946301.647:550): pid=19747 uid=0 auid=0 ses=10 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_loginuid,pam_keyinit,pam_limits,pam_systemd acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=AVC msg=audit(1433946302.683:551): avc:  denied  { getattr } for  pid=13365 comm="glusterd" path="/dev/random" dev="devtmpfs" ino=1032 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:random_device_t:s0 tclass=chr_file
type=SYSCALL msg=audit(1433946302.683:551): arch=c000003e syscall=4 success=no exit=-13 a0=7f7bdad82ba4 a1=7f7bb8412120 a2=7f7bb8412120 a3=d items=0 ppid=1 pid=13365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1433946304.400:552): avc:  denied  { create } for  pid=19816 comm="python" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:glusterd_t:s0 tclass=rawip_socket
type=SYSCALL msg=audit(1433946304.400:552): arch=c000003e syscall=41 success=no exit=-13 a0=2 a1=3 a2=1 a3=0 items=0 ppid=1 pid=19816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="python" exe="/usr/bin/python2.7" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1433946309.068:553): avc:  denied  { getattr } for  pid=13364 comm="glusterd" path="/dev/random" dev="devtmpfs" ino=1032 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:random_device_t:s0 tclass=chr_file
type=SYSCALL msg=audit(1433946309.068:553): arch=c000003e syscall=4 success=no exit=-13 a0=7f7bdad82ba4 a1=7f7bb8210030 a2=7f7bb8210030 a3=a items=0 ppid=1 pid=13364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1433946319.259:554): avc:  denied  { getattr } for  pid=13365 comm="glusterd" path="/dev/random" dev="devtmpfs" ino=1032 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:random_device_t:s0 tclass=chr_file
type=SYSCALL msg=audit(1433946319.259:554): arch=c000003e syscall=4 success=no exit=-13 a0=7f7bdad82ba4 a1=7f7bb8210140 a2=7f7bb8210140 a3=a items=0 ppid=1 pid=13365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1433946319.272:555): avc:  denied  { getattr } for  pid=13364 comm="glusterd" path="/dev/random" dev="devtmpfs" ino=1032 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:random_device_t:s0 tclass=chr_file
type=SYSCALL msg=audit(1433946319.272:555): arch=c000003e syscall=4 success=no exit=-13 a0=7f7bdad82ba4 a1=7f7bb8210140 a2=7f7bb8210140 a3=a items=0 ppid=1 pid=13364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1433946319.294:556): avc:  denied  { getattr } for  pid=13365 comm="glusterd" path="/dev/random" dev="devtmpfs" ino=1032 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:random_device_t:s0 tclass=chr_file
type=SYSCALL msg=audit(1433946319.294:556): arch=c000003e syscall=4 success=no exit=-13 a0=7f7bdad82ba4 a1=7f7bb8210140 a2=7f7bb8210140 a3=a items=0 ppid=1 pid=13365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
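To pull just the relevant denials out of the log (rather than the surrounding cron/PAM noise), ausearch can filter by record type and command name, and audit2why adds a short explanation per denial (a hedged sketch using standard audit tooling):

# AVC denials generated by glusterd and by the gsyncd python helper
ausearch -m AVC -c glusterd
ausearch -m AVC -c python

# The same denials with an explanation of why each one was blocked
ausearch -m AVC -c glusterd | audit2why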


[root@rhsqe-vm01 ~]# cat /var/log/audit/audit.log |audit2allow 


#============= glusterd_t ==============
allow glusterd_t fsadm_exec_t:file execute;
allow glusterd_t glusterd_var_lib_t:file execute;

#!!!! This avc can be allowed using the boolean 'authlogin_nsswitch_use_ldap'
allow glusterd_t random_device_t:chr_file getattr;
allow glusterd_t self:rawip_socket create;
allow glusterd_t ssh_keygen_exec_t:file execute;
[root@rhsqe-vm01 ~]# 
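Until an updated selinux-policy build is available, the rules printed above can be packaged into a local policy module as an interim workaround (a hedged sketch; the module name glusterd_georep_local is arbitrary, and the generated rules should be reviewed before loading since they widen the glusterd_t domain):

# Build and load a local module from the recorded denials
cat /var/log/audit/audit.log | audit2allow -M glusterd_georep_local
semodule -i glusterd_georep_local.pp

# Remove it again once a fixed selinux-policy package is installed
semodule -r glusterd_georep_local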


[root@rhsqe-vm01 ~]# rpm -qa | grep gluster
glusterfs-client-xlators-3.7.1-1.el7rhgs.x86_64
glusterfs-cli-3.7.1-1.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-1.el7rhgs.x86_64
glusterfs-rdma-3.7.1-1.el7rhgs.x86_64
vdsm-gluster-4.16.16-1.3.el7rhgs.noarch
glusterfs-libs-3.7.1-1.el7rhgs.x86_64
glusterfs-fuse-3.7.1-1.el7rhgs.x86_64
glusterfs-server-3.7.1-1.el7rhgs.x86_64
glusterfs-api-3.7.1-1.el7rhgs.x86_64
glusterfs-3.7.1-1.el7rhgs.x86_64
glusterfs-debuginfo-3.7.1-1.el7rhgs.x86_64


[root@rhsqe-vm01 ~]# rpm -qa | grep selinux
selinux-policy-3.13.1-25.el7.noarch
libselinux-utils-2.2.2-6.el7.x86_64
libselinux-python-2.2.2-6.el7.x86_64
selinux-policy-targeted-3.13.1-25.el7.noarch
libselinux-2.2.2-6.el7.x86_64
[root@rhsqe-vm01 ~]# 
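For reference, the fix is reported against selinux-policy-3.13.1-29.el7 (see "Fixed In Version" above), while these machines run 3.13.1-25.el7; after updating, the installed build can be checked with:

rpm -q selinux-policy selinux-policy-targeted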

Steps to Reproduce:
====================
1. Create Master and Slave cluster
2. Create/Start Master and Slave volume
3. Create and Start meta volume
4. Create geo-rep session between master and slave volume
5. Start the geo-rep session (a hedged CLI sketch of these steps follows below).
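
A hedged sketch of the commands behind these steps, using the volume and host names from this report (push-pem and use_meta_volume are standard geo-rep options, but exact syntax may differ across releases):

# 4. Create the geo-rep session (push-pem distributes the ssh keys to the slave)
gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave create push-pem

# 3./4. Point the session at the shared meta volume
gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave config use_meta_volume true

# 5. Start the session and check its status
gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave start
gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave status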

Actual results:
===============

Start is successful, but the status shows only "Created".

Additional info:
================

Will provide sosreports and the audit.log file.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-06-10 10:11:21 EDT ---

This bug is automatically being proposed for Red Hat Gluster Storage 3.1.0 by setting the release flag 'rhgs-3.1.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Rahul Hinduja on 2015-06-10 10:25:50 EDT ---

Geo-rep logs show "Permission denied", as follows:

error: [Errno 13] Permission denied
[2015-06-10 19:47:19.11940] I [syncdutils(monitor):220:finalize] <top>: exiting.
[2015-06-10 19:50:32.499176] I [monitor(monitor):362:distribute] <top>: slave bricks: [{'host': 'rhsqe-vm03.lab.eng.blr.redhat.com', 'dir': '/rhs/brick1/b1'}, {'host': 'rhsqe-vm04.lab.eng.blr.redhat.com', 'dir': '/rhs/brick1/b1'}, {'host': 'rhsqe-vm03.lab.eng.blr.redhat.com', 'dir': '/rhs/brick2/b2'}, {'host': 'rhsqe-vm04.lab.eng.blr.redhat.com', 'dir': '/rhs/brick2/b2'}]
[2015-06-10 19:50:32.503000] E [syncdutils(monitor):276:log_raise_exception] <top>: FAIL: 
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 165, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 647, in main_i
    return monitor(*rscs)
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 395, in monitor
    return Monitor().multiplex(*distribute(*resources))
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 382, in distribute
    if is_host_local(brick['host'])]
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 402, in is_host_local
    s = socket.socket(ai[0], socket.SOCK_RAW, socket.IPPROTO_ICMP)
  File "/usr/lib64/python2.7/socket.py", line 187, in __init__
    _sock = _realsocket(family, type, proto)
error: [Errno 13] Permission denied
[2015-06-10 19:50:32.505733] I [syncdutils(monitor):220:finalize] <top>: exiting.
[2015-06-10 19:55:04.398649] I [monitor(monitor):362:distribute] <top>: slave bricks: [{'host': 'rhsqe-vm03.lab.eng.blr.redhat.com', 'dir': '/rhs/brick1/b1'}, {'host': 'rhsqe-vm04.lab.eng.blr.redhat.com', 'dir': '/rhs/brick1/b1'}, {'host': 'rhsqe-vm03.lab.eng.blr.redhat.com', 'dir': '/rhs/brick2/b2'}, {'host': 'rhsqe-vm04.lab.eng.blr.redhat.com', 'dir': '/rhs/brick2/b2'}]
[2015-06-10 19:55:04.401369] E [syncdutils(monitor):276:log_raise_exception] <top>: FAIL: 
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 165, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 647, in main_i
    return monitor(*rscs)
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 395, in monitor
    return Monitor().multiplex(*distribute(*resources))
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 382, in distribute
    if is_host_local(brick['host'])]
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 402, in is_host_local
    s = socket.socket(ai[0], socket.SOCK_RAW, socket.IPPROTO_ICMP)
  File "/usr/lib64/python2.7/socket.py", line 187, in __init__
    _sock = _realsocket(family, type, proto)
error: [Errno 13] Permission denied
[2015-06-10 19:55:04.403840] I [syncdutils(monitor):220:finalize] <top>: exiting.
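
The traceback shows gsyncd's is_host_local() failing while opening a raw ICMP socket, which matches the rawip_socket create denial captured earlier. The call can be reproduced outside gsyncd (a hedged sketch; an unconfined root shell normally succeeds here, the permission error only appears when the call runs in the glusterd_t domain under the stock policy):

# The socket call made by is_host_local(); denied with [Errno 13]
# when issued from glusterd_t with selinux-policy-3.13.1-25.el7
python -c 'import socket; socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)'

# Corresponding denial in the audit log
ausearch -m AVC -c python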

--- Additional comment from Rahul Hinduja on 2015-06-10 10:29:29 EDT ---

sosreports at: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1230269/

Master :

[root@rhsqe-vm01 ~]# gluster volume info
 
Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: bb49eb5d-a024-4fca-ba3b-ea14b36ac0bc
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rhsqe-vm01.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick2: rhsqe-vm02.lab.eng.blr.redhat.com:/rhs/brick3/b3
Options Reconfigured:
performance.readdir-ahead: on
 
Volume Name: master
Type: Distributed-Replicate
Volume ID: c649b696-8a07-44ca-a9a5-5a32eaf0d4a5
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhsqe-vm01.lab.eng.blr.redhat.com:/rhs/brick1/b1
Brick2: rhsqe-vm02.lab.eng.blr.redhat.com:/rhs/brick1/b1
Brick3: rhsqe-vm01.lab.eng.blr.redhat.com:/rhs/brick2/b2
Brick4: rhsqe-vm02.lab.eng.blr.redhat.com:/rhs/brick2/b2
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
[root@rhsqe-vm01 ~]# 


Slave:
======
[root@rhsqe-vm03 ~]# gluster volume info
 
Volume Name: slave
Type: Distributed-Replicate
Volume ID: 65efffa0-1750-441c-86e4-7136ad13b015
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhsqe-vm03.lab.eng.blr.redhat.com:/rhs/brick1/b1
Brick2: rhsqe-vm04.lab.eng.blr.redhat.com:/rhs/brick1/b1
Brick3: rhsqe-vm03.lab.eng.blr.redhat.com:/rhs/brick2/b2
Brick4: rhsqe-vm04.lab.eng.blr.redhat.com:/rhs/brick2/b2
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqe-vm03 ~]#

--- Additional comment from Rahul Hinduja on 2015-06-10 10:31:34 EDT ---

There are numerous entries of AVC and they can be found under /var/log/audit/audit.log of each sosreport

Comment 2 Milos Malik 2015-06-11 10:33:11 UTC
Based on the AVCs, there is a Python script that manipulates the network. Where does it come from?

Comment 18 Miroslav Grepl 2015-06-17 12:28:18 UTC
commit 89b81a5cff772c193b50e5fea8a209aad83b0e76
Author: Miroslav Grepl <mgrepl>
Date:   Wed Jun 17 11:19:25 2015 +0200

    We allow can_exec() on ssh-keygen for gluster. But there is a transition defined by init_initrc_domain() because we need to allow glusterd to execute unconfined services. So ssh-keygen ends up running as ssh_keygen_t, and we need to allow it to manage /var/lib/glusterd/geo-replication/secret.pem.
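
After updating to the fixed policy build, the rules this implies can be checked with sesearch from setools-console (a hedged sketch; the exact types and rules shipped in selinux-policy-3.13.1-29.el7 may differ):

# glusterd may execute ssh-keygen ...
sesearch -A -s glusterd_t -t ssh_keygen_exec_t -c file -p execute

# ... and the resulting ssh_keygen_t domain may manage files under
# /var/lib/glusterd (e.g. geo-replication/secret.pem)
sesearch -A -s ssh_keygen_t -t glusterd_var_lib_t -c file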

Comment 24 errata-xmlrpc 2015-11-19 10:36:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2300.html