Bug 1230369 - [SELinux]: [geo-rep]: SELinux policy updates required in RHEL-7.1 for geo-rep
Summary: [SELinux]: [geo-rep]: SELinux policy updates required in RHEL-7.1 for geo-rep
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: selinux-policy
Version: 7.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Miroslav Grepl
QA Contact: Milos Malik
URL:
Whiteboard:
Depends On:
Blocks: 1212796 1223636 1230269 1232755
 
Reported: 2015-06-10 17:50 UTC by Prasanth
Modified: 2015-11-19 10:36 UTC
CC List: 15 users

Fixed In Version: selinux-policy-3.13.1-29.el7
Doc Type: Bug Fix
Doc Text:
Previously, initializing a geo-replication session between a master cluster and a slave cluster failed when SELinux was in enforcing mode. This update modifies how the ssh-keygen service handles the /var/lib/glusterd/geo-replication/secret.pem file, and the geo-replication session is now initialized successfully.
Clone Of: 1230269
Cloned to: 1232755
Environment:
Last Closed: 2015-11-19 10:36:27 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links:
  System:       Red Hat Product Errata
  ID:           RHBA-2015:2300
  Private:      0
  Priority:     normal
  Status:       SHIPPED_LIVE
  Summary:      selinux-policy bug fix update
  Last Updated: 2015-11-19 09:55:26 UTC

Description Prasanth 2015-06-10 17:50:13 UTC
+++ This bug was initially created as a clone of Bug #1230269 +++

Description of problem:
=======================

On RHEL 7.1, the CLI to start the geo-rep session succeeds, but the status always shows "Created", as follows:

[root@rhsqe-vm01 ~]# gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave start
Starting geo-replication session between master & rhsqe-vm03.lab.eng.blr.redhat.com::slave has been successful
[root@rhsqe-vm01 ~]# gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave status
 
MASTER NODE                          MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                                       SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED          
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
rhsqe-vm01.lab.eng.blr.redhat.com    master        /rhs/brick1/b1    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
rhsqe-vm01.lab.eng.blr.redhat.com    master        /rhs/brick2/b2    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
rhsqe-vm02.lab.eng.blr.redhat.com    master        /rhs/brick1/b1    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
rhsqe-vm02.lab.eng.blr.redhat.com    master        /rhs/brick2/b2    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
[root@rhsqe-vm01 ~]# date
Wed Jun 10 19:51:08 IST 2015
[root@rhsqe-vm01 ~]# date
Wed Jun 10 19:51:34 IST 2015
[root@rhsqe-vm01 ~]# gluster volume geo-replication master rhsqe-vm03.lab.eng.blr.redhat.com::slave status
 
MASTER NODE                          MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                                       SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED          
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
rhsqe-vm01.lab.eng.blr.redhat.com    master        /rhs/brick1/b1    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
rhsqe-vm01.lab.eng.blr.redhat.com    master        /rhs/brick2/b2    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
rhsqe-vm02.lab.eng.blr.redhat.com    master        /rhs/brick1/b1    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
rhsqe-vm02.lab.eng.blr.redhat.com    master        /rhs/brick2/b2    root          rhsqe-vm03.lab.eng.blr.redhat.com::slave    N/A           Created    N/A             N/A                  
[root@rhsqe-vm01 ~]#

audit.log shows many denials during this operation:
=========================================================

am_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=LOGIN msg=audit(1433946301.575:546): pid=19747 uid=0 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 old-auid=4294967295 auid=0 old-ses=4294967295 ses=10 res=1
type=USER_START msg=audit(1433946301.616:547): pid=19747 uid=0 auid=0 ses=10 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_loginuid,pam_keyinit,pam_limits,pam_systemd acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_REFR msg=audit(1433946301.617:548): pid=19747 uid=0 auid=0 ses=10 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_DISP msg=audit(1433946301.641:549): pid=19747 uid=0 auid=0 ses=10 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=USER_END msg=audit(1433946301.647:550): pid=19747 uid=0 auid=0 ses=10 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_loginuid,pam_keyinit,pam_limits,pam_systemd acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=AVC msg=audit(1433946302.683:551): avc:  denied  { getattr } for  pid=13365 comm="glusterd" path="/dev/random" dev="devtmpfs" ino=1032 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:random_device_t:s0 tclass=chr_file
type=SYSCALL msg=audit(1433946302.683:551): arch=c000003e syscall=4 success=no exit=-13 a0=7f7bdad82ba4 a1=7f7bb8412120 a2=7f7bb8412120 a3=d items=0 ppid=1 pid=13365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1433946304.400:552): avc:  denied  { create } for  pid=19816 comm="python" scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:system_r:glusterd_t:s0 tclass=rawip_socket
type=SYSCALL msg=audit(1433946304.400:552): arch=c000003e syscall=41 success=no exit=-13 a0=2 a1=3 a2=1 a3=0 items=0 ppid=1 pid=19816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="python" exe="/usr/bin/python2.7" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1433946309.068:553): avc:  denied  { getattr } for  pid=13364 comm="glusterd" path="/dev/random" dev="devtmpfs" ino=1032 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:random_device_t:s0 tclass=chr_file
type=SYSCALL msg=audit(1433946309.068:553): arch=c000003e syscall=4 success=no exit=-13 a0=7f7bdad82ba4 a1=7f7bb8210030 a2=7f7bb8210030 a3=a items=0 ppid=1 pid=13364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1433946319.259:554): avc:  denied  { getattr } for  pid=13365 comm="glusterd" path="/dev/random" dev="devtmpfs" ino=1032 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:random_device_t:s0 tclass=chr_file
type=SYSCALL msg=audit(1433946319.259:554): arch=c000003e syscall=4 success=no exit=-13 a0=7f7bdad82ba4 a1=7f7bb8210140 a2=7f7bb8210140 a3=a items=0 ppid=1 pid=13365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1433946319.272:555): avc:  denied  { getattr } for  pid=13364 comm="glusterd" path="/dev/random" dev="devtmpfs" ino=1032 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:random_device_t:s0 tclass=chr_file
type=SYSCALL msg=audit(1433946319.272:555): arch=c000003e syscall=4 success=no exit=-13 a0=7f7bdad82ba4 a1=7f7bb8210140 a2=7f7bb8210140 a3=a items=0 ppid=1 pid=13364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1433946319.294:556): avc:  denied  { getattr } for  pid=13365 comm="glusterd" path="/dev/random" dev="devtmpfs" ino=1032 scontext=system_u:system_r:glusterd_t:s0 tcontext=system_u:object_r:random_device_t:s0 tclass=chr_file
type=SYSCALL msg=audit(1433946319.294:556): arch=c000003e syscall=4 success=no exit=-13 a0=7f7bdad82ba4 a1=7f7bb8210140 a2=7f7bb8210140 a3=a items=0 ppid=1 pid=13365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=system_u:system_r:glusterd_t:s0 key=(null)


[root@rhsqe-vm01 ~]# cat /var/log/audit/audit.log |audit2allow 


#============= glusterd_t ==============
allow glusterd_t fsadm_exec_t:file execute;
allow glusterd_t glusterd_var_lib_t:file execute;

#!!!! This avc can be allowed using the boolean 'authlogin_nsswitch_use_ldap'
allow glusterd_t random_device_t:chr_file getattr;
allow glusterd_t self:rawip_socket create;
allow glusterd_t ssh_keygen_exec_t:file execute;
[root@rhsqe-vm01 ~]# 
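
As a temporary local workaround (not the fix that eventually shipped in selinux-policy-3.13.1-29.el7), the recorded denials could be packaged into a local policy module with audit2allow. A minimal sketch, assuming the denials above are still present in audit.log; the module name "glusterd_georep_local" is a placeholder:

# generate and load a local policy module from the logged AVCs (workaround only)
cat /var/log/audit/audit.log | audit2allow -M glusterd_georep_local
semodule -i glusterd_georep_local.pp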


[root@rhsqe-vm01 ~]# rpm -qa | grep gluster
glusterfs-client-xlators-3.7.1-1.el7rhgs.x86_64
glusterfs-cli-3.7.1-1.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-1.el7rhgs.x86_64
glusterfs-rdma-3.7.1-1.el7rhgs.x86_64
vdsm-gluster-4.16.16-1.3.el7rhgs.noarch
glusterfs-libs-3.7.1-1.el7rhgs.x86_64
glusterfs-fuse-3.7.1-1.el7rhgs.x86_64
glusterfs-server-3.7.1-1.el7rhgs.x86_64
glusterfs-api-3.7.1-1.el7rhgs.x86_64
glusterfs-3.7.1-1.el7rhgs.x86_64
glusterfs-debuginfo-3.7.1-1.el7rhgs.x86_64


[root@rhsqe-vm01 ~]# rpm -qa | grep selinux
selinux-policy-3.13.1-25.el7.noarch
libselinux-utils-2.2.2-6.el7.x86_64
libselinux-python-2.2.2-6.el7.x86_64
selinux-policy-targeted-3.13.1-25.el7.noarch
libselinux-2.2.2-6.el7.x86_64
[root@rhsqe-vm01 ~]# 

Steps to Reproduce:
====================
1. Create Master and Slave cluster
2. Create/Start Master and Slave volume
3. Create and Start meta volume
4. Create geo-rep session between master and slave volume
5. Start the geo-rep session.
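
A rough sketch of the gluster CLI behind these steps (host, brick, and volume names are placeholders; exact syntax may differ between RHGS releases):

# steps 1-2: build each cluster and create/start the volumes
gluster peer probe <peer-host>
gluster volume create master replica 2 <host1>:/rhs/brick1/b1 <host2>:/rhs/brick1/b1
gluster volume start master
# step 3: shared meta volume used by geo-rep
gluster volume set all cluster.enable-shared-storage enable
# steps 4-5: create and start the geo-rep session
gluster volume geo-replication master <slave-host>::slave create push-pem
gluster volume geo-replication master <slave-host>::slave config use_meta_volume true
gluster volume geo-replication master <slave-host>::slave start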

Actual results:
===============

Start is successful, but the status only ever shows "Created"

Additional info:
================

Will update sosreports and audit.log file

--- Additional comment from Red Hat Bugzilla Rules Engine on 2015-06-10 10:11:21 EDT ---

This bug is automatically being proposed for Red Hat Gluster Storage 3.1.0 by setting the release flag 'rhgs-3.1.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Rahul Hinduja on 2015-06-10 10:25:50 EDT ---

The geo-rep logs show "Permission denied" as follows:

error: [Errno 13] Permission denied
[2015-06-10 19:47:19.11940] I [syncdutils(monitor):220:finalize] <top>: exiting.
[2015-06-10 19:50:32.499176] I [monitor(monitor):362:distribute] <top>: slave bricks: [{'host': 'rhsqe-vm03.lab.eng.blr.redhat.com', 'dir': '/rhs/brick1/b1'}, {'host': 'rhsqe-vm04.lab.eng.blr.redhat.com', 'dir': '/rhs/brick1/b1'}, {'host': 'rhsqe-vm03.lab.eng.blr.redhat.com', 'dir': '/rhs/brick2/b2'}, {'host': 'rhsqe-vm04.lab.eng.blr.redhat.com', 'dir': '/rhs/brick2/b2'}]
[2015-06-10 19:50:32.503000] E [syncdutils(monitor):276:log_raise_exception] <top>: FAIL: 
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 165, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 647, in main_i
    return monitor(*rscs)
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 395, in monitor
    return Monitor().multiplex(*distribute(*resources))
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 382, in distribute
    if is_host_local(brick['host'])]
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 402, in is_host_local
    s = socket.socket(ai[0], socket.SOCK_RAW, socket.IPPROTO_ICMP)
  File "/usr/lib64/python2.7/socket.py", line 187, in __init__
    _sock = _realsocket(family, type, proto)
error: [Errno 13] Permission denied
[2015-06-10 19:50:32.505733] I [syncdutils(monitor):220:finalize] <top>: exiting.
[2015-06-10 19:55:04.398649] I [monitor(monitor):362:distribute] <top>: slave bricks: [{'host': 'rhsqe-vm03.lab.eng.blr.redhat.com', 'dir': '/rhs/brick1/b1'}, {'host': 'rhsqe-vm04.lab.eng.blr.redhat.com', 'dir': '/rhs/brick1/b1'}, {'host': 'rhsqe-vm03.lab.eng.blr.redhat.com', 'dir': '/rhs/brick2/b2'}, {'host': 'rhsqe-vm04.lab.eng.blr.redhat.com', 'dir': '/rhs/brick2/b2'}]
[2015-06-10 19:55:04.401369] E [syncdutils(monitor):276:log_raise_exception] <top>: FAIL: 
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 165, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 647, in main_i
    return monitor(*rscs)
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 395, in monitor
    return Monitor().multiplex(*distribute(*resources))
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 382, in distribute
    if is_host_local(brick['host'])]
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 402, in is_host_local
    s = socket.socket(ai[0], socket.SOCK_RAW, socket.IPPROTO_ICMP)
  File "/usr/lib64/python2.7/socket.py", line 187, in __init__
    _sock = _realsocket(family, type, proto)
error: [Errno 13] Permission denied
[2015-06-10 19:55:04.403840] I [syncdutils(monitor):220:finalize] <top>: exiting.
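
For reference, the failing call in the traceback reduces to a one-line reproducer (a sketch; ai[0] is normally AF_INET, and under the confined glusterd_t domain this is the rawip_socket create that SELinux denies with EACCES):

# the same raw ICMP socket that syncdutils.is_host_local() opens
python -c 'import socket; socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)'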

--- Additional comment from Rahul Hinduja on 2015-06-10 10:29:29 EDT ---

sosreports at: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1230269/

Master :

[root@rhsqe-vm01 ~]# gluster volume info
 
Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: bb49eb5d-a024-4fca-ba3b-ea14b36ac0bc
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rhsqe-vm01.lab.eng.blr.redhat.com:/rhs/brick3/b3
Brick2: rhsqe-vm02.lab.eng.blr.redhat.com:/rhs/brick3/b3
Options Reconfigured:
performance.readdir-ahead: on
 
Volume Name: master
Type: Distributed-Replicate
Volume ID: c649b696-8a07-44ca-a9a5-5a32eaf0d4a5
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhsqe-vm01.lab.eng.blr.redhat.com:/rhs/brick1/b1
Brick2: rhsqe-vm02.lab.eng.blr.redhat.com:/rhs/brick1/b1
Brick3: rhsqe-vm01.lab.eng.blr.redhat.com:/rhs/brick2/b2
Brick4: rhsqe-vm02.lab.eng.blr.redhat.com:/rhs/brick2/b2
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
performance.readdir-ahead: on
[root@rhsqe-vm01 ~]# 


Slave:
======
[root@rhsqe-vm03 ~]# gluster volume info
 
Volume Name: slave
Type: Distributed-Replicate
Volume ID: 65efffa0-1750-441c-86e4-7136ad13b015
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhsqe-vm03.lab.eng.blr.redhat.com:/rhs/brick1/b1
Brick2: rhsqe-vm04.lab.eng.blr.redhat.com:/rhs/brick1/b1
Brick3: rhsqe-vm03.lab.eng.blr.redhat.com:/rhs/brick2/b2
Brick4: rhsqe-vm04.lab.eng.blr.redhat.com:/rhs/brick2/b2
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqe-vm03 ~]#

--- Additional comment from Rahul Hinduja on 2015-06-10 10:31:34 EDT ---

There are numerous AVC entries; they can be found in /var/log/audit/audit.log of each sosreport.

Comment 2 Milos Malik 2015-06-11 10:33:11 UTC
Based on the AVCs, there is a Python script that manipulates the network. Where does it come from?
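
One way to answer that from the audit records themselves (a sketch; the SYSCALL record accompanying each AVC carries the exe= and ppid= fields):

# interpreted AVC events for comm="python"; exe=/usr/bin/python2.7 with ppid=1
# points to a daemon-spawned interpreter, i.e. the gsyncd monitor in the traceback above
ausearch -m AVC -c python -i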

Comment 18 Miroslav Grepl 2015-06-17 12:28:18 UTC
commit 89b81a5cff772c193b50e5fea8a209aad83b0e76
Author: Miroslav Grepl <mgrepl>
Date:   Wed Jun 17 11:19:25 2015 +0200

    We allow can_exec() on ssh-keygen from glusterd. However, there is a transition defined via init_initrc_domain(), because glusterd needs to be allowed to execute unconfined services. As a result ssh-keygen ends up running as ssh_keygen_t, and that domain needs to be allowed to manage /var/lib/glusterd/geo-replication/secret.pem.
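
Once the fixed policy (selinux-policy-3.13.1-29.el7) is installed, the new rules can be inspected with sesearch from setools; a sketch, and the exact type names in the shipped policy may differ:

sesearch -A -s glusterd_t -t ssh_keygen_exec_t -c file     # execute ssh-keygen
sesearch -T -s glusterd_t -t ssh_keygen_exec_t             # domain transition to ssh_keygen_t
sesearch -A -s ssh_keygen_t -t glusterd_var_lib_t -c file  # manage secret.pem under /var/lib/glusterd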

Comment 24 errata-xmlrpc 2015-11-19 10:36:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2300.html

