Bug 1214258 - [SELinux] [glusterfsd] SELinux is preventing /usr/sbin/glusterfsd from unlink access on the sock_file /var/run/glusterd.socket
Summary: [SELinux] [glusterfsd] SELinux is preventing /usr/sbin/glusterfsd from unlink...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: Alpha
Target Release: RHGS 3.1.0
Assignee: Anand Nekkunti
QA Contact: Prasanth
URL:
Whiteboard:
Depends On: 1224639
Blocks: 1202842 1212796
 
Reported: 2015-04-22 10:08 UTC by Prasanth
Modified: 2015-07-29 04:41 UTC
CC List: 20 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-07-29 04:41:43 UTC
Embargoed:




Links:
Red Hat Product Errata RHSA-2015:1495 (SHIPPED_LIVE): Important: Red Hat Gluster Storage 3.1 update - last updated 2015-07-29 08:26:26 UTC

Description Prasanth 2015-04-22 10:08:43 UTC
Description of problem:

SELinux is preventing /usr/sbin/glusterfsd from unlink access on the sock_file /var/run/glusterd.socket

See AVC messages from /var/log/audit/audit.log below:

######
type=AVC msg=audit(1429685833.450:38): avc:  denied  { unlink } for  pid=2075 comm="glusterd" name="glusterd.socket" dev=dm-0 ino=657155 scontext=unconfined_u:system_r:glusterd_t:s0 tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
type=SYSCALL msg=audit(1429685833.450:38): arch=c000003e syscall=87 success=yes exit=0 a0=7fffce1f34e2 a1=7fffce1f34e0 a2=6f a3=7faa065f3753 items=0 ppid=2074 pid=2075 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=unconfined_u:system_r:glusterd_t:s0 key=(null)
######
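
For reference, denials like the one above can be pulled back out of the audit log with the standard audit tools; a minimal sketch, assuming the audit and policycoreutils-python packages are installed (the first command lists recent AVC denials for the glusterd process, the second shows the access the denial maps to, for review only):

#####
# ausearch -m AVC -c glusterd
# ausearch -m AVC -c glusterd | audit2allow
#####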


Version-Release number of selected component (if applicable):
#####
glusterfs-fuse-3.7dev-0.1009.git8b987be.el6.x86_64
glusterfs-cli-3.7dev-0.1009.git8b987be.el6.x86_64
glusterfs-3.7dev-0.1009.git8b987be.el6.x86_64
glusterfs-server-3.7dev-0.1009.git8b987be.el6.x86_64
glusterfs-libs-3.7dev-0.1009.git8b987be.el6.x86_64
glusterfs-api-3.7dev-0.1009.git8b987be.el6.x86_64
samba-vfs-glusterfs-4.1.17-4.el6rhs.x86_64
#####

How reproducible: Always


Steps to Reproduce:
1. Install the RHEL6 glusterfs 3.7 nightly builds from http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs-3.7/epel-6-x86_64/
2. Check for AVC denials in /var/log/audit/audit.log

Actual results: The above-mentioned AVC is seen in the logs.


Expected results: No AVC denials should be seen. If glusterfsd legitimately requires unlink access on the glusterd.socket sock_file, please consider allowing it by default.

Comment 1 Milos Malik 2015-05-12 12:59:08 UTC
The socket is mislabeled:

# restorecon -Rv /var/run/glusterd.socket

Comment 3 Milos Malik 2015-05-21 10:50:37 UTC
The correct label for the socket is:

# matchpathcon /var/run/glusterd.socket 
/var/run/glusterd.socket	system_u:object_r:glusterd_var_run_t:s0
#

and an allow rule for the operation is present:

# sesearch -s glusterd_t -t glusterd_var_run_t -c sock_file -p unlink -A -C
Found 1 semantic av rules:
   allow glusterd_t glusterd_var_run_t : sock_file { ioctl read write create getattr setattr lock append unlink link rename open } ; 

#
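
On an affected host, the mismatch between the actual and the expected label can be confirmed directly (a quick sketch, using the path from this bug; matchpathcon -V prints the current and expected contexts when they differ, so a stale var_run_t label shows up immediately):

# ls -Z /var/run/glusterd.socket
# matchpathcon -V /var/run/glusterd.socket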

Comment 5 Prasanth 2015-05-22 06:59:49 UTC
These AVCs are generated only when 'glusterd' is started manually after the rpm installation. Based on my testing, my understanding is that if a proper clean-up were done after the rpm installation, the '/var/run/glusterd.socket' file would not exist on the system, and these AVCs would not appear when glusterd is started manually.

The 'glusterd.socket' file is first created by an rpm scriptlet, as part of the start-and-stop operation the post-upgrade script performs to regenerate the configuration files. During that process it gets the wrong label, "var_run_t", because RHEL 6 has no filename transition rules. As a result, the write access [1] and unlink access [2] that 'glusterd' needs on the sock_file '/var/run/glusterd.socket' when it is started manually are denied by SELinux, which is what we see in the AVCs. Note, however, that when 'glusterd' is started with '# service glusterd start' or '# /etc/init.d/glusterd start', the socket regains the correct label, "glusterd_var_run_t".

So the fix posted in [3] does a 'restorecon' on the leftover 'glusterd.socket' file to avoid these AVCs. If instead we did a proper clean-up after the rpm installation, the file that leads to this situation would not exist at all (a rough sketch of such a clean-up follows the reference list below). Please go through my comment and check whether my understanding is correct. Meanwhile, I'll open a new BZ for cleaning up the leftover socket file!

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1214253

[2] https://bugzilla.redhat.com/show_bug.cgi?id=1214258

[3] https://bugzilla.redhat.com/show_bug.cgi?id=1210404
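
For illustration only (this is not the actual patch from [3]), a clean-up along the lines suggested above could be run at the end of the post-upgrade scriptlet, after the temporary start/stop of glusterd; the placement and exact commands here are assumptions:

#####
# hypothetical scriptlet tail: remove the socket left behind by the temporary
# start/stop so the next start creates it fresh with the glusterd_var_run_t label
if [ -S /var/run/glusterd.socket ]; then
    rm -f /var/run/glusterd.socket
fi

# alternative taken by the posted fix: relabel the leftover socket in place
# restorecon -v /var/run/glusterd.socket
#####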


Additional Info:
#####
[root@dhcp42-246 run]# rpm -qa |grep gluster
glusterfs-fuse-3.7.0-2.el6rhs.x86_64
glusterfs-libs-3.7.0-2.el6rhs.x86_64
glusterfs-client-xlators-3.7.0-2.el6rhs.x86_64
glusterfs-api-3.7.0-2.el6rhs.x86_64
glusterfs-3.7.0-2.el6rhs.x86_64
glusterfs-cli-3.7.0-2.el6rhs.x86_64
glusterfs-server-3.7.0-2.el6rhs.x86_64

[root@dhcp42-246 run]# /etc/init.d/glusterd status
glusterd is stopped

[root@dhcp42-246 run]# ls -lZ glusterd.socket 
srwxr-xr-x. root root unconfined_u:object_r:var_run_t:s0 glusterd.socket

[root@dhcp42-246 run]# /etc/init.d/glusterd start
Starting glusterd:                                         [  OK  ]

[root@dhcp42-246 run]# ls -lZ glusterd.socket 
srwxr-xr-x. root root unconfined_u:object_r:glusterd_var_run_t:s0 glusterd.socket

[root@dhcp42-246 run]# /etc/init.d/glusterd status
glusterd (pid  5278) is running...
#####

-Prasanth

Comment 7 Atin Mukherjee 2015-06-02 04:17:15 UTC
Anand/Prasanth,

Can we mark this bug as a duplicate of bug 1223185?

Comment 8 Prasanth 2015-06-12 06:57:32 UTC
Anand,

If you are sure that this BZ is also fixed by the patch provided in Bug 1224639, please move it to ON_QA with the Fixed In Version (FIV) for further QE verification.

Thanks,
Prasanth

Comment 9 Anand Nekkunti 2015-06-12 07:10:09 UTC
Yes, this patch https://code.engineering.redhat.com/gerrit/#/c/49604/ fixes this bug. The fix will be available in the next build.

Comment 11 Prasanth 2015-06-17 05:43:54 UTC
Verified as fixed in glusterfs-3.7.1-3.el6rhs.x86_64

Comment 12 errata-xmlrpc 2015-07-29 04:41:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

