Bug 1214253 - [SELinux] [glusterfsd] SELinux is preventing /usr/sbin/glusterfsd from write access on the sock_file /var/run/glusterd.socket
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Anand Nekkunti
QA Contact: Prasanth
URL:
Whiteboard:
Depends On: 1224639
Blocks: 1202842 1212796
 
Reported: 2015-04-22 10:02 UTC by Prasanth
Modified: 2015-07-29 04:41 UTC
CC: 19 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-07-29 04:41:38 UTC
Embargoed:


Links:
Red Hat Product Errata RHSA-2015:1495 (SHIPPED_LIVE, normal): Important: Red Hat Gluster Storage 3.1 update (last updated 2015-07-29 08:26:26 UTC)

Description Prasanth 2015-04-22 10:02:36 UTC
Description of problem:

SELinux is preventing /usr/sbin/glusterfsd from write access on the sock_file /var/run/glusterd.socket

See AVC messages from /var/log/audit/audit.log below:

######
type=AVC msg=audit(1429685833.449:37): avc:  denied  { write } for  pid=2075 comm="glusterd" name="glusterd.socket" dev=dm-0 ino=657155 scontext=unconfined_u:system_r:glusterd_t:s0 tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
type=SYSCALL msg=audit(1429685833.449:37): arch=c000003e syscall=42 success=no exit=-111 a0=c a1=7fffce1f34e0 a2=6e a3=7faa065f3753 items=0 ppid=2074 pid=2075 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=2 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=unconfined_u:system_r:glusterd_t:s0 key=(null)
######
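
For reference, denials of this kind can be pulled out of the audit log and decoded with the standard audit tools, assuming the audit and policycoreutils-python packages are installed:

#####
# ausearch -m avc -c glusterd
# ausearch -m avc -c glusterd | audit2why
#####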

Version-Release number of selected component (if applicable):
#####
glusterfs-fuse-3.7dev-0.1009.git8b987be.el6.x86_64
glusterfs-cli-3.7dev-0.1009.git8b987be.el6.x86_64
glusterfs-3.7dev-0.1009.git8b987be.el6.x86_64
glusterfs-server-3.7dev-0.1009.git8b987be.el6.x86_64
glusterfs-libs-3.7dev-0.1009.git8b987be.el6.x86_64
glusterfs-api-3.7dev-0.1009.git8b987be.el6.x86_64
samba-vfs-glusterfs-4.1.17-4.el6rhs.x86_64
#####

How reproducible: Always


Steps to Reproduce:
1. Install the RHEL6 glusterfs 3.7 nightly builds from http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs-3.7/epel-6-x86_64/
2. Check for AVCs in /var/log/audit/audit.log

Actual results: The above-mentioned AVC is seen in the logs.


Expected results: No AVC denials should be seen. If glusterfsd should be allowed write access on the glusterd.socket sock_file by default, the SELinux policy should be fixed to permit it.

Comment 1 Milos Malik 2015-05-12 12:58:28 UTC
The socket is mislabeled:

# restorecon -Rv /var/run/glusterd.socket
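
One way to confirm the mislabel before relabeling is to compare the context the socket actually carries with the one the policy expects for that path (matchpathcon is part of libselinux-utils):

#####
# ls -lZ /var/run/glusterd.socket
# matchpathcon /var/run/glusterd.socket
#####

If the two contexts disagree, restorecon resets the file to the policy's context.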

Comment 2 Miroslav Grepl 2015-05-13 12:15:44 UTC
We don't have a good way to fix it in RHEL6. In RHEL7 we define filename transition rules to make it work.

We need to run restorecon in gluster.
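
To make that concrete, here is a minimal sketch of the kind of hook meant, assuming the stock /etc/init.d/glusterd layout (illustrative only, not the actual patch):

#####
# Hypothetical addition to the start() path of /etc/init.d/glusterd:
# relabel a leftover socket before glusterd tries to bind it again.
if [ -S /var/run/glusterd.socket ]; then
    /sbin/restorecon /var/run/glusterd.socket
fi
#####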

Comment 3 Prasanth 2015-05-22 06:59:47 UTC
These AVCs are generated only when we manually start 'glusterd' after the rpm installation. Based on my testing, my understanding is that if we did a proper clean-up after the rpm installation, the '/var/run/glusterd.socket' file wouldn't exist on the system, and these AVCs wouldn't be thrown when glusterd is started manually.

The 'glusterd.socket' file is first created by an rpm scriptlet, as part of a start-and-stop operation done in the post-upgrade script to re-generate the configuration files. During that process it gets the wrong label, "var_run_t", because we don't have filename transition rules in RHEL-6. So the write access [1] and unlink access [2] required on the sock_file '/var/run/glusterd.socket' while manually starting 'glusterd' are denied by SELinux, which is what we see in the AVCs. Note, however, that on starting 'glusterd' with '#service glusterd start' or '#/etc/init.d/glusterd start', the socket regains the right label, "glusterd_var_run_t".

So the fix you posted in [3] does a 'restorecon' on the leftover 'glusterd.socket' file to avoid these AVCs. But if we instead did a proper clean-up after the rpm installation, the file that leads to this situation wouldn't exist at all. Please go through my comment and see whether my understanding is correct. Meanwhile, I'll open a new BZ for cleaning up the leftover socket file! (A sketch of such a clean-up follows the references below.)

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1214253

[2] https://bugzilla.redhat.com/show_bug.cgi?id=1214258

[3] https://bugzilla.redhat.com/show_bug.cgi?id=1210404
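
For illustration only, the clean-up proposed above could be as small as removing the stale socket once the scriptlet's stop/start cycle is done. The snippet below is a hypothetical sketch, not the fix that was merged:

#####
# Hypothetical tail of the post-upgrade scriptlet: the configuration
# re-generation leaves /var/run/glusterd.socket behind with the wrong
# label, so remove it after glusterd has been stopped.
[ -S /var/run/glusterd.socket ] && rm -f /var/run/glusterd.socket
#####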


Additional Info:
#####
[root@dhcp42-246 run]# rpm -qa |grep gluster
glusterfs-fuse-3.7.0-2.el6rhs.x86_64
glusterfs-libs-3.7.0-2.el6rhs.x86_64
glusterfs-client-xlators-3.7.0-2.el6rhs.x86_64
glusterfs-api-3.7.0-2.el6rhs.x86_64
glusterfs-3.7.0-2.el6rhs.x86_64
glusterfs-cli-3.7.0-2.el6rhs.x86_64
glusterfs-server-3.7.0-2.el6rhs.x86_64

[root@dhcp42-246 run]# /etc/init.d/glusterd status
glusterd is stopped

[root@dhcp42-246 run]# ls -lZ glusterd.socket 
srwxr-xr-x. root root unconfined_u:object_r:var_run_t:s0 glusterd.socket

[root@dhcp42-246 run]# /etc/init.d/glusterd start
Starting glusterd:                                         [  OK  ]

[root@dhcp42-246 run]# ls -lZ glusterd.socket 
srwxr-xr-x. root root unconfined_u:object_r:glusterd_var_run_t:s0 glusterd.socket

[root@dhcp42-246 run]# /etc/init.d/glusterd status
glusterd (pid  5278) is running...
#####

-Prasanth

Comment 5 Prasanth 2015-06-12 06:56:41 UTC
Anand,

If you are sure that this BZ is also fixed by the patch provided in Bug 1224639, you may have to move this BZ to ON_QA with the FIV for further QE verification.

Thanks,
Prasanth

Comment 6 Anand Nekkunti 2015-06-12 07:12:28 UTC
Yes ... this patch https://code.engineering.redhat.com/gerrit/#/c/49604/ fixes this bug. The fix is available in the next build.

Comment 8 Prasanth 2015-06-17 05:40:27 UTC
Verified as fixed in glusterfs-3.7.1-3.el6rhs.x86_64

Comment 9 errata-xmlrpc 2015-07-29 04:41:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

