Bug 1222869 - [SELinux] [BVT]: Selinux throws AVC errors while running DHT automation on Rhel6.6
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: packaging
Version: 3.7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Anand Nekkunti
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1210404 1212796 glusterfs-3.7.1 1223185 1228109
 
Reported: 2015-05-19 10:35 UTC by Anand Nekkunti
Modified: 2016-01-04 04:50 UTC
15 users

Fixed In Version: glusterfs-3.7.1
Doc Type: Bug Fix
Doc Text:
Clone Of: 1210404
Clones: 1223185
Environment:
Last Closed: 2015-06-02 08:03:00 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:


Attachments

Description Anand Nekkunti 2015-05-19 10:35:38 UTC
+++ This bug was initially created as a clone of Bug #1210404 +++

Description of problem:
SELinux throws AVC errors while running DHT automated test cases

Info: Searching AVC errors produced since 1428537291.96 (Thu Apr  9 05:24:51 2015)
Searching logs...
Running '/usr/bin/env LC_ALL=en_US.UTF-8 /sbin/ausearch -m AVC -m USER_AVC -m SELINUX_ERR -ts 04/09/2015 05:24:51 < /dev/null >/mnt/testarea/tmp.rhts-db-submit-result.03LjWL 2>&1'
----
time->Thu Apr  9 05:48:02 2015
type=SYSCALL msg=audit(1428538682.822:73): arch=c000003e syscall=42 success=no exit=-111 a0=c a1=7fff14694310 a2=6e a3=7fb7a2c45673 items=0 ppid=22549 pid=22550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=unconfined_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1428538682.822:73): avc:  denied  { write } for  pid=22550 comm="glusterd" name="glusterd.socket" dev=dm-0 ino=924731 scontext=unconfined_u:system_r:glusterd_t:s0 tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
----
time->Thu Apr  9 05:48:02 2015
type=SYSCALL msg=audit(1428538682.823:74): arch=c000003e syscall=87 success=yes exit=0 a0=7fff14694312 a1=7fff14694310 a2=6f a3=7fb7a2c45673 items=0 ppid=22549 pid=22550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=unconfined_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1428538682.823:74): avc:  denied  { unlink } for  pid=22550 comm="glusterd" name="glusterd.socket" dev=dm-0 ino=924731 scontext=unconfined_u:system_r:glusterd_t:s0 tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
Fail: AVC messages found.

Version-Release number of selected component (if applicable):
Upstream glusterfs-3.7 on a RHEL 7.1 server

How reproducible:
Always, when running BVT with upstream glusterfs-3.7

Steps to Reproduce:
1. Not a manual process
2. Run DHT BVT on a RHEL 7.1 server with upstream glusterfs-3.7 packages

Actual results:
SELinux AVC errors found

Expected results:
No AVC errors


Additional info:
The BVT test result link, that has all the avc logs:
https://beaker.engineering.redhat.com/jobs/925560

--- Additional comment from Apeksha on 2015-04-10 02:20:39 EDT ---

sosreports and avc logs attached to following location:

http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1210404/

--- Additional comment from Niels de Vos on 2015-04-14 08:28:49 EDT ---

Apeksha, could you please write a public comment about this problem? This is a Gluster Community bug and members of the community cannot see any details here.

Thanks!

--- Additional comment from Apeksha on 2015-04-15 02:05:41 EDT ---

Description of problem:
SELinux throws AVC errors while running DHT automated test cases

Info: Searching AVC errors produced since 1428537291.96 (Thu Apr  9 05:24:51 2015)
Searching logs...
Running '/usr/bin/env LC_ALL=en_US.UTF-8 /sbin/ausearch -m AVC -m USER_AVC -m SELINUX_ERR -ts 04/09/2015 05:24:51 < /dev/null >/mnt/testarea/tmp.rhts-db-submit-result.03LjWL 2>&1'
----
time->Thu Apr  9 05:48:02 2015
type=SYSCALL msg=audit(1428538682.822:73): arch=c000003e syscall=42 success=no exit=-111 a0=c a1=7fff14694310 a2=6e a3=7fb7a2c45673 items=0 ppid=22549 pid=22550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=unconfined_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1428538682.822:73): avc:  denied  { write } for  pid=22550 comm="glusterd" name="glusterd.socket" dev=dm-0 ino=924731 scontext=unconfined_u:system_r:glusterd_t:s0 tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
----
time->Thu Apr  9 05:48:02 2015
type=SYSCALL msg=audit(1428538682.823:74): arch=c000003e syscall=87 success=yes exit=0 a0=7fff14694312 a1=7fff14694310 a2=6f a3=7fb7a2c45673 items=0 ppid=22549 pid=22550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd" subj=unconfined_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1428538682.823:74): avc:  denied  { unlink } for  pid=22550 comm="glusterd" name="glusterd.socket" dev=dm-0 ino=924731 scontext=unconfined_u:system_r:glusterd_t:s0 tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
Fail: AVC messages found.

Version-Release number of selected component (if applicable):
Upstream glusterfs-3.7 on a RHEL 7.1 server

How reproducible:
Always, when running BVT with upstream glusterfs-3.7

Steps to Reproduce:
1. Not a manual process
2. Run DHT BVT on a RHEL 7.1 server with upstream glusterfs-3.7 packages

Actual results:
SELinux AVC errors found

Expected results:
No AVC errors

--- Additional comment from Niels de Vos on 2015-04-15 12:02:09 EDT ---

This shows that glusterd is deleting a glusterd.socket which does not have the right SELinux context:

avc:  denied  { unlink } for  pid=22550 comm="glusterd" name="glusterd.socket"
    scontext=unconfined_u:system_r:glusterd_t:s0 
    tcontext=unconfined_u:object_r:var_run_t:s0

My guess is that glusterd creates a /var/run/glusterd.sock socket (from the rpm scriptlet, like bug 1162125?), and that deleting that socket fails. This might be a selinux-policy issue, but maybe the path of the socket changed upstream?

--- Additional comment from Stanislav Graf on 2015-04-20 09:16:22 EDT ---

CCing mmalik, mgrepl and lvrabec

--- Additional comment from Milos Malik on 2015-04-20 11:30:24 EDT ---

Does the following command help?

# restorecon -Rv /var/run/gluster*

The correct label for the socket is:

# matchpathcon /var/run/glusterd.sock
/var/run/glusterd.sock	system_u:object_r:glusterd_var_run_t:s0
#

--- Additional comment from Stanislav Graf on 2015-04-21 15:16:21 EDT ---

Apeksha, can you check? Feel free to ping me on IRC to coordinate if needed.

--- Additional comment from Stanislav Graf on 2015-04-27 16:20:55 EDT ---



--- Additional comment from Stanislav Graf on 2015-04-27 16:22:13 EDT ---



--- Additional comment from Stanislav Graf on 2015-04-27 16:25:29 EDT ---

(In reply to Milos Malik from comment #6)

Original job
------------
https://beaker.engineering.redhat.com/jobs/925560
AVCs in attachment 1019427 [details]

Job with restorecon
-------------------
https://beaker.engineering.redhat.com/jobs/939894
AVCs in attachment 1019428 [details]

I'll ping you tomorrow to sync-up.

--- Additional comment from Stanislav Graf on 2015-04-29 07:04:05 EDT ---

(In reply to Stanislav Graf from comment #10)

Rerunning BVT with some changes, will gather AVCs once it's done and post results here.

--- Additional comment from Stanislav Graf on 2015-04-30 04:15:20 EDT ---



--- Additional comment from Stanislav Graf on 2015-04-30 04:18:08 EDT ---

(In reply to Stanislav Graf from comment #10)

https://beaker.engineering.redhat.com/jobs/943702

After installation we called:
# semanage fcontext -a -f '' -t bin_t '/var/lib/glusterd/hooks/(.*)/(.*)/(.*)/.*\.sh'
# restorecon -Rv /var/run/gluster* (comment 6)
# restorecon -Rv /var/lib/glusterd

AVCs in attachment 1020434 [details]

--- Additional comment from Stanislav Graf on 2015-05-03 12:29:28 EDT ---



--- Additional comment from Stanislav Graf on 2015-05-03 12:30:40 EDT ---

(In reply to Stanislav Graf from comment #13)

function selinux_workaround
{
    yum list installed policycoreutils-python || yum -y install policycoreutils-python

    ls -Z /var/run/gluster*
    ls -Z /var/lib/glusterd

cat > mypolicy.te <<_EOPOLICY_
policy_module(mypolicy, 1.0)
require {
  type glusterd_t;
  class capability { mknod sys_ptrace };
}
corenet_tcp_connect_portmap_port(glusterd_t)
files_manage_isid_type_blk_files(glusterd_t)
files_manage_isid_type_chr_files(glusterd_t)
samba_domtrans_smbd(glusterd_t)
samba_signal_smbd(glusterd_t)
allow glusterd_t glusterd_t:capability { mknod sys_ptrace };
fstools_domtrans(glusterd_t)
_EOPOLICY_

    make -f /usr/share/selinux/devel/Makefile
    semodule -i mypolicy.pp

    semanage fcontext -a -f '' -t bin_t '/var/lib/glusterd/hooks/(.*)/(.*)/(.*)/.*\.sh'

    restorecon -Rv /var/run/gluster*
    restorecon -Rv /var/lib/glusterd
    chcon -t fsadm_exec_t /usr/sbin/xfs_growfs

    matchpathcon /var/run/glusterd.sock
}

https://beaker.engineering.redhat.com/jobs/945703
AVCs in attachment 1021372 [details]

--- Additional comment from Miroslav Grepl on 2015-05-15 05:11:27 EDT ---

We really need to run

restorecon /var/run/glusterd.sock

where this socket is created for the first time. An rpm scriptlet?

We are not able to get it working on RHEL-6 without the filename transition rule which we have in RHEL-7.

--- Additional comment from Kaushal on 2015-05-18 03:27:47 EDT ---

(In reply to Miroslav Grepl from comment #16)
> We really need to run
> 
> restorecon /var/run/glusterd.sock
> 
> where this socket is created for the first time. An rpm scriptlet?
> 
> We are not able to get it working on RHEL-6 without the filename transition
> rule which we have in RHEL-7.

GlusterD itself creates this file if it doesn't exist. glusterd will be run as part of the rpm post-upgrade, so the file would get created then. Would this be useful?

I was under the impression that if a path had a context defined in a loaded policy, the kernel would ensure that the context was applied when the file was created. From what I understand, the policy defines the context for a regex path under /var/run/gluster*, but RHEL-6.6 doesn't handle this correctly. Did I understand that correctly? If so, wouldn't it be enough to add an entry for the exact path to the policy?

--- Additional comment from Milos Malik on 2015-05-18 03:44:17 EDT ---

The policy says that any file, directory or socket created under /var/run by any process running as glusterd_t will get glusterd_var_run_t label.

# sesearch -s glusterd_t -t var_run_t -T
Found 3 semantic te rules:
   type_transition glusterd_t var_run_t : file glusterd_var_run_t; 
   type_transition glusterd_t var_run_t : dir glusterd_var_run_t; 
   type_transition glusterd_t var_run_t : sock_file glusterd_var_run_t; 

#

But glusterd runs as rpm_script_t when it's executed from the rpm scriptlet.

# sesearch -s rpm_script_t -t glusterd_exec_t -T

# sesearch -s rpm_t -t glusterd_exec_t -T

# 

You need to run restorecon or setfiles to apply a label based on fcontext patterns.

# semanage fcontext -l | grep /var/run/gluster
/var/run/gluster(/.*)?                             all files          system_u:object_r:glusterd_var_run_t:s0 
/var/run/glusterd.*                                regular file       system_u:object_r:glusterd_var_run_t:s0 
/var/run/glusterd.*                                socket             system_u:object_r:glusterd_var_run_t:s0 
#
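
The pattern-based labeling described above can be illustrated without an SELinux machine. The sketch below uses grep as a stand-in for the fcontext regex matcher (illustration only; the real matching is done by libselinux/restorecon, and the paths are made-up examples):

```shell
# Illustration only: grep stands in for the fcontext matcher.
# Paths matching /var/run/glusterd.* would be relabeled to
# glusterd_var_run_t by restorecon; rpcbind.sock would not match.
printf '%s\n' \
    /var/run/glusterd.socket \
    /var/run/glusterd.pid \
    /var/run/rpcbind.sock \
  | grep -E '^/var/run/glusterd.*'
```

Only the two glusterd paths are printed, which is why a blanket restorecon on /var/run/gluster* is safe for unrelated sockets.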

--- Additional comment from Kaushal on 2015-05-18 06:10:44 EDT ---

So running a restorecon on /var/run/gluster* at the end of the post upgrade scriptlet will solve this?

Also, just for my understanding, could you explain why this is a problem with RHEL-6 only and not RHEL-7?

--- Additional comment from Milos Malik on 2015-05-18 06:45:15 EDT ---

Yes.

We can specify a filename inside transition rules in RHEL-7:

# sesearch -s rpm_script_t -t var_run_t -c sock_file -T

Found 3 named file transition filename_trans:
type_transition rpm_script_t var_run_t : sock_file glusterd_var_run_t "glusterd.socket"; 
type_transition rpm_script_t var_run_t : sock_file rpcbind_var_run_t "rpcbind.sock"; 
type_transition rpm_script_t var_run_t : sock_file docker_var_run_t "docker.sock"; 

#

But we cannot specify a filename inside transition rules in RHEL-6. If there were a transition rule like the following in RHEL-6, all sockets in /var/run created by any RPM scriptlet would have been labeled glusterd_var_run_t, which is wrong, because there are various socket files inside /var/run which have no relation to gluster.

   type_transition rpm_script_t var_run_t : sock_file glusterd_var_run_t;
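
For contrast, the RHEL-7-style named file transition could be written in a refpolicy-style .te module roughly as follows (a sketch, not the actual shipped policy):

```
# Sketch of a RHEL-7-style named file transition (refpolicy interface):
# only the sock_file literally named "glusterd.socket" transitions to
# glusterd_var_run_t, avoiding the over-broad rule described above.
# This named form is not expressible in RHEL-6 policy.
filetrans_pattern(rpm_script_t, var_run_t, glusterd_var_run_t, sock_file, "glusterd.socket")
```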

--- Additional comment from Anand Nekkunti on 2015-05-19 02:59:50 EDT ---

(In reply to Milos Malik from comment #20)
> Yes.
> 
> We can specify a filename inside transition rules in RHEL-7:
> 
> # sesearch -s rpm_script_t -t var_run_t -c sock_file -T
> 
> Found 3 named file transition filename_trans:
> type_transition rpm_script_t var_run_t : sock_file glusterd_var_run_t
> "glusterd.socket"; 
> type_transition rpm_script_t var_run_t : sock_file rpcbind_var_run_t
> "rpcbind.sock"; 
> type_transition rpm_script_t var_run_t : sock_file docker_var_run_t
> "docker.sock"; 
> 
> #
> 
> But we cannot specify a filename inside transition rules in RHEL-6. If there
> were a transition rule like the following in RHEL-6, all sockets in /var/run
> created by any RPM scriptlet would have been labeled glusterd_var_run_t,
> which is wrong, because there are various socket files inside /var/run which
> have no relation to gluster.
> 
>    type_transition rpm_script_t var_run_t : sock_file glusterd_var_run_t;


I think the restorecon command will fail if I run it during the post-upgrade scriptlet, because the glusterd.socket file does not exist at that time; it is created later by glusterd.

Can I create the glusterd.socket file and run restorecon during the post-upgrade for RHEL-6? Is that correct?

--- Additional comment from Anand Nekkunti on 2015-05-19 05:51:25 EDT ---

I am running the restorecon -vR /var/run/glusterd* command during the glusterfs post-upgrade; will that solve this problem?
I have sent a patch for that: http://review.gluster.org/#/c/10815/1/glusterfs.spec.in

Comment 1 Anand Avati 2015-05-19 10:37:25 UTC
REVIEW: http://review.gluster.org/10815 (Build: Restoring selinux context for rhel6 during post run) posted (#3) for review on master by Anand Nekkunti (anekkunt@redhat.com)

Comment 2 Anand Avati 2015-05-31 09:12:06 UTC
REVIEW: http://review.gluster.org/10996 (Build: glusterd socket file cleanup to set SElinux context properly.) posted (#3) for review on release-3.7 by Atin Mukherjee (amukherj@redhat.com)

Comment 3 Anand Avati 2015-06-01 05:40:19 UTC
REVIEW: http://review.gluster.org/10996 (Build: glusterd socket file cleanup to set SElinux context properly.) posted (#4) for review on release-3.7 by Anand Nekkunti (anekkunt@redhat.com)

Comment 4 Anand Avati 2015-06-01 08:21:52 UTC
REVIEW: http://review.gluster.org/10996 (Build: glusterd socket file cleanup to set SElinux context properly.) posted (#5) for review on release-3.7 by Anand Nekkunti (anekkunt@redhat.com)

Comment 5 Anand Avati 2015-06-01 08:25:53 UTC
COMMIT: http://review.gluster.org/10996 committed in release-3.7 by Niels de Vos (ndevos@redhat.com) 
------
commit 4fda438a0aa55e2cb0412f8c800d01f81fe4bbbe
Author: anand <anekkunt@redhat.com>
Date:   Fri May 29 13:57:00 2015 +0530

    Build: glusterd socket file cleanup to set SElinux context properly.
    
    Issue: glusterd runs as rpm_script_t when it is executed from the rpm scriptlet; a socket file
    created in this context gets the rpm_script_t type, and glusterd is unable to access the socket
    file when it runs in the glusterd_t context (glusterd does not clean up the socket file while
    stopping, due to some cleanup issues, so cleanup is required on rpm install).
    
    Fix: In the rpm post-upgrade, remove the glusterd.socket file which was created by glusterd in
    the rpm context, so that glusterd recreates the socket file when it runs in the glusterd_t context.
    
    Backport of:
    >Change-Id: I57041e2e0f9d3e74e12198507a9bd47d09f4a132
    >BUG: 1223185
    >Signed-off-by: anand <anekkunt@redhat.com>
    >Reviewed-on: http://review.gluster.org/10815
    >Reviewed-by: Niels de Vos <ndevos@redhat.com>
    >Tested-by: NetBSD Build System
    >Tested-by: Gluster Build System <jenkins@build.gluster.com>
    
    Change-Id: Ic97458f41e1184054cbedbeb547118b217768bbd
    BUG: 1222869
    Signed-off-by: anand <anekkunt@redhat.com>
    Reviewed-on: http://review.gluster.org/10996
    Reviewed-by: Niels de Vos <ndevos@redhat.com>
    Tested-by: Niels de Vos <ndevos@redhat.com>
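
The fix described in the commit message amounts to a small scriptlet change; a hedged sketch of the idea follows (the authoritative change is in the linked review, and the exact spec-file wording here is illustrative, not the committed text):

```
# Sketch only; see http://review.gluster.org/10996 for the actual change.
# In the %post scriptlet: remove the socket that was created while glusterd
# ran as rpm_script_t, so that glusterd recreates it under glusterd_t and
# the new sock_file receives the glusterd_var_run_t label via type_transition.
rm -f /var/run/glusterd.socket
```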

Comment 6 Niels de Vos 2015-06-02 08:03:00 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.1, please reopen this bug report.

glusterfs-3.7.1 has been announced on the Gluster Packaging mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.packaging/1
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

