Bug 1330279 - [RH Ceph 1.3.2Async / 0.94.5-12.el7cp] Few selinux denials from ceph-mon / ceph-osd
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Build
Version: 1.3.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 1.3.3
Assignee: Boris Ranto
QA Contact: Vasu Kulkarni
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Depends On:
Blocks: 1372735
 
Reported: 2016-04-25 18:56 UTC by Vasu Kulkarni
Modified: 2018-04-15 23:55 UTC
CC List: 6 users

Fixed In Version: ceph-0.94.7-2.el7cp
Doc Type: Bug Fix
Doc Text:
.SELinux no longer prevents "ceph-mon" and "ceph-osd" from accessing /var/lock/ and /run/lock/
Due to insufficient SELinux policy rules, SELinux denied the `ceph-mon` and `ceph-osd` daemons access to files in the `/var/lock/` and `/run/lock/` directories. With this update, SELinux no longer prevents `ceph-mon` and `ceph-osd` from accessing `/var/lock/` and `/run/lock/`.
Clone Of:
: 1333398
Environment:
Last Closed: 2016-09-29 12:57:45 UTC
Target Upstream Version:


Attachments


Links
System ID Priority Status Summary Last Updated
Red Hat Bugzilla 1360444 None None None Never
Red Hat Product Errata RHSA-2016:1972 normal SHIPPED_LIVE Moderate: Red Hat Ceph Storage 1.3.3 security, bug fix, and enhancement update 2016-09-29 16:51:21 UTC

Internal Links: 1360444

Description Vasu Kulkarni 2016-04-25 18:56:28 UTC
Description of problem:

Not sure if this is major, but I remember seeing these denials during 1.3.2 as well:

  duration: 1057.6275560855865, failure_reason: 'SELinux denials found on ubuntu@magna089.ceph.redhat.com:
    [''type=AVC msg=audit(1461497097.439:3917): avc:  denied  { create } for  pid=22227
    comm="ceph-osd" name="ceph-osd.2.asok" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0
    tclass=sock_file'', ''type=AVC msg=audit(1461497049.218:3859): avc:  denied  {
    create } for  pid=17997 comm="ceph-osd" name="ceph-osd.0.asok" scontext=system_u:system_r:ceph_t:s0
    tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file'', ''type=AVC msg=audit(1461497371.269:3965):
    avc:  denied  { unlink } for  pid=15855 comm="ceph-mon" name="mon.magna089.pid"
    dev="tmpfs" ino=69298 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0
    tclass=file'', ''type=AVC msg=audit(1461497014.884:3763): avc:  denied  { create
    } for  pid=15855 comm="ceph-mon" name="ceph-mon.magna089.asok" scontext=system_u:system_r:ceph_t:s0
    tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file'', ''type=AVC msg=audit(1461497371.269:3964):
    avc:  denied  { read } for  pid=15855 comm="ceph-mon" name="mon.magna089.pid"
    dev="tmpfs" ino=69298 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0
    tclass=file'', ''type=AVC msg=audit(1461497371.266:3963): avc:  denied  { unlink
    } for  pid=15855 comm="ceph-mon" name="ceph-mon.magna089.asok" dev="tmpfs" ino=69301
    scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file'',
    ''type=AVC msg=audit(1461497073.364:3885): avc:  denied  { create } for  pid=19985
    comm="ceph-osd" name="ceph-osd.1.asok" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0
    tclass=sock_file'']', owner: scheduled_vasu@magna002, status: fail, success: false}
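For readability, the key fields of one of the AVC records above can be pulled out with standard text tools. This is purely illustrative (any denial line from the log works the same way); the interesting parts are the source context (the daemon's domain, `ceph_t`) and the target context (the mislabeled socket file, `var_run_t`):

```shell
# One of the AVC denial records from the teuthology log above:
avc='type=AVC msg=audit(1461497097.439:3917): avc:  denied  { create } for  pid=22227 comm="ceph-osd" name="ceph-osd.2.asok" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file'

# Source context -- the domain of the process that was denied:
printf '%s\n' "$avc" | sed -n 's/.*scontext=\([^ ]*\).*/\1/p'
# -> system_u:system_r:ceph_t:s0

# Target context -- the label on the object being accessed:
printf '%s\n' "$avc" | sed -n 's/.*tcontext=\([^ ]*\).*/\1/p'
# -> system_u:object_r:var_run_t:s0
```

The mismatch is visible at a glance: the confined `ceph_t` domain is touching files labeled with the generic `var_run_t` type instead of a ceph-specific one.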

Version-Release number of selected component (if applicable):
ceph version 0.94.5-12.el7cp (b08a982b961058eae6ee7c6a0efd2666d0bb4b1a)

How reproducible:
1/1


Expected results:
no denials from ceph-mon / ceph-osd

Additional info:

http://magna002.ceph.redhat.com/vasu-2016-04-23_23:09:33-smoke-hammer---basic-magna/229125/teuthology.log

Comment 2 Boris Ranto 2016-04-26 12:30:00 UTC
If I did not miss any of the denials, they all refer to .pid or .asok files. We have recently fixed this upstream with:

https://github.com/ceph/ceph/commit/5cd4ce517c2b1c930785f614cbeff661d7ca2624

We should backport that patch downstream to fix it (in 1.3.z, this is a simple packaging change). We should also nominate this for 1.3.z, and probably even the async update, since AFAICR it made rbd-backed VMs misbehave.

btw: There is no need to nominate this for ceph-2; it already has that fix.

Comment 3 RHEL Program Management 2016-04-28 03:15:28 UTC
Product Management has reviewed and declined this request.
You may appeal this decision by reopening this request.

Comment 4 Boris Ranto 2016-04-29 05:49:39 UTC
Can I get the output of

* ls -ldZ /var/run/ceph
* ls -lZ /var/run/ceph/

ideally while the ceph mon/osd daemons are running?

Comment 5 Harish NV Rao 2016-04-29 06:11:46 UTC
Need clarity. Will this bug be fixed in 1.3.2async?

Comment 7 Boris Ranto 2016-04-29 07:29:09 UTC
@Vasu: Any details on the underlying distro? I think I can see where the problem lies. Starting with RHEL 7.3, the 'semodule -l' command -- which we use to detect the ceph policy version -- no longer outputs the versions of the policy modules. This means that on 7.3 code base, we will never relabel the ceph directories and files so /var/run/ceph ends up using var_run_t instead of ceph_var_run_t context. It should work fine on 7.2 code base, though.
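To illustrate the failure mode described above: the detection logic (this is a hypothetical sketch, not the actual %post scriptlet; the function and variable names are my own) parses the second column of `semodule -l`. On a RHEL 7.2 code base that column holds the module version, but on 7.3 `semodule -l` prints only the module name, so the version check can never match and the relabel is never triggered:

```shell
# Hypothetical sketch of the version-detection pattern. installed_ver() takes
# simulated 'semodule -l' output and extracts the version column for "ceph".
installed_ver() {
    printf '%s\n' "$1" | awk '$1 == "ceph" { print $2 }'
}

rhel72_output='ceph	1.1.1'   # 7.2: name + version
rhel73_output='ceph'          # 7.3: name only, no version column

installed_ver "$rhel72_output"   # -> 1.1.1
installed_ver "$rhel73_output"   # -> empty: a "version changed?" test never fires
```

With an empty version string, a scriptlet comparing the installed version against the packaged one concludes nothing changed, so /var/run/ceph keeps its inherited var_run_t label.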

Comment 8 Vasu Kulkarni 2016-05-03 04:26:14 UTC
Boris,

Below is the info you need

[ubuntu@clara012 ~]$ ls -ldZ /var/run/ceph/
drwxr-xr-x. root root unconfined_u:object_r:var_run_t:s0 /var/run/ceph/
[ubuntu@clara012 ~]$ ls -lZ /var/run/ceph/
srwxr-xr-x. root root system_u:object_r:var_run_t:s0   ceph-mon.clara012.asok
srwxr-xr-x. root root system_u:object_r:var_run_t:s0   ceph-osd.3.asok
srwxr-xr-x. root root system_u:object_r:var_run_t:s0   ceph-osd.4.asok
srwxr-xr-x. root root system_u:object_r:var_run_t:s0   ceph-osd.5.asok
-rw-r--r--. root root system_u:object_r:var_run_t:s0   mon.clara012.pid
-rw-r--r--. root root system_u:object_r:var_run_t:s0   osd.3.pid
-rw-r--r--. root root system_u:object_r:var_run_t:s0   osd.4.pid
-rw-r--r--. root root system_u:object_r:var_run_t:s0   osd.5.pid

[ubuntu@clara012 ~]$ ls -ldZ /var/lib/ceph/
drwxr-xr-x. root root system_u:object_r:ceph_var_lib_t:s0 /var/lib/ceph/

[ubuntu@clara012 ~]$ ls -lZ /var/lib/ceph/
drwxr-xr-x. root root unconfined_u:object_r:ceph_var_lib_t:s0 bootstrap-mds
drwxr-xr-x. root root system_u:object_r:ceph_var_lib_t:s0 bootstrap-osd
drwxr-xr-x. root root unconfined_u:object_r:ceph_var_lib_t:s0 bootstrap-rgw
drwxr-xr-x. root root system_u:object_r:ceph_var_lib_t:s0 mon
drwxr-xr-x. root root system_u:object_r:ceph_var_lib_t:s0 osd
drwxr-xr-x. root root system_u:object_r:ceph_var_lib_t:s0 tmp

[ubuntu@clara012 ~]$ ls -ldZ /etc/ceph/
drwxr-xr-x. root root system_u:object_r:etc_t:s0       /etc/ceph/
[ubuntu@clara012 ~]$ ls -lZ /etc/ceph/
-rw-------. root root unconfined_u:object_r:etc_runtime_t:s0 ceph.client.admin.keyring
-rw-r--r--. root root unconfined_u:object_r:etc_t:s0   ceph.conf
-rwxr-xr-x. root root system_u:object_r:etc_t:s0       rbdmap
-rw-------. root root unconfined_u:object_r:etc_t:s0   tmpGbFlud

[ubuntu@clara012 ~]$ ls -ldZ /var/log/ceph/
drwxr-xr-x. root root system_u:object_r:ceph_log_t:s0  /var/log/ceph/
[ubuntu@clara012 ~]$ ls -lZ /var/log/ceph/
-rw-------. root root unconfined_u:object_r:ceph_log_t:s0 ceph.audit.log.gz
-rw-r--r--. root root unconfined_u:object_r:ceph_log_t:s0 ceph-client.admin.15502.log.gz
-rw-r--r--. root root unconfined_u:object_r:ceph_log_t:s0 ceph-client.admin.15505.log.gz
-rw-r--r--. root root unconfined_u:object_r:ceph_log_t:s0 ceph-client.admin.15508.log.gz
-rw-r--r--. root root unconfined_u:object_r:ceph_log_t:s0 ceph-client.admin.15758.log.gz
-rw-r--r--. root root unconfined_u:object_r:ceph_log_t:s0 ceph-client.admin.15789.log.gz
-rw-r--r--. root root unconfined_u:object_r:ceph_log_t:s0 ceph-client.admin.15820.log.gz
-rw-r--r--. root root unconfined_u:object_r:ceph_log_t:s0 ceph-client.admin.16737.log.gz

Comment 9 Boris Ranto 2016-05-03 09:23:57 UTC
@Vasu: I also need to know what the underlying distro is. The output of these few commands

* uname -a
* cat /etc/redhat-release
* rpm -q kernel
* rpm -q selinux-policy-targeted
* semodule -l | grep ceph # here, I need to know whether there is a version string after 'ceph'; ceph-selinux must be installed when you run the command. If it is not, run 'semodule -l | tail' -- I'm looking for the version string there as well.

should tell me whether you are hitting the same issue I am. If so, we should make sure we fix it before RHEL 7.3 is released.

Comment 10 Vasu Kulkarni 2016-05-03 16:42:22 UTC
Boris,

Sorry, missed that. I have no systems on 7.3 yet, mostly 7.1 and 7.2, but I believe selinux-policy-targeted is updated to the latest advisory due to another SELinux issue we hit recently.


[ubuntu@clara012 ~]$ uname -a
Linux clara012 3.10.0-327.el7.x86_64 #1 SMP Thu Oct 29 17:29:29 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux
[ubuntu@clara012 ~]$ cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.2 (Maipo)
[ubuntu@clara012 ~]$ rpm -q kernel
kernel-3.10.0-327.el7.x86_64
[ubuntu@clara012 ~]$ rpm -q selinux-policy-targeted
selinux-policy-targeted-3.13.1-60.el7_2.3.noarch
[ubuntu@clara012 ~]$ semodule -l
semodule: SELinux policy is not managed or store cannot be accessed.
[ubuntu@clara012 ~]$ semodule -l | tail
semodule: SELinux policy is not managed or store cannot be accessed.
[ubuntu@clara012 ~]$ sestatus 
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          permissive
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      28

Comment 11 Boris Ranto 2016-05-03 17:07:36 UTC
The semodule command needs to be run as root (or with sudo) so that it can access the store. Can you please attach the output when run as root?

Comment 12 Vasu Kulkarni 2016-05-03 17:40:40 UTC
Here is the info, installed from 1.3.2 Async

[ubuntu@clara012 yum.repos.d]$ sudo semodule -l | tail
wine	1.11.0	
wireshark	2.4.0	
xen	1.13.0	
xguest	1.2.0	
xserver	3.9.4	
zabbix	1.6.0	
zarafa	1.2.0	
zebra	1.13.0	
zoneminder	1.0.0	
zosremote	1.2.0	
[ubuntu@clara012 yum.repos.d]$ sudo semodule -l | grep ceph
ceph	1.1.1

Comment 13 Boris Ranto 2016-05-04 07:42:56 UTC
Hmm, ok, this is a different issue than the one I was referring to. However, I think I can reproduce it. It looks like it is caused by a combination of

mkdir -p /var/run/ceph

in the %post script and ghosting of the directory in the file list. The mkdir call is unconfined and therefore sets an incorrect context for the directory (files created by unconfined processes inherit the context of the parent directory). We should do what upstream does in this case -- use systemd-tmpfiles to create the files. The systemd binary is confined and therefore sets the correct SELinux context for the directory. I'd nominate the fix for the (next) async update along with the other issue I was hitting that breaks SELinux compatibility on 7.3.
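The systemd-tmpfiles approach could look roughly like this (an illustrative sketch only; the exact path, mode, and ownership are assumptions on my part -- check the linked downstream patch for the real entry):

```
# /usr/lib/tmpfiles.d/ceph.conf (illustrative)
# 'd' creates the directory at boot if it is missing. systemd-tmpfiles is
# confined, so the directory is labeled per the loaded SELinux policy
# (ceph_var_run_t) instead of inheriting var_run_t from /run.
d /run/ceph 0755 root root -
```

On package install the directory would then be created with `systemd-tmpfiles --create ceph.conf` rather than a bare `mkdir -p` in %post, which is exactly what avoids the unconfined-mkdir labeling problem.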


This downstream patch should help, here:

http://pkgs.devel.redhat.com/cgit/rpms/ceph/commit/?h=private-branto-1330279&id=1ec4b76d0f5964fc14478d3a4843d87ae4ae99d6


This upstream patch should be back-ported to fix the contexts on RHEL 7.3+:

https://github.com/ceph/ceph/pull/8923

Comment 14 Vasu Kulkarni 2016-07-26 18:39:59 UTC
Boris,

Do you want to backport this to 1.3.2 Async?

Comment 22 errata-xmlrpc 2016-09-29 12:57:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-1972.html

