Bug 1337275 - [RH Ceph 2.0 / 10.2.1-3.el7cp ] ceph asok denials
Summary: [RH Ceph 2.0 / 10.2.1-3.el7cp ] ceph asok denials
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Build
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 2.0
Assignee: Boris Ranto
QA Contact: Vasu Kulkarni
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-18 16:53 UTC by Vasu Kulkarni
Modified: 2022-02-21 18:03 UTC (History)
CC List: 3 users

Fixed In Version: ceph-10.2.1-7.el7cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-23 19:39:04 UTC
Embargoed:


Attachments


Links
Red Hat Product Errata RHBA-2016:1755 (normal, SHIPPED_LIVE): Red Hat Ceph Storage 2.0 bug fix and enhancement update, last updated 2016-08-23 23:23:52 UTC

Description Vasu Kulkarni 2016-05-18 16:53:07 UTC
Description of problem:

Set up the cluster with ansible (1.0.5-13.el7scon) and ceph version 10.2.1-1; the tests run via teuthology. While running various tests I see the following denials:

'type=AVC msg=audit(1463545601.654:7116): avc: denied { create } for pid=13665 comm="ceph-mon" name="ceph-mon.clara011.asok" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file',

 'type=AVC msg=audit(1463545607.290:7318): avc: denied { unlink } for pid=13665 comm="ceph-mon" name="ceph-mon.clara011.asok" dev="tmpfs" ino=154819 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file', 

'type=AVC msg=audit(1463547182.460:7422): avc: denied { unlink } for pid=14944 comm="ceph-mon" name="ceph-mon.clara011.asok" dev="tmpfs" ino=160011 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file'

'type=AVC msg=audit(1463546782.258:4662): avc: denied { create } for pid=19212 comm="ceph-osd" name="ceph-osd.5.asok" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file', 

'type=AVC msg=audit(1463546784.124:4680): avc: denied { create } for pid=19409 comm="ceph-osd" name="ceph-osd.5.asok" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file', 

'type=AVC msg=audit(1463547033.180:5018): avc: denied { unlink } for pid=19409 comm="ceph-osd" name="ceph-osd.5.asok" dev="tmpfs" ino=106331 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file', 

'type=AVC msg=audit(1463546583.074:4585): avc: denied { unlink } for pid=18204 comm="ceph-osd" name="ceph-osd.3.asok" dev="tmpfs" ino=101137 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file', 

'type=AVC msg=audit(1463546781.429:4654): avc: denied { unlink } for pid=19128 comm="ceph-osd" name="ceph-osd.5.asok" dev="tmpfs" ino=104264 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file', 

'type=AVC msg=audit(1463546583.069:4584): avc: denied { create } for pid=18204 comm="ceph-osd" name="ceph-osd.3.asok" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file', 

'type=AVC msg=audit(1463546087.361:4488): avc: denied { unlink } for pid=17009 comm="ceph-osd" name="ceph-osd.1.asok" dev="tmpfs" ino=97859 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file',

 'type=AVC msg=audit(1463546585.911:4608): avc: denied { create } for pid=18473 comm="ceph-osd" name="ceph-osd.3.asok" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file', 

'type=AVC msg=audit(1463546781.424:4653): avc: denied { create } for pid=19128 comm="ceph-osd" name="ceph-osd.5.asok" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file', 

'type=AVC msg=audit(1463546087.355:4487): avc: denied { create } for pid=17009 comm="ceph-osd" name="ceph-osd.1.asok" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file', 

'type=AVC msg=audit(1463546090.121:4510): avc: denied { create } for pid=17269 comm="ceph-osd" name="ceph-osd.1.asok" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file', 

'type=AVC msg=audit(1463546782.271:4663): avc: denied { unlink } for pid=19212 comm="ceph-osd" name="ceph-osd.5.asok" dev="tmpfs" ino=108040 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file'

'type=AVC msg=audit(1463549696.603:3098): avc: denied { create } for pid=14300 comm="ceph-mds" name="ceph-mds.clara006.asok" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file', 

'type=AVC msg=audit(1463550659.354:3137): avc: denied { unlink } for pid=14300 comm="ceph-mds" name="ceph-mds.clara006.asok" dev="tmpfs" ino=69474 scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file'

more logs:
http://pulpito.ceph.redhat.com/vasu-2016-05-18_00:15:32-smoke:ceph-ansible-jewel---basic-clara/
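(For reference, these records can be pulled straight from the audit log on an affected node with the standard auditd tooling; this is only a diagnostic sketch, not part of any test:)

    # show today's AVC denials and keep only the ceph daemons
    ausearch -m avc -ts today | grep 'comm="ceph-'
    # summarize which permissions the denials would need (diagnostic only)
    audit2allow -a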

Comment 2 Vasu Kulkarni 2016-05-18 20:33:33 UTC
This is actually on the latest build: ceph version 10.2.1-3.el7cp (f6e1bde2840e1da621601bad87e15fd3f654c01e)

Comment 3 Boris Ranto 2016-05-19 07:52:44 UTC
Are you 100% sure this was the latest (-3) build? Can you hit it on upstream master as well? These should be fixed in the latest rhceph 2, latest jewel and latest master.

Anyway, this might give me some basic info about the machine in this state:

* rpm -q ceph-selinux
* ls -lZ /var/run/ /var/run/ceph

Does it help if you reinstall the ceph-selinux package? Can you post the output of the previous commands after the reinstall?
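(For reference, a minimal sketch of what to check along with the commands above; it assumes the policy is supposed to label the socket directory ceph_var_run_t, whereas the denials above show the sockets ending up as plain var_run_t:)

    # what the loaded policy says the path should be labelled as
    matchpathcon /var/run/ceph
    # current labels on the admin sockets
    ls -lZ /var/run/ceph
    # relabel the directory according to the loaded policy
    restorecon -R -v /var/run/ceph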

Comment 4 Vasu Kulkarni 2016-05-19 19:58:58 UTC
Boris,

This is on the latest -3 build; you can check that in the logs as well and grep for the version. Anyway, I will get both pieces of info you requested in the next run.

Comment 6 Ken Dreyer (Red Hat) 2016-05-19 20:07:50 UTC
At this point we need more information: what exactly is Teuthology doing with the admin socket that triggers SELinux denials?

Comment 7 Vasu Kulkarni 2016-05-19 20:46:34 UTC
Just to explain how the SELinux denial checking works: it does not depend on any particular test in ceph-qa-suite.

1) teuthology runs a test from ceph-qa-suite; the test can set up the cluster using the ceph task, ceph-deploy or ceph-ansible (ceph-deploy and ceph-ansible set up the right context, which is very important for uncovering SELinux-related issues)

2) it then runs a ceph test, which could be anything depending on the suite; in this case the tests are rbd, fio and rados tests. The asok denials here were probably reported just after ceph-ansible set up the cluster

3) at the end of the run, the audit logs are scanned for unknown denials. A few known denials (not related to ceph) are masked and never reported; if I see anything related to ceph, ceph-mon, ceph-osd or ceph-mds, that usually means an issue with the SELinux policy. Any unknown denial eventually fails the job, even if the upper test (rbd/rados/cephfs etc.) passed
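(A rough sketch of what that scan amounts to; this is an illustration, not the actual teuthology code, and the known-denial pattern below is a placeholder:)

    # collect AVC denials, drop the known/masked ones, fail on anything left
    KNOWN_RE='comm="(rhsmcertd-worke|setroubleshootd)"'   # placeholder whitelist
    ausearch -m avc -ts today 2>/dev/null | grep 'type=AVC' | grep -Ev "$KNOWN_RE" > unknown_denials.txt
    if [ -s unknown_denials.txt ]; then
        echo "unknown AVC denials found:"
        cat unknown_denials.txt
        exit 1
    fi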

Comment 8 Boris Ranto 2016-05-19 20:59:29 UTC
Never mind, I managed to reproduce it; I'll try to come up with a fix tomorrow.

Comment 9 Boris Ranto 2016-05-23 15:19:12 UTC
FWIW: This should fix itself after the first reboot, and it seems to be related to some exclude auto-magic when running fixfiles. I created an upstream PR that tries to fix it, along with a few more warnings that are seen on package uninstall:

https://github.com/ceph/ceph/pull/9218
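(For context, the relabeling involved is roughly the following; this is a sketch of standard fixfiles/restorecon usage with the ceph-selinux package, not the contents of the PR itself:)

    # relabel all paths owned by the package according to the loaded policy
    fixfiles -R ceph-selinux restore
    # or just the runtime socket directory
    restorecon -R -v /var/run/ceph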

Comment 12 Vasu Kulkarni 2016-06-21 22:02:52 UTC
Verified in 10.2.2

Comment 14 errata-xmlrpc 2016-08-23 19:39:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1755.html

