Bug 1337275
| Summary: | [RH Ceph 2.0 / 10.2.1-3.el7cp] ceph asok denials | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Vasu Kulkarni <vakulkar> |
| Component: | Build | Assignee: | Boris Ranto <branto> |
| Status: | CLOSED ERRATA | QA Contact: | Vasu Kulkarni <vakulkar> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 2.0 | CC: | hnallurv, kdreyer, vakulkar |
| Target Milestone: | rc | | |
| Target Release: | 2.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-10.2.1-7.el7cp | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-08-23 19:39:04 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Vasu Kulkarni
2016-05-18 16:53:07 UTC
This is actually on the latest build: ceph version 10.2.1-3.el7cp (f6e1bde2840e1da621601bad87e15fd3f654c01e)

Are you 100% sure this was the latest (-3) build? Can you hit it on upstream master as well? These should be fixed in the latest rhceph 2, latest jewel, and latest master. Anyway, this might give me some basic info about the machine in this state:

* rpm -q ceph-selinux
* ls -lZ /var/run/ /var/run/ceph

Does it help if you reinstall the ceph-selinux package? Can you post the output of the previous commands after the reinstall?

Boris, this is on the latest -3 build; you can check that in the logs as well and grep for the version. Anyway, I will get both pieces of info you requested in the next run.

At this point we need more information: what exactly is Teuthology doing with the admin socket that triggers SELinux denials?

Just to explain how the SELinux denials process works: it is not dependent on any particular test in ceph-qa-suite.

1) Teuthology runs a test in ceph-qa-suite. The test can set up the cluster using the ceph task, ceph-deploy, or ceph-ansible (ceph-deploy and ceph-ansible set up the right context, which is very important for uncovering issues related to SELinux).

2) It then runs a Ceph test, which could be anything depending on the suite; in this case the tests are rbd, fio, and rados tests. The asok denial was probably reported just after ceph-ansible set up the cluster.

3) At the end of the test, the audit logs are scanned for unknown denials. A few known denials (not related to Ceph) are masked and never reported. Anything related to ceph, ceph-mon, ceph-osd, or ceph-mds usually means an issue with the SELinux policy. For any unknown denial seen, the test eventually fails even if the upper test (rbd/rados/cephfs, etc.) passed.

Nevermind, I managed to reproduce it; I'll try to come up with a fix tomorrow. FWIW: this should fix itself after the first reboot, and it seems to be related to some exclude auto-magic when running fixfiles.
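The audit-log scan in step 3 can be sketched roughly as below. This is a minimal illustration, not teuthology's actual scanner; the sample audit records and the masking rule are made up for the example, and the real suite maintains its own list of known, unrelated denials.

```shell
#!/bin/sh
# Sketch of step 3: scan audit records for AVC denials and flag any
# that mention a ceph domain (ceph, ceph-mon, ceph-osd, ceph-mds),
# which would indicate an SELinux policy issue.

audit_log=$(mktemp)

# Two hypothetical audit records: one ceph-related denial (a policy
# bug by the rule above) and one unrelated, maskable denial.
cat > "$audit_log" <<'EOF'
type=AVC msg=audit(1463590387.100:201): avc:  denied  { create } for pid=2301 comm="ceph-mon" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_run_t:s0 tclass=sock_file
type=AVC msg=audit(1463590388.101:202): avc:  denied  { read } for pid=912 comm="otherd" scontext=system_u:system_r:init_t:s0 tcontext=system_u:object_r:etc_t:s0 tclass=file
EOF

# Keep only denial records, then look for a ceph domain; any hit
# fails the run even if the upper-level test itself passed.
if grep 'avc:' "$audit_log" | grep 'denied' | grep -q 'ceph'; then
    echo "ceph-related SELinux denials found"
else
    echo "no ceph-related denials"
fi
```

On a real node the input would be /var/log/audit/audit.log (or ausearch output) rather than a temp file, and the mask list would filter out the known benign denials before the ceph check.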
I created an upstream PR that tries to fix it, along with a few more warnings that are seen on package uninstall: https://github.com/ceph/ceph/pull/9218

Verified in 10.2.2.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1755.html