Bug 1354488 - Multiple SElinux alerts
Summary: Multiple SElinux alerts
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Build
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 2.0
Assignee: Boris Ranto
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-07-11 12:22 UTC by Emilien Macchi
Modified: 2022-02-21 18:03 UTC (History)
1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-07-12 16:04:01 UTC
Embargoed:



Description Emilien Macchi 2016-07-11 12:22:30 UTC
Description of problem:
Found 26 alerts in /var/log/audit/audit.log when deploying Ceph and OpenStack.

Version-Release number of selected component (if applicable):
ceph-selinux-10.2.2-0.el7.x86_64

Logs are available here:
http://logs.openstack.org/69/340069/1/check/gate-puppet-openstack-integration-3-scenario001-tempest-centos-7/039ef95/console.html#_2016-07-09_21_22_16_348053

The complete list of AVCs:
http://paste.openstack.org/show/8cj29aJdevufwouLzqop/
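For reference, the kind of per-context summary that points at the mislabelling can be produced from the audit log with standard tools. A minimal sketch (the sample AVC line below is illustrative, not taken from the linked logs):

```shell
# Reconstruct one AVC denial of the kind reported here (illustrative
# sample): ceph-osd writing to a path that still carries the generic
# var_t label instead of a ceph-specific context.
cat > /tmp/audit-sample.log <<'EOF'
type=AVC msg=audit(1468066936.348:123): avc:  denied  { write } for  pid=1234 comm="ceph-osd" path="/srv/data/osd.0" scontext=system_u:system_r:ceph_t:s0 tcontext=system_u:object_r:var_t:s0 tclass=file
EOF

# Summarize the denials as "<comm> <target context>" pairs with counts.
# On a live system, point this at /var/log/audit/audit.log instead
# (or use `ausearch -m avc`).
grep 'avc:.*denied' /tmp/audit-sample.log \
  | sed -n 's/.*comm="\([^"]*\)".*tcontext=\([^ ]*\).*/\1 \2/p' \
  | sort | uniq -c
```

Denials whose target context comes out as var_t (rather than ceph_*_t) are the ones discussed below.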

Comment 2 Ken Dreyer (Red Hat) 2016-07-11 14:57:34 UTC
"ceph-selinux-10.2.2-0.el7" looks like an upstream version number, not a Red Hat Ceph Storage version number...

Comment 3 Emilien Macchi 2016-07-11 15:08:51 UTC
Right, I deployed Jewel, provided in CentOS Storage SIG repository.

Comment 4 Boris Ranto 2016-07-12 11:53:32 UTC
These are all var_t target contexts; they should probably be labelled with some ceph_<something>_t label. Can you paste the ceph.conf?

Also, I can see some /srv/data/... paths in the logs. These do not look like the default ceph locations. What are these files?

Comment 5 Emilien Macchi 2016-07-12 11:55:35 UTC
We're using puppet-ceph to deploy Ceph.
The manifest is here, in this CI tools repository:
https://github.com/openstack/puppet-openstack-integration/blob/master/manifests/ceph.pp#L11-L44

The manifest is something we can easily change; it's only used in CI.
The actual module is here: https://github.com/openstack/puppet-ceph

Feel free to give any feedback on the way we deploy, or submit a patch to our CI if needed.

Comment 6 Ken Dreyer (Red Hat) 2016-07-12 16:04:01 UTC
Since this test is not using the ceph RPMs from the Red Hat Ceph Storage product, I'm going to close this BZ and request that you please file tickets with Ceph upstream for now: http://tracker.ceph.com/projects/devops/issues/new

In the Redmine ticket, it would be good to mention exactly where you got the ceph-10.2.2-0 RPMs (centos.org, not ceph.com).

Comment 7 Emilien Macchi 2016-07-12 16:22:09 UTC
I created an account on http://tracker.ceph.com/projects/ceph but I can't create any ticket. My account ID is "emacchi". I would be grateful if you could help me to solve this.

Thanks

Comment 8 Boris Ranto 2016-07-13 09:49:38 UTC
@Emilien: Hmm, line #42: '/srv/data' => {} seems quite suspicious. Any idea what it defines? Anyway, it would probably help if you stored the files elsewhere. Depending on the type of files it covers, this could be somewhere under /var/lib/ceph, /var/log/ceph, or even /var/run/ceph (or maybe even somewhere under /tmp?).

Comment 9 Emilien Macchi 2016-07-13 12:12:56 UTC
OK, I pushed a patch to change the dir to /var/lib/ceph/data. Let's see how it works now.
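The change amounts to moving the OSD data path out of /srv in the CI manifest. Roughly, as a sketch (the `ceph::osds` class and `args` parameter names are assumptions about puppet-ceph; the `'/srv/data' => {}` entry is the one quoted in comment 8; this is not the actual patch):

```puppet
# Before (the entry quoted in comment 8): OSD data under /srv,
# which the ceph SELinux policy leaves labelled var_t.
#   '/srv/data' => {},

# After: keep OSD data under /var/lib/ceph, which the policy
# labels with ceph-specific contexts.
class { 'ceph::osds':
  args => {
    '/var/lib/ceph/data' => {},
  },
}
```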

Comment 10 Ken Dreyer (Red Hat) 2016-07-13 13:30:11 UTC
Your account should be active in Redmine now, Emilien. If you have questions, please ask zackc on IRC (#sepia channel on OFTC).

Comment 11 Emilien Macchi 2016-07-13 13:44:24 UTC
Indeed, using /var/lib/ceph reduced the SElinux alerts to 1. I'll file a bug in the Ceph tracker.

Comment 12 Emilien Macchi 2016-07-13 13:50:07 UTC
And here's the upstream bug: http://tracker.ceph.com/issues/16674

