Bug 1315332
Summary: | Sanlock fails to acquire lock for ceph device due to SELinux denials
---|---
Product: | Red Hat Enterprise Linux 7
Reporter: | Elad <ebenahar>
Component: | selinux-policy
Assignee: | Lukas Vrabec <lvrabec>
Status: | CLOSED ERRATA
QA Contact: | Milos Malik <mmalik>
Severity: | high
Docs Contact: |
Priority: | high
Version: | 7.2
CC: | acanan, amureini, bmcclain, derez, ebenahar, gklein, lmiksik, lvrabec, mgrepl, mjahoda, mmalik, nsoffer, plautrba, pmoore, pvrabec, snagar, ssekidde, tnisan, ylavi
Target Milestone: | rc
Keywords: | Reopened, ZStream
Target Release: | ---
Flags: | amureini: needinfo+
Hardware: | x86_64
OS: | Linux
Whiteboard: |
Fixed In Version: | selinux-policy-3.13.1-77.el7
Doc Type: | Bug Fix
Doc Text: | Due to insufficient SELinux policy rules, sanlock domain was previously not able to access a CEPH file system. As a consequence, sanlock failed to acquire a lock for a CEPH device. The SELinux policy rules have been updated. As a result, the CEPH file system is now correctly labeled as cephfs_t and accessible by the sanlock domain.
Story Points: | ---
Clone Of: |
: | 1365640 (view as bug list)
Environment: |
Last Closed: | 2016-11-04 02:44:03 UTC
Type: | Bug
Regression: | ---
Mount Type: | ---
Documentation: | ---
CRM: |
Verified Versions: |
Category: | ---
oVirt Team: | ---
RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | ---
Target Upstream Version: |
Embargoed: |
Bug Depends On: |
Bug Blocks: | 1365640
Attachments: |
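The Fixed In Version and Doc Text fields above summarize the fix: the updated policy defines a cephfs_t file system type and lets the sanlock domain use it. A minimal sketch of how one might confirm this against an installed policy, assuming the setools utilities (seinfo, sesearch) are available; the exact permissions granted to sanlock_t are not listed in this bug, so no expected output is shown.

```shell
# Sketch only: confirm the installed policy matches the Doc Text above.
rpm -q selinux-policy                       # expect selinux-policy-3.13.1-77.el7 or later
seinfo --type=cephfs_t                      # the cephfs_t type should be defined
sesearch --allow -s sanlock_t -t cephfs_t   # allow rules giving sanlock access to CephFS
```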
Description  Elad  2016-03-07 13:43:03 UTC
Bronce, please add this to 4.0 tracking. We would want it fixed as soon as possible; if this happens early enough, we can add it to 3.6.z.

How/where is name="ids" created?

(In reply to Miroslav Grepl from comment #4)
> How/where is name="ids" created?

VDSM creates it when initializing the domain. Miroslav, can you elaborate on what details exactly you need here?

OK, I overlooked it.

/rhev/data-center/mnt/10.35.65.18:_222/f15bae05-29e6-4990-9404-4931184dcf3b/dom_md/ids

We still have the same issue here with the mislabeling of /rhev/data-center. Did you have SELinux disabled?

SELinux was enforcing.

How did you mount /rhev/data-center?

/rhev/data-center/mnt/10.35.65.18:_222 was mounted as follows:

jsonrpc.Executor/1::DEBUG::2016-03-07 15:22:15,753::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset --cpu-list 0-7 /usr/bin/sudo -n /usr/bin/mount -t ceph -o name=admin,secret=AQC3W1dWhplVLBAARW/zKtQzjafZDKAGfVpWbQ== 10.35.65.18:/222 /rhev/data-center/mnt/10.35.65.18:_222 (cwd None)

Any updates on this? Can we get acks for this for 7.3? And mark this for 7.2.z?

(In reply to Elad from comment #10)
> /rhev/data-center/mnt/10.35.65.18:_222 was mounted as follows:
>
> jsonrpc.Executor/1::DEBUG::2016-03-07
> 15:22:15,753::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset
> --cpu-list 0-7 /usr/bin/sudo -n /usr/bin/mount -t ceph -o
> name=admin,secret=AQC3W1dWhplVLBAARW/zKtQzjafZDKAGfVpWbQ== 10.35.65.18:/222
> /rhev/data-center/mnt/10.35.65.18:_222 (cwd None)

OK, you will need to specify a mount SELinux option to label it with "system_u:object_r:mnt_t:s0", or fix the labeling after the mount. unlabeled_t indicates something has gone very wrong and needs to be dealt with by a real person. In this case, you mount a ceph filesystem.

How is an action like mounting a ceph filesystem considered a very wrong flow that needs manual steps or non-standard mounting? This is not a one-off; it recreates consistently, and we need this to work to certify CephFS.

I would say that we need to create a new fstype for the ceph filesystem, like cephfs_t, and then add allow rules to the sanlock policy. For example:

  type cephfs_t;
  fs_type(cephfs_t)
  genfscon ceph / gen_context(system_u:object_r:cephfs_t,s0)

This solution will increase security from the SELinux point of view, and we don't need to allow the sanlock_t domain to read/write mnt_t files.

(In reply to Yaniv Dary from comment #15)
> How is an action like mounting a ceph filesystem considered a very wrong
> flow that needs manual steps or non-standard mounting? This is not a
> one-off; it recreates consistently, and we need this to work to certify
> CephFS.

Do you have a test machine where we could play around? Thank you.

Restoring needinfo by Yaniv on Bronce that was lost by one of the comments. Bronce, can we mark this for 7.2.z? Can we get a devel ack on this?

Bronce, can you help with getting a QE ack here? This is blocking a feature.

Any updates?

Fix added to Fedora Rawhide and Fedora 24. After some testing it will be backported to rhel-7.3.

We're working on deployment of a new cephfs setup; once done, I'll check the fix.

(In reply to Elad from comment #37)
> We're working on deployment of a new cephfs setup; once done, I'll check
> the fix.

Elad, any updates?

(In reply to Yaniv Kaul from comment #38)
> (In reply to Elad from comment #37)
> > We're working on deployment of a new cephfs setup; once done, I'll check
> > the fix.
>
> Elad, any updates?

Not yet, we are waiting for builds from the Ceph team. We will verify as soon as we get them.
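Before the policy fix, the suggestion in the thread is to either pass an SELinux context at mount time or fix the labeling after the mount. A minimal diagnostic and workaround sketch, assuming the mount point quoted above; the secret is a placeholder, the sanlock comm filter is an assumption, and whether the RHEL 7 CephFS client accepts the context= mount option is not confirmed in this bug.

```shell
# Sketch only: check the current label and look for the sanlock denials.
ls -dZ /rhev/data-center/mnt/10.35.65.18:_222
ausearch -m avc -ts recent -c sanlock        # AVC denials from the sanlock process

# Interim workaround suggested above: mount with an explicit context so the
# tree is not left as unlabeled_t. Placeholder secret; depends on the CephFS
# client accepting SELinux mount options.
umount /rhev/data-center/mnt/10.35.65.18:_222
mount -t ceph -o name=admin,secret=<admin-key>,context="system_u:object_r:mnt_t:s0" \
      10.35.65.18:/222 /rhev/data-center/mnt/10.35.65.18:_222
```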
We have an operational ceph setup with cephfs deployed on both the ceph servers and the client (a RHEL hypervisor). I'm trying to install the SELinux rpms provided in comment #36, but it fails with many dependency issues across many packages. Will you be able to provide us with a host that has these SELinux packages installed?

Why has this not been cloned? When will this be released to RHEL 7.2?

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2283.html
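With the bug closed via the erratum, verification on an affected hypervisor amounts to updating the policy package and remounting. A minimal sketch, assuming the mount point used earlier in the thread and that the storage domain is re-activated afterwards.

```shell
# Sketch only: verify the erratum fix on a RHEL 7 hypervisor.
yum update -y selinux-policy selinux-policy-targeted
rpm -q selinux-policy                            # expect 3.13.1-77.el7 or later
getenforce                                       # Enforcing

# After unmounting and mounting the CephFS path again (no context= option needed),
# the mount should carry the cephfs_t label and sanlock should acquire its lock.
ls -dZ /rhev/data-center/mnt/10.35.65.18:_222    # expect system_u:object_r:cephfs_t:s0
ausearch -m avc -ts today -c sanlock             # expect no new denials
```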