Bug 1315332 - Sanlock fails to acquire lock for ceph device due to SELinux denials
Summary: Sanlock fails to acquire lock for ceph device due to SELinux denials
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: selinux-policy
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Lukas Vrabec
QA Contact: Milos Malik
URL:
Whiteboard:
Depends On:
Blocks: 1365640
 
Reported: 2016-03-07 13:43 UTC by Elad
Modified: 2016-11-04 02:44 UTC
CC List: 19 users

Fixed In Version: selinux-policy-3.13.1-77.el7
Doc Type: Bug Fix
Doc Text:
Due to insufficient SELinux policy rules, the sanlock domain was previously not able to access a CEPH file system. As a consequence, sanlock failed to acquire a lock for a CEPH device. The SELinux policy rules have been updated. As a result, the CEPH file system is now correctly labeled as cephfs_t and accessible by the sanlock domain.
Clone Of:
Clones: 1365640
Environment:
Last Closed: 2016-11-04 02:44:03 UTC
Target Upstream Version:
Embargoed:
amureini: needinfo+


Attachments
/var/log/ (4.21 MB, application/x-gzip), 2016-03-07 13:43 UTC, Elad


Links
Red Hat Product Errata RHBA-2016:2283 (SHIPPED_LIVE): selinux-policy bug fix and enhancement update, last updated 2016-11-03 13:36:25 UTC

Description Elad 2016-03-07 13:43:03 UTC
Created attachment 1133786 [details]
/var/log/

Description of problem:
While working on Bug 1095615 (Allow the use of CephFS as a storage domain within RHEV), we discovered that Sanlock tries to access the 'ids' file on dev="ceph" and is denied read and write by SELinux:

type=AVC msg=audit(1457356943.727:329): avc:  denied  { read write } for  pid=30337 comm="sanlock" name="ids" dev="ceph" ino=1099511627891 scontext=system_u:system_r:sanlock_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file

This blocks ceph fs integration with RHEV while SELinux is Enforcing.
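To confirm the denial on a host that reproduces this, a minimal check looks roughly as follows (a sketch only; it assumes auditd is running and uses the mount path from the logs attached to this report):

# Show recent AVC denials generated by the sanlock process
ausearch -m avc -ts recent -c sanlock

# Inspect the SELinux label on the 'ids' file; here it comes back as
# unlabeled_t, which sanlock_t is not allowed to read or write
ls -Z /rhev/data-center/mnt/10.35.65.18:_222/f15bae05-29e6-4990-9404-4931184dcf3b/dom_md/ids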

Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux Server release 7.2 Beta (Maipo)
Kernel - 3.10.0-327.13.1.el7.x86_64 #1 SMP Mon Feb 29 13:22:02 EST 2016 x86_64 x86_64 x86_64 GNU/Linux
libselinux-python-2.2.2-6.el7.x86_64
selinux-policy-targeted-3.13.1-60.el7_2.3.noarch
libselinux-ruby-2.2.2-6.el7.x86_64
libselinux-2.2.2-6.el7.x86_64
libselinux-utils-2.2.2-6.el7.x86_64
selinux-policy-3.13.1-60.el7_2.3.noarch
sanlock-3.2.4-2.el7_2.x86_64
vdsm-4.17.23-0.el7ev.noarch
libvirt-daemon-1.2.17-13.el7_2.4.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.8.x86_64

How reproducible:
Always

Steps to Reproduce:
1. RHEV: Create a storage domain of POSIXFS compliant with ceph VFS type


Actual results:
Storage domain cannot be attached to the DC because Sanlock fails to acquire lock:

sanlock.log:

2016-03-07 15:20:10+0200 363554 [706]: s1:r3 resource a2cd2f8a-26b7-4abd-9572-48106ca7a0b7:SDM:/rhev/data-center/mnt/10.35.64.11:_vol_RHEV_Storage_elad_2/a2cd2f8a-26b7-4abd-9572-48106ca7a0b7/dom_md/leases:1048576 for 3,13,29690
2016-03-07 15:22:23+0200 363687 [11255]: s8 lockspace f15bae05-29e6-4990-9404-4931184dcf3b:3:/rhev/data-center/mnt/10.35.65.18:_222/f15bae05-29e6-4990-9404-4931184dcf3b/dom_md/ids:0
2016-03-07 15:22:23+0200 363687 [30337]: open error -13 /rhev/data-center/mnt/10.35.65.18:_222/f15bae05-29e6-4990-9404-4931184dcf3b/dom_md/ids
2016-03-07 15:22:23+0200 363687 [30337]: s8 open_disk /rhev/data-center/mnt/10.35.65.18:_222/f15bae05-29e6-4990-9404-4931184dcf3b/dom_md/ids error -13
2016-03-07 15:22:24+0200 363688 [11255]: s8 add_lockspace fail result -19


vdsm.log:

jsonrpc.Executor/1::ERROR::2016-03-07 15:22:24,729::task::866::Storage.TaskManager.Task::(_setError) Task=`33641892-3c2c-4b70-b3b3-1cffdbfb3921`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 1210, in attachStorageDomain
    pool.attachSD(sdUUID)
  File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 940, in attachSD
    dom.acquireHostId(self.id)
  File "/usr/share/vdsm/storage/sd.py", line 533, in acquireHostId
    self._clusterLock.acquireHostId(hostId, async)
  File "/usr/share/vdsm/storage/clusterlock.py", line 234, in acquireHostId
    raise se.AcquireHostIdFailure(self._sdUUID, e)
AcquireHostIdFailure: Cannot acquire host id: (u'f15bae05-29e6-4990-9404-4931184dcf3b', SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))




Additional info:
/var/log/

Comment 3 Yaniv Lavi 2016-03-10 12:54:47 UTC
Bronce, please add this to 4.0 tracking. We want it fixed as soon as possible; if the fix lands early enough we can also add it to 3.6.z.

Comment 4 Miroslav Grepl 2016-03-14 08:02:08 UTC
How/where is name="ids" created?

Comment 5 Allon Mureinik 2016-03-14 10:06:20 UTC
(In reply to Miroslav Grepl from comment #4)
> How/where is name="ids" created?

VDSM creates it when initializing the domain. Miroslav - can you elaborate what details exactly you need here?

Comment 6 Miroslav Grepl 2016-03-14 11:46:19 UTC
OK, I overlooked that.

/rhev/data-center/mnt/10.35.65.18:_222/f15bae05-29e6-4990-9404-4931184dcf3b/dom_md/ids 

We still have the same mislabeling issue here with /rhev/data-center. Did you have SELinux disabled?

Comment 7 Elad 2016-03-14 12:07:20 UTC
SELinux was enforcing

Comment 8 Miroslav Grepl 2016-03-15 14:18:15 UTC
How did you mount /rhev/data-center?

Comment 10 Elad 2016-03-17 07:50:17 UTC
/rhev/data-center/mnt/10.35.65.18:_222 was mounted as follows:

jsonrpc.Executor/1::DEBUG::2016-03-07 15:22:15,753::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset --cpu-list 0-7 /usr/bin/sudo -n /usr/bin/mount -t ceph -o name=admin,secret=AQC3W1dWhplVLBAARW/zKtQzjafZDKAGfVpWbQ== 10.35.65.18:/222 /rhev/data-center/mnt/10.35.65.18:_222 (cwd None)

Comment 11 Yaniv Lavi 2016-03-21 14:31:35 UTC
Any updates on this?

Comment 12 Yaniv Lavi 2016-03-27 09:48:27 UTC
Can we get acks for this for 7.3?

Comment 13 Yaniv Lavi 2016-03-27 09:49:15 UTC
And mark this for 7.2.z?

Comment 14 Miroslav Grepl 2016-03-29 14:21:42 UTC
(In reply to Elad from comment #10)
> /rhev/data-center/mnt/10.35.65.18:_222 was mounted as follows:
> 
> jsonrpc.Executor/1::DEBUG::2016-03-07
> 15:22:15,753::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset
> --cpu-list 0-7 /usr/bin/sudo -n /usr/bin/mount -t ceph -o
> name=admin,secret=AQC3W1dWhplVLBAARW/zKtQzjafZDKAGfVpWbQ== 10.35.65.18:/222
> /rhev/data-center/mnt/10.35.65.18:_222 (cwd None)

OK, you will need to specify an SELinux mount option to label it with "system_u:object_r:mnt_t:s0", or fix the labeling after the mount.

unlabeled_t indicates something has gone very wrong and needs to be dealt with by a real person. In this case, it comes from mounting the ceph filesystem.
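For reference, the mount-time labeling Miroslav mentions would look roughly like this (a sketch only; the address, export path, and mount point are taken from comment #10, the secret is elided, and whether the in-kernel ceph client honours the generic SELinux context= option on this kernel is not verified here):

# Label the entire ceph mount as mnt_t at mount time instead of leaving it unlabeled_t
mount -t ceph -o name=admin,secret=...,context="system_u:object_r:mnt_t:s0" \
    10.35.65.18:/222 /rhev/data-center/mnt/10.35.65.18:_222

The alternative, fixing labels after the mount (e.g. with chcon), only helps if the filesystem supports SELinux security xattrs.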

Comment 15 Yaniv Lavi 2016-03-29 14:28:21 UTC
How is an action like mounting a ceph filesystem considered a very wrong flow that needs manual steps or non-standard mounting?
This is not a one-off, it reproduces consistently, and we need this to work to certify CephFS.

Comment 16 Lukas Vrabec 2016-03-29 15:30:40 UTC
I would say that we need to create a new fstype for the ceph filesystem, such as cephfs_t, and then add allow rules to the sanlock policy.
For example: 
# Declare the new type and mark it as a filesystem type
type cephfs_t;
fs_type(cephfs_t)
# Label everything on ceph-backed mounts as cephfs_t by default
genfscon ceph / gen_context(system_u:object_r:cephfs_t,s0)

This solution will increase security from the SELinux point of view, and we won't need to allow the sanlock_t domain to read/write mnt_t files.
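Once the fix shipped (selinux-policy-3.13.1-77.el7 per the Fixed In Version field), a rough way to check that the new type and rules are present would be something like the following (assuming the setools-console package is installed; the mount point is the one from this report):

# The new filesystem type should exist in the loaded policy
seinfo -tcephfs_t

# sanlock_t should have file-class allow rules against cephfs_t
sesearch --allow -s sanlock_t -t cephfs_t -c file

# The ceph mount should now be labeled cephfs_t instead of unlabeled_t
ls -Zd /rhev/data-center/mnt/10.35.65.18:_222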

Comment 17 Miroslav Grepl 2016-03-31 08:25:27 UTC
(In reply to Yaniv Dary from comment #15)
> How is an action like mounting a ceph filesystem considered a very wrong flow
> that needs manual steps or non-standard mounting?
> This is not a one-off, it reproduces consistently, and we need this to work
> to certify CephFS.

Do you have a test machine where we could play around?

Thank you.

Comment 20 Tal Nisan 2016-04-14 08:47:43 UTC
Restoring the needinfo by Yaniv on Bronce that was lost in one of the comments.
Bronce, can we mark this for 7.2.z?

Comment 22 Yaniv Lavi 2016-04-26 09:46:24 UTC
Can we get a devel ack on this?

Comment 23 Yaniv Lavi 2016-05-02 11:16:19 UTC
Bronce, can you help with getting a QE ack here? This is blocking a feature.

Comment 25 Yaniv Lavi 2016-05-09 12:41:29 UTC
Any updates?

Comment 26 Lukas Vrabec 2016-05-25 11:10:15 UTC
Fix added to Fedora Rawhide and Fedora 24. After some testing it will be backported to rhel-7.3.

Comment 37 Elad 2016-07-13 14:08:47 UTC
We're working on deployment of a new cephfs setup, once done I'll check the fix.

Comment 38 Yaniv Kaul 2016-07-21 14:11:55 UTC
(In reply to Elad from comment #37)
> We're working on deployment of a new cephfs setup, once done I'll check the
> fix.

Elad - any updates?

Comment 39 Aharon Canan 2016-07-21 14:18:58 UTC
(In reply to Yaniv Kaul from comment #38)
> (In reply to Elad from comment #37)
> > We're working on deployment of a new cephfs setup, once done I'll check the
> > fix.
> 
> Elad - any updates?

Not yet, we are waiting for builds from the Ceph team.
We will verify as soon as we get them.

Comment 40 Elad 2016-07-26 10:40:42 UTC
We have an operational ceph setup with cephfs deployed on both the ceph servers and on the client (RHEL hypervisor).

I'm trying to install the SELinux rpms provided in comment #36, but the installation fails with many dependency issues across many packages.

Will you be able to provide us with a host that has these SELinux packages installed?

Comment 43 Yaniv Lavi 2016-08-01 11:20:20 UTC
Why has this not been cloned, and when will this be released to RHEL 7.2?

Comment 48 errata-xmlrpc 2016-11-04 02:44:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2283.html

