Bug 1315332 - Sanlock fails to acquire lock for ceph device due to SELinux denials
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: selinux-policy
Version: 7.2
Hardware: x86_64 Linux
Priority: high   Severity: high
Target Milestone: rc
Target Release: ---
Assigned To: Lukas Vrabec
QA Contact: Milos Malik
Keywords: Reopened, ZStream
Depends On:
Blocks: 1365640
Reported: 2016-03-07 08:43 EST by Elad
Modified: 2016-11-03 22:44 EDT
CC: 19 users

See Also:
Fixed In Version: selinux-policy-3.13.1-77.el7
Doc Type: Bug Fix
Doc Text:
Due to insufficient SELinux policy rules, the sanlock domain was previously unable to access a Ceph file system. As a consequence, sanlock failed to acquire a lock for a Ceph device. The SELinux policy rules have been updated. As a result, the Ceph file system is now correctly labeled as cephfs_t and is accessible by the sanlock domain.
Story Points: ---
Clone Of:
Clones: 1365640
Environment:
Last Closed: 2016-11-03 22:44:03 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
amureini: needinfo+


Attachments
/var/log/ (4.21 MB, application/x-gzip)
2016-03-07 08:43 EST, Elad

Description Elad 2016-03-07 08:43:03 EST
Created attachment 1133786 [details]
/var/log/

Description of problem:
While working on Bug 1095615 - Allow the use of CephFS as a storage domain within RHEV, we discovered that Sanlock tries to access the 'ids' file on dev="ceph" and is denied read and write access by SELinux:

type=AVC msg=audit(1457356943.727:329): avc:  denied  { read write } for  pid=30337 comm="sanlock" name="ids" dev="ceph" ino=1099511627891 scontext=system_u:system_r:sanlock_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file

This blocks CephFS integration with RHEV while SELinux is in Enforcing mode.
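
(A minimal sketch for capturing these denials on the hypervisor, assuming auditd is running and audit2why from policycoreutils-python is installed:)

# show recent AVC denials for the sanlock process and explain the cause
ausearch -m avc -c sanlock -ts recent | audit2why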

Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux Server release 7.2 Beta (Maipo)
Kernel - 3.10.0-327.13.1.el7.x86_64 #1 SMP Mon Feb 29 13:22:02 EST 2016 x86_64 x86_64 x86_64 GNU/Linux
libselinux-python-2.2.2-6.el7.x86_64
selinux-policy-targeted-3.13.1-60.el7_2.3.noarch
libselinux-ruby-2.2.2-6.el7.x86_64
libselinux-2.2.2-6.el7.x86_64
libselinux-utils-2.2.2-6.el7.x86_64
selinux-policy-3.13.1-60.el7_2.3.noarch
sanlock-3.2.4-2.el7_2.x86_64
vdsm-4.17.23-0.el7ev.noarch
libvirt-daemon-1.2.17-13.el7_2.4.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.8.x86_64

How reproducible:
Always

Steps to Reproduce:
1. In RHEV, create a POSIX compliant FS storage domain with the 'ceph' VFS type


Actual results:
The storage domain cannot be attached to the data center because Sanlock fails to acquire the lock:

sanlock.log:

2016-03-07 15:20:10+0200 363554 [706]: s1:r3 resource a2cd2f8a-26b7-4abd-9572-48106ca7a0b7:SDM:/rhev/data-center/mnt/10.35.64.11:_vol_RHEV_Storage_elad_2/a2cd2f8a-26b7-4abd-9572-48106ca7a0b7/dom_md/leases:1048576 for 3,13,29690
2016-03-07 15:22:23+0200 363687 [11255]: s8 lockspace f15bae05-29e6-4990-9404-4931184dcf3b:3:/rhev/data-center/mnt/10.35.65.18:_222/f15bae05-29e6-4990-9404-4931184dcf3b/dom_md/ids:0
2016-03-07 15:22:23+0200 363687 [30337]: open error -13 /rhev/data-center/mnt/10.35.65.18:_222/f15bae05-29e6-4990-9404-4931184dcf3b/dom_md/ids
2016-03-07 15:22:23+0200 363687 [30337]: s8 open_disk /rhev/data-center/mnt/10.35.65.18:_222/f15bae05-29e6-4990-9404-4931184dcf3b/dom_md/ids error -13
2016-03-07 15:22:24+0200 363688 [11255]: s8 add_lockspace fail result -19


vdsm.log:

jsonrpc.Executor/1::ERROR::2016-03-07 15:22:24,729::task::866::Storage.TaskManager.Task::(_setError) Task=`33641892-3c2c-4b70-b3b3-1cffdbfb3921`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 1210, in attachStorageDomain
    pool.attachSD(sdUUID)
  File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 940, in attachSD
    dom.acquireHostId(self.id)
  File "/usr/share/vdsm/storage/sd.py", line 533, in acquireHostId
    self._clusterLock.acquireHostId(hostId, async)
  File "/usr/share/vdsm/storage/clusterlock.py", line 234, in acquireHostId
    raise se.AcquireHostIdFailure(self._sdUUID, e)
AcquireHostIdFailure: Cannot acquire host id: (u'f15bae05-29e6-4990-9404-4931184dcf3b', SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))




Additional info:
/var/log/
Comment 3 Yaniv Lavi (Dary) 2016-03-10 07:54:47 EST
Bronce, please add this to 4.0 tracking. We want it fixed as soon as possible; if it lands early enough, we can also add it to 3.6.z.
Comment 4 Miroslav Grepl 2016-03-14 04:02:08 EDT
How/where is name="ids" created?
Comment 5 Allon Mureinik 2016-03-14 06:06:20 EDT
(In reply to Miroslav Grepl from comment #4)
> How/where is name="ids" created?

VDSM creates it when initializing the domain. Miroslav, can you elaborate on which details exactly you need here?
Comment 6 Miroslav Grepl 2016-03-14 07:46:19 EDT
OK, I overlooked that.

/rhev/data-center/mnt/10.35.65.18:_222/f15bae05-29e6-4990-9404-4931184dcf3b/dom_md/ids 

We still have the same issue here with the mislabeling of /rhev/data-center. Did you have SELinux disabled?
Comment 7 Elad 2016-03-14 08:07:20 EDT
SELinux was enforcing
Comment 8 Miroslav Grepl 2016-03-15 10:18:15 EDT
How did you mount

/rhev/data-center

?
Comment 10 Elad 2016-03-17 03:50:17 EDT
/rhev/data-center/mnt/10.35.65.18:_222 was mounted as follows:

jsonrpc.Executor/1::DEBUG::2016-03-07 15:22:15,753::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset --cpu-list 0-7 /usr/bin/sudo -n /usr/bin/mount -t ceph -o name=admin,secret=AQC3W1dWhplVLBAARW/zKtQzjafZDKAGfVpWbQ== 10.35.65.18:/222 /rhev/data-center/mnt/10.35.65.18:_222 (cwd None)
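
(For reference, the label that results from this mount can be checked with something like the following; given the tcontext in the AVC above, both are expected to come back as unlabeled_t:)

# check the SELinux context of the ceph mount point and the 'ids' file
ls -dZ /rhev/data-center/mnt/10.35.65.18:_222
ls -Z /rhev/data-center/mnt/10.35.65.18:_222/f15bae05-29e6-4990-9404-4931184dcf3b/dom_md/ids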
Comment 11 Yaniv Lavi (Dary) 2016-03-21 10:31:35 EDT
Any updates on this?
Comment 12 Yaniv Lavi (Dary) 2016-03-27 05:48:27 EDT
Can we get acks for this for 7.3?
Comment 13 Yaniv Lavi (Dary) 2016-03-27 05:49:15 EDT
And mark this for 7.2.z?
Comment 14 Miroslav Grepl 2016-03-29 10:21:42 EDT
(In reply to Elad from comment #10)
> /rhev/data-center/mnt/10.35.65.18:_222 was mounted as follows:
> 
> jsonrpc.Executor/1::DEBUG::2016-03-07
> 15:22:15,753::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset
> --cpu-list 0-7 /usr/bin/sudo -n /usr/bin/mount -t ceph -o
> name=admin,secret=AQC3W1dWhplVLBAARW/zKtQzjafZDKAGfVpWbQ== 10.35.65.18:/222
> /rhev/data-center/mnt/10.35.65.18:_222 (cwd None)

OK, you will need to specify an SELinux mount option to label it with "system_u:object_r:mnt_t:s0", or fix the labeling after the mount.

unlabeled_t indicates that something has gone very wrong and needs to be dealt with by a real person. In this case, you are mounting a ceph filesystem.
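
(A sketch of that workaround, assuming the kernel ceph client honours the generic SELinux context= mount option; <secret> stands for the cephx key used in the mount command above:)

# mount the ceph filesystem with an explicit SELinux context instead of leaving it unlabeled
mount -t ceph -o name=admin,secret=<secret>,context="system_u:object_r:mnt_t:s0" 10.35.65.18:/222 /rhev/data-center/mnt/10.35.65.18:_222

Alternatively, running restorecon -Rv on the mount point after mounting may work, but only if the filesystem supports security xattrs.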
Comment 15 Yaniv Lavi (Dary) 2016-03-29 10:28:21 EDT
How is an action like mounting a ceph filesystem considered a very wrong flow that needs manual steps or non-standard mounting?
This is not a one-off, it reproduces consistently, and we need this to work to certify CephFS.
Comment 16 Lukas Vrabec 2016-03-29 11:30:40 EDT
I would say that we need to create a new fstype for the ceph filesystem, e.g. cephfs_t, and then add allow rules to the sanlock policy.
For example: 
type cephfs_t;
fs_type(cephfs_t)
genfscon ceph / gen_context(system_u:object_r:cephfs_t,s0)

This solution will increase security from the SELinux point of view, and we won't need to allow the sanlock_t domain to read/write mnt_t files.
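
(For illustration only, the sanlock side of this could be expressed as a small local module along the following lines; this assumes cephfs_t is already defined by the base policy as sketched above, and the exact permission set is a guess:)

cat > sanlock_cephfs.te <<'EOF'
module sanlock_cephfs 1.0;

require {
    type sanlock_t;
    type cephfs_t;
    class dir { getattr open read search };
    class file { getattr open read write lock };
}

# let sanlock traverse cephfs directories and read/write lease files such as 'ids'
allow sanlock_t cephfs_t:dir { getattr open read search };
allow sanlock_t cephfs_t:file { getattr open read write lock };
EOF

checkmodule -M -m -o sanlock_cephfs.mod sanlock_cephfs.te
semodule_package -o sanlock_cephfs.pp -m sanlock_cephfs.mod
semodule -i sanlock_cephfs.pp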
Comment 17 Miroslav Grepl 2016-03-31 04:25:27 EDT
(In reply to Yaniv Dary from comment #15)
> How is a action like mounting ceph filesystem consider a very wrong flow
> that needs manual steps or unstandard mounting? 
> This is not a one off, it recreates consistently and we need to this to work
> to certify CephFS.

Do you have a test machine where we could play around?

Thank you.
Comment 20 Tal Nisan 2016-04-14 04:47:43 EDT
Restoring the needinfo by Yaniv on Bronce that was lost by one of the comments.
Bronce, can we mark this for 7.2.z?
Comment 22 Yaniv Lavi (Dary) 2016-04-26 05:46:24 EDT
Can we get a devel ack on this?
Comment 23 Yaniv Lavi (Dary) 2016-05-02 07:16:19 EDT
Bronce, can you help with getting a QE ack here? This is blocking a feature.
Comment 25 Yaniv Lavi (Dary) 2016-05-09 08:41:29 EDT
Any updates?
Comment 26 Lukas Vrabec 2016-05-25 07:10:15 EDT
Fix added to Fedora Rawhide and Fedora 24. After some testing, it will be backported to rhel-7.3.
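
(Once the updated selinux-policy build is installed, the change can be sanity-checked with setools; the commands below use the setools 3 syntax shipped on RHEL 7:)

# confirm the new filesystem type exists
seinfo -tcephfs_t

# confirm sanlock_t is allowed to read/write cephfs_t files
sesearch --allow -s sanlock_t -t cephfs_t -c file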
Comment 37 Elad 2016-07-13 10:08:47 EDT
We're working on deployment of a new cephfs setup, once done I'll check the fix.
Comment 38 Yaniv Kaul 2016-07-21 10:11:55 EDT
(In reply to Elad from comment #37)
> We're working on deployment of a new cephfs setup, once done I'll check the
> fix.

Elad - any updates?
Comment 39 Aharon Canan 2016-07-21 10:18:58 EDT
(In reply to Yaniv Kaul from comment #38)
> (In reply to Elad from comment #37)
> > We're working on deployment of a new cephfs setup, once done I'll check the
> > fix.
> 
> Elad - any updates?

Not yet, we are waiting for builds from the Ceph team.
We will verify as soon as we get them.
Comment 40 Elad 2016-07-26 06:40:42 EDT
We have an operational ceph setup with cephfs deployed on both the ceph servers and on the client (RHEL hypervisor).

I'm trying to install the SELinux RPMs provided in comment #36, but it fails with dependency issues across many packages.

Will you be able to provide us a host with these SELinux packages installed?
Comment 43 Yaniv Lavi (Dary) 2016-08-01 07:20:20 EDT
Why has this not been cloned, and when will this be released to RHEL 7.2?
Comment 48 errata-xmlrpc 2016-11-03 22:44:03 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2283.html
