Due to insufficient SELinux policy rules, the sanlock domain was previously unable to access a Ceph file system. As a consequence, sanlock failed to acquire a lock on a Ceph device. The SELinux policy rules have been updated. As a result, the Ceph file system is now correctly labeled as cephfs_t and is accessible to the sanlock domain.
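For reference, the new labeling can be checked on a host running the updated policy, for example (the mount point is the one from this report; substitute your own):
# confirm the cephfs_t type exists in the loaded policy
seinfo -t cephfs_t
# check the label applied to the mounted CephFS path
ls -Zd /rhev/data-center/mnt/10.35.65.18:_222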
Created attachment 1133786: /var/log/
Description of problem:
While working on Bug 1095615 - Allow the use of CephFS as a storage domain within RHEV, we discovered that sanlock tries to access the 'ids' file on dev="ceph" and is denied by SELinux for read and write:
type=AVC msg=audit(1457356943.727:329): avc: denied { read write } for pid=30337 comm="sanlock" name="ids" dev="ceph" ino=1099511627891 scontext=system_u:system_r:sanlock_t:s0-s0:c0.c1023 tcontext=system_u:object_r:unlabeled_t:s0 tclass=file
This blocks CephFS integration with RHEV while SELinux is in enforcing mode.
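For reference, the denials can be pulled from the audit log and translated into the rules they would require (for diagnosis only; the proper fix is new labeling, not allowing access to unlabeled_t):
ausearch -m avc -c sanlock
ausearch -m avc -c sanlock | audit2allow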
Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux Server release 7.2 Beta (Maipo)
Kernel - 3.10.0-327.13.1.el7.x86_64 #1 SMP Mon Feb 29 13:22:02 EST 2016 x86_64 x86_64 x86_64 GNU/Linux
libselinux-python-2.2.2-6.el7.x86_64
selinux-policy-targeted-3.13.1-60.el7_2.3.noarch
libselinux-ruby-2.2.2-6.el7.x86_64
libselinux-2.2.2-6.el7.x86_64
libselinux-utils-2.2.2-6.el7.x86_64
selinux-policy-3.13.1-60.el7_2.3.noarch
sanlock-3.2.4-2.el7_2.x86_64
vdsm-4.17.23-0.el7ev.noarch
libvirt-daemon-1.2.17-13.el7_2.4.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.8.x86_64
How reproducible:
Always
Steps to Reproduce:
1. In RHEV, create a POSIX compliant FS storage domain with VFS type 'ceph'
Actual results:
The storage domain cannot be attached to the data center because sanlock fails to acquire a lock (the open error -13 below is EACCES, matching the SELinux denial; add_lockspace then fails with -19, ENODEV):
sanlock.log:
2016-03-07 15:20:10+0200 363554 [706]: s1:r3 resource a2cd2f8a-26b7-4abd-9572-48106ca7a0b7:SDM:/rhev/data-center/mnt/10.35.64.11:_vol_RHEV_Storage_elad_2/a2cd2f8a-26b7-4abd-9572-48106ca7a0b7/dom_md/leases:1048576 for 3,13,29690
2016-03-07 15:22:23+0200 363687 [11255]: s8 lockspace f15bae05-29e6-4990-9404-4931184dcf3b:3:/rhev/data-center/mnt/10.35.65.18:_222/f15bae05-29e6-4990-9404-4931184dcf3b/dom_md/ids:0
2016-03-07 15:22:23+0200 363687 [30337]: open error -13 /rhev/data-center/mnt/10.35.65.18:_222/f15bae05-29e6-4990-9404-4931184dcf3b/dom_md/ids
2016-03-07 15:22:23+0200 363687 [30337]: s8 open_disk /rhev/data-center/mnt/10.35.65.18:_222/f15bae05-29e6-4990-9404-4931184dcf3b/dom_md/ids error -13
2016-03-07 15:22:24+0200 363688 [11255]: s8 add_lockspace fail result -19
vdsm.log:
jsonrpc.Executor/1::ERROR::2016-03-07 15:22:24,729::task::866::Storage.TaskManager.Task::(_setError) Task=`33641892-3c2c-4b70-b3b3-1cffdbfb3921`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 1210, in attachStorageDomain
pool.attachSD(sdUUID)
File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
return method(self, *args, **kwargs)
File "/usr/share/vdsm/storage/sp.py", line 940, in attachSD
dom.acquireHostId(self.id)
File "/usr/share/vdsm/storage/sd.py", line 533, in acquireHostId
self._clusterLock.acquireHostId(hostId, async)
File "/usr/share/vdsm/storage/clusterlock.py", line 234, in acquireHostId
raise se.AcquireHostIdFailure(self._sdUUID, e)
AcquireHostIdFailure: Cannot acquire host id: (u'f15bae05-29e6-4990-9404-4931184dcf3b', SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))
Additional info:
/var/log/
(In reply to Miroslav Grepl from comment #4)
> How/where is name="ids" created?
VDSM creates it when initializing the storage domain. Miroslav, can you elaborate on what details exactly you need here?
OK, I overlooked it.
/rhev/data-center/mnt/10.35.65.18:_222/f15bae05-29e6-4990-9404-4931184dcf3b/dom_md/ids
We still have the same issue here with mislabeling of /rhev/data-center. Did you have SELinux disabled?
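For example, the following would show the current mode and the labels (paths taken from the logs above):
getenforce
ls -Zd /rhev/data-center/mnt/10.35.65.18:_222
ls -Z /rhev/data-center/mnt/10.35.65.18:_222/f15bae05-29e6-4990-9404-4931184dcf3b/dom_md/ids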
(In reply to Elad from comment #10)
> /rhev/data-center/mnt/10.35.65.18:_222 was mounted as follows:
>
> jsonrpc.Executor/1::DEBUG::2016-03-07
> 15:22:15,753::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/taskset
> --cpu-list 0-7 /usr/bin/sudo -n /usr/bin/mount -t ceph -o
> name=admin,secret=AQC3W1dWhplVLBAARW/zKtQzjafZDKAGfVpWbQ== 10.35.65.18:/222
> /rhev/data-center/mnt/10.35.65.18:_222 (cwd None)
OK, you will need to specify an SELinux mount option to label it with "system_u:object_r:mnt_t:s0", or fix the labeling after the mount.
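For example, something like the following (secret elided; this assumes CephFS accepts the generic context= mount option, which is what the suggestion above relies on):
mount -t ceph -o name=admin,secret=<key>,context="system_u:object_r:mnt_t:s0" 10.35.65.18:/222 /rhev/data-center/mnt/10.35.65.18:_222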
unlabeled_t indicates something has gone very wrong and needs to be dealt with by a real person. In this case, you are mounting a Ceph filesystem.
How is an action like mounting a Ceph filesystem considered a very wrong flow that needs manual steps or non-standard mounting?
This is not a one-off; it reproduces consistently, and we need this to work to certify CephFS.
I would say that we need to create a new fstype for the Ceph filesystem, such as cephfs_t, and then add allow rules to the sanlock policy.
For example:
type cephfs_t;
fs_type(cephfs_t)
genfscon ceph / gen_context(system_u:object_r:cephfs_t,s0)
This solution will increase security from the SELinux point of view, and we will not need to allow the sanlock_t domain to read/write mnt_t files.
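To make the second half of that concrete, the accompanying sanlock rules might look roughly like the following; the exact permission set is an illustration, not necessarily what ships in the final policy:
# illustrative only -- the shipped policy may use different interfaces/permissions
allow sanlock_t cephfs_t:dir { search getattr };
allow sanlock_t cephfs_t:file { open read write getattr lock };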
(In reply to Yaniv Dary from comment #15)
> How is an action like mounting a Ceph filesystem considered a very wrong
> flow that needs manual steps or non-standard mounting?
> This is not a one-off; it reproduces consistently, and we need this to work
> to certify CephFS.
Do you have a test machine where we could play around?
Thank you.
(In reply to Yaniv Kaul from comment #38)
> (In reply to Elad from comment #37)
> > We're working on deployment of a new cephfs setup, once done I'll check the
> > fix.
>
> Elad - any updates?
Not yet; we are waiting for builds from the Ceph team.
We will verify as soon as we get them.
We have an operational Ceph setup with CephFS deployed on both the Ceph servers and the client (a RHEL hypervisor).
I'm trying to install the SELinux RPMs provided in comment #36, but the installation fails with many dependency issues across many packages.
Will you be able to provide us with a host that has these SELinux packages installed?
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://rhn.redhat.com/errata/RHBA-2016-2283.html