Bug 1167277 - SELinux prevents hosted engine from being deployed on RHEL 6.6 with iSCSI support
Summary: SELinux prevents hosted engine from being deployed on RHEL 6.6 with iSCSI support
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: selinux-policy
Version: 6.7
Hardware: All
OS: Linux
Priority: high
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Miroslav Grepl
QA Contact: Milos Malik
URL:
Whiteboard: storage
Depends On:
Blocks: 1159946 1160808 1169775
 
Reported: 2014-11-24 11:10 UTC by Miroslav Grepl
Modified: 2015-07-22 07:09 UTC
CC List: 22 users

Fixed In Version: selinux-policy-3.7.19-261.el6
Doc Type: Bug Fix
Doc Text:
Clone Of: 1160808
Cloned to: 1169775
Environment:
Last Closed: 2015-07-22 07:09:51 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System: Red Hat Product Errata  ID: RHBA-2015:1375  Private: 0  Priority: normal  Status: SHIPPED_LIVE  Summary: selinux-policy bug fix and enhancement update  Last Updated: 2015-07-20 18:07:47 UTC

Description Miroslav Grepl 2014-11-24 11:10:38 UTC
+++ This bug was initially created as a clone of Bug #1160808 +++

Description of problem:
Deploying the hosted engine via iSCSI on RHEL 6.6 hosts fails due to SELinux denials.

Version-Release number of selected component (if applicable):
# rpm -qa|egrep "(selinux-policy|libvirt|qemu)"|sort 
gpxe-roms-qemu-0.9.7-6.12.el6.noarch
libvirt-0.10.2-46.el6_6.1.x86_64
libvirt-client-0.10.2-46.el6_6.1.x86_64
libvirt-lock-sanlock-0.10.2-46.el6_6.1.x86_64
libvirt-python-0.10.2-46.el6_6.1.x86_64
qemu-img-rhev-0.12.1.2-2.448.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.448.el6.x86_64
selinux-policy-3.7.19-260.el6.noarch
selinux-policy-targeted-3.7.19-260.el6.noarch

RHEV-M 3.5.0 vt8

How reproducible:
100%

Steps to Reproduce:
1. Deploy the hosted engine via iSCSI (a minimal invocation is sketched below).
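
(A minimal sketch of starting the deployment, assuming ovirt-hosted-engine-setup is installed on the host and the iSCSI target details are entered at the interactive prompts:)
# hosted-engine --deploy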

Actual results:
From hosted-engine setup:
[ INFO  ] Engine replied: DB Up!Welcome to Health Status!
          Enter the name of the cluster to which you want to add the host (Default) [Default]: 
[ INFO  ] Waiting for the host to become operational in the engine. This may take several minutes...
[ ERROR ] The VDSM host was found in a failed state. Please check engine and bootstrap installation logs.
[ ERROR ] Unable to add hosted_engine_1 to the manager
          Please shutdown the VM allowing the system to launch it as a monitored service.
          The system will wait until the VM is down.
[ ERROR ] Failed to execute stage 'Closing up': [Errno 111] Connection refused
[ INFO  ] Stage: Clean up
[ ERROR ] Failed to execute stage 'Clean up': [Errno 111] Connection refused
[ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20141105163830.conf'


From VDSM logs:
Thread-73::DEBUG::2014-11-05 16:38:13,471::domainMonitor::201::Storage.DomainMonitorThread::(_monitorLoop) Unable to release the host id 1 for domain a4eed2bb-5acc-4056-8940-5cb55ccf1b6d
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/domainMonitor.py", line 198, in _monitorLoop
    self.domain.releaseHostId(self.hostId, unused=True)
  File "/usr/share/vdsm/storage/sd.py", line 480, in releaseHostId
    self._clusterLock.releaseHostId(hostId, async, unused)
  File "/usr/share/vdsm/storage/clusterlock.py", line 252, in releaseHostId
    raise se.ReleaseHostIdFailure(self._sdUUID, e)
ReleaseHostIdFailure: Cannot release host id: ('a4eed2bb-5acc-4056-8940-5cb55ccf1b6d', SanlockException(16, 'Sanlock lockspace remove failure', 'Device or resource busy'))
VM Channels Listener::INFO::2014-11-05 16:38:13,472::vmchannels::183::vds::(run) VM channels listener thread has ended.


From SELinux logs:
----
time->Wed Nov  5 16:40:08 2014
type=SYSCALL msg=audit(1415202008.743:1587): arch=c000003e syscall=6 success=yes exit=0 a0=7fffef0a8e10 a1=7fffef0a4180 a2=7fffef0a4180 a3=6 items=0 ppid=1838 pid=2074 auid=4294967295 uid=175 gid=175 euid=175 suid=175 fsuid=175 egid=175 sgid=175 fsgid=175 tty=(none) ses=4294967295 comm="python" exe="/usr/bin/python" subj=system_u:system_r:rhev_agentd_t:s0 key=(null)
type=AVC msg=audit(1415202008.743:1587): avc:  denied  { getattr } for  pid=2074 comm="python" path="/dev/.udev/db/block:sr0" dev=devtmpfs ino=9604 scontext=system_u:system_r:rhev_agentd_t:s0 tcontext=system_u:object_r:udev_tbl_t:s0 tclass=file
----
time->Wed Nov  5 16:40:08 2014
type=SYSCALL msg=audit(1415202008.743:1588): arch=c000003e syscall=2 success=yes exit=6 a0=7fffef0a8e10 a1=0 a2=1b6 a3=0 items=0 ppid=1838 pid=2074 auid=4294967295 uid=175 gid=175 euid=175 suid=175 fsuid=175 egid=175 sgid=175 fsgid=175 tty=(none) ses=4294967295 comm="python" exe="/usr/bin/python" subj=system_u:system_r:rhev_agentd_t:s0 key=(null)
type=AVC msg=audit(1415202008.743:1588): avc:  denied  { open } for  pid=2074 comm="python" name="block:sr0" dev=devtmpfs ino=9604 scontext=system_u:system_r:rhev_agentd_t:s0 tcontext=system_u:object_r:udev_tbl_t:s0 tclass=file
type=AVC msg=audit(1415202008.743:1588): avc:  denied  { read } for  pid=2074 comm="python" name="block:sr0" dev=devtmpfs ino=9604 scontext=system_u:system_r:rhev_agentd_t:s0 tcontext=system_u:object_r:udev_tbl_t:s0 tclass=file
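
(For reference, audit2allow would translate these denials into roughly the following rule:)
#============= rhev_agentd_t ==============
allow rhev_agentd_t udev_tbl_t:file { getattr open read };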


Expected results:
The deployment should succeed.

Additional info:
We recently faced a similar issue on EL7; see https://bugzilla.redhat.com/show_bug.cgi?id=1146529

--- Additional comment from Allon Mureinik on 2014-11-06 03:19:14 EST ---

Nir, doesn't the fix for bug 1127460 cover this one too?

--- Additional comment from Nir Soffer on 2014-11-06 04:13:48 EST ---

Simone: Why do you think this is related to storage?

Allon: I don't see any relation to bug 1127460. Did the hosted engine vm pause?

--- Additional comment from Elad on 2014-11-06 04:29:23 EST ---

Did you try to deploy the HE over a LUN which was used for a storage domain previously? 

Can you please attach the setup logs?

--- Additional comment from Simone Tiraboschi on 2014-11-06 05:54:32 EST ---

(In reply to Nir Soffer from comment #2)
> Simone: Why do you think this is related to storage?

Just because I noticed a sanlock failure; I'm not really sure about that.

ReleaseHostIdFailure: Cannot release host id: ('a4eed2bb-5acc-4056-8940-5cb55ccf1b6d', SanlockException(16, 'Sanlock lockspace remove failure', 'Device or resource busy'))

> Allon: I don't see any relation to bug 1127460. Did the hosted engine vm
> pause?

If I remember correctly, no.


(In reply to Elad from comment #3)
> Did you try to deploy the HE over a LUN which was used for a storage domain
> previously? 

No, it was a fresh one.

> Can you please attach the setup logs?

Of course.

--- Additional comment from Simone Tiraboschi on 2014-11-06 07:16:18 EST ---



--- Additional comment from Simone Tiraboschi on 2014-11-06 07:17:39 EST ---



--- Additional comment from Simone Tiraboschi on 2014-11-06 07:21:17 EST ---



--- Additional comment from Miroslav Grepl on 2014-11-07 04:55:39 EST ---

I see

type=AVC msg=audit(1415260556.242:265555): avc:  denied  { getattr } for  pid=23130 comm="python" path="/dev/.udev/db/block:sr0" dev=devtmpfs ino=92089 scontext=system_u:system_r:rhev_agentd_t:s0 tcontext=system_u:object_r:udev_tbl_t:s0 tclass=file
type=SYSCALL msg=audit(1415260556.242:265555): arch=c000003e syscall=6 success=yes exit=0 a0=7fff19386ff0 a1=7fff19382360 a2=7fff19382360 a3=6 items=0 ppid=1898 pid=23130 auid=4294967295 uid=175 gid=175 euid=175 suid=175 fsuid=175 egid=175 sgid=175 fsgid=175 tty=(none) ses=4294967295 comm="python" exe="/usr/bin/python" subj=system_u:system_r:rhev_agentd_t:s0 key=(null)
type=AVC msg=audit(1415260556.242:265556): avc:  denied  { read } for  pid=23130 comm="python" name="block:sr0" dev=devtmpfs ino=92089 scontext=system_u:system_r:rhev_agentd_t:s0 tcontext=system_u:object_r:udev_tbl_t:s0 tclass=file
type=AVC msg=audit(1415260556.242:265556): avc:  denied  { open } for  pid=23130 comm="python" name="block:sr0" dev=devtmpfs ino=92089 scontext=system_u:system_r:rhev_agentd_t:s0 tcontext=system_u:object_r:udev_tbl_t:s0 tclass=file


Did it work in permissive mode?
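
(A quick way to check, assuming root on the host: switch to permissive with "setenforce 0", retry the deployment, then restore enforcing with "setenforce 1".)
# setenforce 0
# hosted-engine --deploy
# setenforce 1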

--- Additional comment from Simone Tiraboschi on 2014-11-07 05:05:53 EST ---

(In reply to Miroslav Grepl from comment #8)
> Did it work in permissive mode?

Yes, it does.

--- Additional comment from Miroslav Grepl on 2014-11-21 10:13:49 EST ---

Could you test it with

# grep rhev_agentd /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp

in enforcing mode?
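
(For reference, the mypol.te generated by audit2allow would contain roughly:

module mypol 1.0;

require {
	type rhev_agentd_t;
	type udev_tbl_t;
	class file { getattr open read };
}

#============= rhev_agentd_t ==============
allow rhev_agentd_t udev_tbl_t:file { getattr open read };
)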

--- Additional comment from Simone Tiraboschi on 2014-11-21 11:25:01 EST ---

(In reply to Miroslav Grepl from comment #10)
> Could you test it with
> 
> # grep rhev_agentd /var/log/audit/audit.log | audit2allow -M mypol
> # semodule -i mypol.pp
> 
> in enforcing mode?

After that it seems to work as expected.

--- Additional comment from Miroslav Grepl on 2014-11-24 06:09:13 EST ---

diff --git a/rhev.te b/rhev.te
index eeee78a..8b7aa12 100644
--- a/rhev.te
+++ b/rhev.te
@@ -93,6 +93,10 @@ optional_policy(`
 ')
 
 optional_policy(`
+    udev_read_db(rhev_agentd_t)
+')
+
+optional_policy(`

is needed.

Comment 2 Miroslav Grepl 2014-12-02 08:57:34 UTC
commit 7c4c4a6788e872b813a2abe854b39b871b9bb7e2
Author: Miroslav Grepl <mgrepl>
Date:   Tue Dec 2 09:55:01 2014 +0100

    Allow rhev-agentd to access /dev/.udev/db/block:sr0.
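
(Once the rebuilt selinux-policy package is installed, the new rule can be checked with something like the following, assuming setools-console is available:)

# sesearch -A -s rhev_agentd_t -t udev_tbl_t -c file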

Comment 6 errata-xmlrpc 2015-07-22 07:09:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1375.html

