Description of problem:
Deploying hosted engine via iSCSI on RHEL 6.6 hosts fails due to SELinux denials.

Version-Release number of selected component (if applicable):
# rpm -qa | egrep "(selinux-policy|libvirt|qemu)" | sort
gpxe-roms-qemu-0.9.7-6.12.el6.noarch
libvirt-0.10.2-46.el6_6.1.x86_64
libvirt-client-0.10.2-46.el6_6.1.x86_64
libvirt-lock-sanlock-0.10.2-46.el6_6.1.x86_64
libvirt-python-0.10.2-46.el6_6.1.x86_64
qemu-img-rhev-0.12.1.2-2.448.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.448.el6.x86_64
selinux-policy-3.7.19-260.el6.noarch
selinux-policy-targeted-3.7.19-260.el6.noarch

RHEV-M 3.5.0 vt8

How reproducible:
100%

Steps to Reproduce:
1. deploy hosted engine via iSCSI

Actual results:
From hosted-engine setup:

[ INFO ] Engine replied: DB Up!Welcome to Health Status!
          Enter the name of the cluster to which you want to add the host (Default) [Default]:
[ INFO ] Waiting for the host to become operational in the engine. This may take several minutes...
[ ERROR ] The VDSM host was found in a failed state. Please check engine and bootstrap installation logs.
[ ERROR ] Unable to add hosted_engine_1 to the manager
          Please shutdown the VM allowing the system to launch it as a monitored service.
          The system will wait until the VM is down.
[ ERROR ] Failed to execute stage 'Closing up': [Errno 111] Connection refused
[ INFO ] Stage: Clean up
[ ERROR ] Failed to execute stage 'Clean up': [Errno 111] Connection refused
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20141105163830.conf'

From VDSM logs:

Thread-73::DEBUG::2014-11-05 16:38:13,471::domainMonitor::201::Storage.DomainMonitorThread::(_monitorLoop) Unable to release the host id 1 for domain a4eed2bb-5acc-4056-8940-5cb55ccf1b6d
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/domainMonitor.py", line 198, in _monitorLoop
    self.domain.releaseHostId(self.hostId, unused=True)
  File "/usr/share/vdsm/storage/sd.py", line 480, in releaseHostId
    self._clusterLock.releaseHostId(hostId, async, unused)
  File "/usr/share/vdsm/storage/clusterlock.py", line 252, in releaseHostId
    raise se.ReleaseHostIdFailure(self._sdUUID, e)
ReleaseHostIdFailure: Cannot release host id: ('a4eed2bb-5acc-4056-8940-5cb55ccf1b6d', SanlockException(16, 'Sanlock lockspace remove failure', 'Device or resource busy'))
VM Channels Listener::INFO::2014-11-05 16:38:13,472::vmchannels::183::vds::(run) VM channels listener thread has ended.
From SELinux logs:

----
time->Wed Nov 5 16:40:08 2014
type=SYSCALL msg=audit(1415202008.743:1587): arch=c000003e syscall=6 success=yes exit=0 a0=7fffef0a8e10 a1=7fffef0a4180 a2=7fffef0a4180 a3=6 items=0 ppid=1838 pid=2074 auid=4294967295 uid=175 gid=175 euid=175 suid=175 fsuid=175 egid=175 sgid=175 fsgid=175 tty=(none) ses=4294967295 comm="python" exe="/usr/bin/python" subj=system_u:system_r:rhev_agentd_t:s0 key=(null)
type=AVC msg=audit(1415202008.743:1587): avc: denied { getattr } for pid=2074 comm="python" path="/dev/.udev/db/block:sr0" dev=devtmpfs ino=9604 scontext=system_u:system_r:rhev_agentd_t:s0 tcontext=system_u:object_r:udev_tbl_t:s0 tclass=file
----
time->Wed Nov 5 16:40:08 2014
type=SYSCALL msg=audit(1415202008.743:1588): arch=c000003e syscall=2 success=yes exit=6 a0=7fffef0a8e10 a1=0 a2=1b6 a3=0 items=0 ppid=1838 pid=2074 auid=4294967295 uid=175 gid=175 euid=175 suid=175 fsuid=175 egid=175 sgid=175 fsgid=175 tty=(none) ses=4294967295 comm="python" exe="/usr/bin/python" subj=system_u:system_r:rhev_agentd_t:s0 key=(null)
type=AVC msg=audit(1415202008.743:1588): avc: denied { open } for pid=2074 comm="python" name="block:sr0" dev=devtmpfs ino=9604 scontext=system_u:system_r:rhev_agentd_t:s0 tcontext=system_u:object_r:udev_tbl_t:s0 tclass=file
type=AVC msg=audit(1415202008.743:1588): avc: denied { read } for pid=2074 comm="python" name="block:sr0" dev=devtmpfs ino=9604 scontext=system_u:system_r:rhev_agentd_t:s0 tcontext=system_u:object_r:udev_tbl_t:s0 tclass=file

Expected results:
the deploy should succeed

Additional info:
recently we faced a similar issue on EL7, see https://bugzilla.redhat.com/show_bug.cgi?id=1146529
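For quick triage, AVC records like the ones above can be pulled apart with a small parser. This is a hypothetical helper written for this report, not part of audit2allow or any tool mentioned here; the field names follow the audit record format shown in the logs:

```python
import re

# Extract the interesting fields from a raw AVC denial record.
AVC_RE = re.compile(
    r'avc:\s+denied\s+\{\s*(?P<perms>[^}]+?)\s*\}\s+for\s+.*?'
    r'comm="(?P<comm>[^"]+)".*?'
    r'scontext=(?P<scontext>\S+)\s+'
    r'tcontext=(?P<tcontext>\S+)\s+'
    r'tclass=(?P<tclass>\S+)'
)

def parse_avc(line):
    """Return a dict of fields from one AVC denial line, or None if no match."""
    m = AVC_RE.search(line)
    if not m:
        return None
    d = m.groupdict()
    d["perms"] = d["perms"].split()  # e.g. "read write" -> ["read", "write"]
    return d

# One of the denials from this bug, verbatim:
record = ('type=AVC msg=audit(1415202008.743:1588): avc: denied { read } '
          'for pid=2074 comm="python" name="block:sr0" dev=devtmpfs ino=9604 '
          'scontext=system_u:system_r:rhev_agentd_t:s0 '
          'tcontext=system_u:object_r:udev_tbl_t:s0 tclass=file')

print(parse_avc(record))
```

Running it over the audit log quickly shows that every denial here is rhev_agentd_t reading udev_tbl_t files, which is what the policy fix below addresses.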
Nir, doesn't the fix for bug 1127460 cover this one too?
Simone: Why do you think this is related to storage? Allon: I don't see any relation to bug 1127460. Did the hosted engine vm pause?
Did you try to deploy the HE over a LUN which was used for a storage domain previously? Can you please attach the setup logs?
(In reply to Nir Soffer from comment #2)
> Simone: Why do you think this is related to storage?
Just because I noticed a sanlock failure; not really sure about that.

ReleaseHostIdFailure: Cannot release host id: ('a4eed2bb-5acc-4056-8940-5cb55ccf1b6d', SanlockException(16, 'Sanlock lockspace remove failure', 'Device or resource busy'))

> Allon: I don't see any relation to bug 1127460. Did the hosted engine vm
> pause?
If I remember correctly, no.

(In reply to Elad from comment #3)
> Did you try to deploy the HE over a LUN which was used for a storage domain
> previously?
No, it was a fresh one.

> Can you please attach the setup logs?
Of course.
Created attachment 954422 [details] ovirt-hosted-engine-setup
Created attachment 954423 [details] vdsm
Created attachment 954424 [details] audit
I see

type=AVC msg=audit(1415260556.242:265555): avc: denied { getattr } for pid=23130 comm="python" path="/dev/.udev/db/block:sr0" dev=devtmpfs ino=92089 scontext=system_u:system_r:rhev_agentd_t:s0 tcontext=system_u:object_r:udev_tbl_t:s0 tclass=file
type=SYSCALL msg=audit(1415260556.242:265555): arch=c000003e syscall=6 success=yes exit=0 a0=7fff19386ff0 a1=7fff19382360 a2=7fff19382360 a3=6 items=0 ppid=1898 pid=23130 auid=4294967295 uid=175 gid=175 euid=175 suid=175 fsuid=175 egid=175 sgid=175 fsgid=175 tty=(none) ses=4294967295 comm="python" exe="/usr/bin/python" subj=system_u:system_r:rhev_agentd_t:s0 key=(null)
type=AVC msg=audit(1415260556.242:265556): avc: denied { read } for pid=23130 comm="python" name="block:sr0" dev=devtmpfs ino=92089 scontext=system_u:system_r:rhev_agentd_t:s0 tcontext=system_u:object_r:udev_tbl_t:s0 tclass=file
type=AVC msg=audit(1415260556.242:265556): avc: denied { open } for pid=23130 comm="python" name="block:sr0" dev=devtmpfs ino=92089 scontext=system_u:system_r:rhev_agentd_t:s0 tcontext=system_u:object_r:udev_tbl_t:s0 tclass=file

Did it work in permissive mode?
(In reply to Miroslav Grepl from comment #8)
> Did it work in permissive mode?
Yes, it does.
Could you test it with

# grep rhev_agentd /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp

in enforcing mode?
(In reply to Miroslav Grepl from comment #10)
> Could you test it with
>
> # grep rhev_agentd /var/log/audit/audit.log | audit2allow -M mypol
> # semodule -i mypol.pp
>
> in enforcing mode?

After that it seems to work as expected.
diff --git a/rhev.te b/rhev.te
index eeee78a..8b7aa12 100644
--- a/rhev.te
+++ b/rhev.te
@@ -93,6 +93,10 @@ optional_policy(`
 ')
 
 optional_policy(`
+	udev_read_db(rhev_agentd_t)
+')
+
+optional_policy(`

is needed.
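For context, udev_read_db() is a refpolicy interface. Sketched from memory of refpolicy conventions (the exact rules generated on RHEL 6 may differ), the call above expands to roughly:

```
# Illustrative expansion of udev_read_db(rhev_agentd_t):
# let the RHEV guest agent read the udev device database (udev_tbl_t),
# which covers the /dev/.udev/db/block:sr0 denials in this bug.
allow rhev_agentd_t udev_tbl_t:dir list_dir_perms;
allow rhev_agentd_t udev_tbl_t:file read_file_perms;
```

This matches the denied { getattr open read } operations on udev_tbl_t files seen in the audit log.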
Miroslav, isn't the dependency reversed here? IIUC, bug 1167277 should supply a new selinux-policy and then RHEV should consume it (this bug)?
Feel free to edit it.
Should this block GA? The workaround is simple: switch SELinux to permissive, then switch it back after deployment...
(In reply to Michal Skrivanek from comment #15)
> should this block GA? - workaround is simple, switch selinux to permissive,
> after deployment switch it back...
I'm fine with not blocking GA on this, but it's not my call.
Ultimately, a PM should ack/nack this.

Doron - you understand HE better than me - your two cents here?
(In reply to Allon Mureinik from comment #16)
> (In reply to Michal Skrivanek from comment #15)
> > should this block GA? - workaround is simple, switch selinux to permissive,
> > after deployment switch it back...
> I'm fine with not blocking GA on this, but not my call.
> Ultimately, a PM should ack/nack this.
>
> Doron - you understand HE better than me - your two cents here?

Since the RHEL bug 1167277 moved to MODIFIED, we should be fine now, so there's no point in keeping this one as a blocker.
We need a patch to update vdsm.spec.in to require this rpm once it's out.

If this indeed solves the issue, a customer could simply yum upgrade selinux-policy-targeted to avoid it. Ugly, but not a blocker - assuming RHEV's QA team can verify this.
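The vdsm.spec.in change being discussed would presumably look something like the fragment below. This is an illustrative sketch, not the actual patch; the version string is taken from the fixed build referenced later in this bug, and the %if guard is an assumption about how the spec scopes EL6-only requirements:

```
# vdsm.spec.in (sketch): pull in the fixed SELinux policy on RHEL 6
%if 0%{?rhel} == 6
Requires: selinux-policy-targeted >= 3.7.19-260.el6_6.1
%endif
```

With this in place, a plain yum install/upgrade of vdsm would drag in the fixed policy automatically instead of relying on a manual selinux-policy-targeted upgrade.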
Can we please verify this with selinux-policy-3.7.19-260.el6_6.1 (https://brewweb.devel.redhat.com/buildinfo?buildID=401412)?
(In reply to Allon Mureinik from comment #19)
> Can we please verify this with selinux-policy-3.7.19-260.el6_6.1
> (https://brewweb.devel.redhat.com/buildinfo?buildID=401412)?

Allon,

In case we are using a specific package version which is not part of the regular installation, I am not sure we can set it to ON_QA. Is it going to be part of the dependencies?
(In reply to Aharon Canan from comment #20)
> (In reply to Allon Mureinik from comment #19)
> > Can we please verify this with selinux-policy-3.7.19-260.el6_6.1
> > (https://brewweb.devel.redhat.com/buildinfo?buildID=401412)?
>
> Allon,
>
> In case we are using a specific package version which is not part of the
> regular installation, I am not sure we can set it to ON_QA.
> Is it going to be part of the dependencies?

Obviously.
(In reply to Aharon Canan from comment #20)
> (In reply to Allon Mureinik from comment #19)
> > Can we please verify this with selinux-policy-3.7.19-260.el6_6.1
> > (https://brewweb.devel.redhat.com/buildinfo?buildID=401412)?
>
> Allon,
>
> In case we are using a specific package version which is not part of the
> regular installation, I am not sure we can set it to ON_QA.
> Is it going to be part of the dependencies?

On second thought, you're right.

We can proceed in two directions here:
1. dev - should add a dependency in VDSM (in the works, see http://gerrit.ovirt.org/#/c/35973)
2. qa - can, if they wish, test by manually yum upgrading.

Moving bug back to POST.
(In reply to Allon Mureinik from comment #22)
> We can proceed in two directions here:
> 1. dev - should add a dependency in VDSM (in the works, see
> http://gerrit.ovirt.org/#/c/35973)
> 2. qa - can, if they wish, test by manually yum upgrading.
>
> Moving bug back to POST.

Allon, I'm unable to deploy hosted-engine due to https://bugzilla.redhat.com/show_bug.cgi?id=1167074
(In reply to Elad from comment #23)
> Allon, I'm unable to deploy hosted-engine due to
> https://bugzilla.redhat.com/show_bug.cgi?id=1167074

I managed to deploy using the default SELinux policy; will try using https://brewweb.devel.redhat.com/buildinfo?buildID=401412
(In reply to Allon Mureinik from comment #19)
> Can we please verify this with selinux-policy-3.7.19-260.el6_6.1
> (https://brewweb.devel.redhat.com/buildinfo?buildID=401412)?

Checked deployment using:
RHEL 6.6
libselinux-utils-2.0.94-5.8.el6.x86_64
libselinux-2.0.94-5.8.el6.x86_64
selinux-policy-targeted-3.7.19-260.el6_6.1.noarch
libselinux-ruby-2.0.94-5.8.el6.x86_64
libselinux-python-2.0.94-5.8.el6.x86_64
selinux-policy-3.7.19-260.el6_6.1.noarch
ovirt-hosted-engine-setup-1.2.1-7.el6ev.noarch
vdsm-4.16.8.1-2.el6ev.x86_64

Deployment went fine.
Cannot be tested due to https://bugzilla.redhat.com/show_bug.cgi?id=1171452
I managed to deploy iSCSI on a RHEL 6.6 host with the following packages installed:

libselinux-utils-2.0.94-5.8.el6.x86_64
libselinux-ruby-2.0.94-5.8.el6.x86_64
selinux-policy-targeted-3.7.19-260.el6_6.1.noarch
libselinux-2.0.94-5.8.el6.x86_64
libselinux-python-2.0.94-5.8.el6.x86_64
selinux-policy-3.7.19-260.el6_6.1.noarch
vdsm-4.16.8.1-4.el6ev.x86_64
ovirt-hosted-engine-ha-1.2.4-5.el6ev.noarch
ovirt-hosted-engine-setup-1.2.1-8.el6ev.noarch
sanlock-2.8-1.el6.x86_64
* Used RHEV 3.5 vt13.5