Bug 1159946 - ISCSI HE deployment failed | Failed to execute stage 'Environment customization': <Fault 1: "<type 'exceptions.TypeError'>:cannot marshal None unless allow_none is enabled">
Summary: ISCSI HE deployment failed | Failed to execute stage 'Environment customization': <Fault 1: "<type 'exceptions.TypeError'>:cannot marshal None unless allow_none is enabled">
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-hosted-engine-setup
Version: 3.5.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 3.5.0
Assignee: Simone Tiraboschi
QA Contact: meital avital
URL:
Whiteboard: integration
Depends On: 1160808 1167277
Blocks: 1067162 1139019
 
Reported: 2014-11-03 16:37 UTC by Nikolai Sednev
Modified: 2015-02-12 14:06 UTC
CC: 18 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-02-12 14:06:39 UTC
oVirt Team: ---
Target Upstream Version:
Embargoed:


Attachments
logs (22.17 KB, application/x-gzip)
2014-11-03 16:37 UTC, Nikolai Sednev

Description Nikolai Sednev 2014-11-03 16:37:41 UTC
Created attachment 953169 [details]
logs

Description of problem:
ISCSI HE deployment failed | Failed to execute stage 'Environment customization': <Fault 1: "<type 'exceptions.TypeError'>:cannot marshal None unless allow_none is enabled">
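
For context, this TypeError comes from Python's xmlrpclib marshaller, which refuses to serialize None unless allow_none is enabled; the Fault text matches xmlrpclib's, so a None slipping into an XML-RPC call can surface exactly like this. A minimal reproduction of the error message (illustrative only, not the actual setup code):

    import xmlrpclib  # Python 2 standard library

    # Marshalling None without allow_none raises the TypeError quoted above.
    try:
        xmlrpclib.dumps((None,))
    except TypeError as e:
        print e  # cannot marshal None unless allow_none is enabled

    # With allow_none=True, None is encoded as <nil/> and marshalling succeeds.
    print xmlrpclib.dumps((None,), allow_none=True)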

Version-Release number of selected component (if applicable):
vdsm-4.16.7.2-1.el6ev.x86_64
sanlock-2.8-1.el6.x86_64
ovirt-host-deploy-1.3.0-1.el6ev.noarch
libvirt-0.10.2-46.el6_6.1.x86_64
ovirt-hosted-engine-ha-1.2.4-1.el6ev.noarch
qemu-kvm-rhev-0.12.1.2-2.448.el6.x86_64
ovirt-hosted-engine-setup-1.2.1-2.el6ev.noarch


How reproducible:
100%

Steps to Reproduce:
1. Deploy HE using iSCSI on any mapped block device with a LUN, for the latest HE running on RHEL 6.6 (the deploy command is noted below).
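
For reference, the deployment in step 1 is started with the tool's standard command:

    hosted-engine --deploy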

Actual results:
HE deployment fails.

Expected results:
Deployment should succeed.

Additional info:

Comment 1 Sandro Bonazzola 2014-11-04 11:34:25 UTC
Please attach the ovirt-hosted-engine-setup log; it's missing from the attached tarball.

Comment 2 Nikolai Sednev 2014-11-04 16:24:46 UTC
(In reply to Sandro Bonazzola from comment #1)
> Please attach the ovirt-hosted-engine-setup log; it's missing from the attached tarball.

Sorry, I don't have any available hosts; could you try deploying on one of yours to get the log?
The deployment is a really simple procedure, and it fails partway through.

Comment 3 Simone Tiraboschi 2014-11-05 16:44:23 UTC
The first stages work well; the last one fails due to an SELinux denial. I opened a bug against that.


From hosted-engine setup:
[ INFO  ] Engine replied: DB Up!Welcome to Health Status!
          Enter the name of the cluster to which you want to add the host (Default) [Default]: 
[ INFO  ] Waiting for the host to become operational in the engine. This may take several minutes...
[ ERROR ] The VDSM host was found in a failed state. Please check engine and bootstrap installation logs.
[ ERROR ] Unable to add hosted_engine_1 to the manager
          Please shutdown the VM allowing the system to launch it as a monitored service.
          The system will wait until the VM is down.
[ ERROR ] Failed to execute stage 'Closing up': [Errno 111] Connection refused
[ INFO  ] Stage: Clean up
[ ERROR ] Failed to execute stage 'Clean up': [Errno 111] Connection refused
[ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20141105163830.conf'


From VDSM logs:
Thread-73::DEBUG::2014-11-05 16:38:13,471::domainMonitor::201::Storage.DomainMonitorThread::(_monitorLoop) Unable to release the host id 1 for domain a4eed2bb-5acc-4056-8940-5cb55ccf1b6d
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/domainMonitor.py", line 198, in _monitorLoop
    self.domain.releaseHostId(self.hostId, unused=True)
  File "/usr/share/vdsm/storage/sd.py", line 480, in releaseHostId
    self._clusterLock.releaseHostId(hostId, async, unused)
  File "/usr/share/vdsm/storage/clusterlock.py", line 252, in releaseHostId
    raise se.ReleaseHostIdFailure(self._sdUUID, e)
ReleaseHostIdFailure: Cannot release host id: ('a4eed2bb-5acc-4056-8940-5cb55ccf1b6d', SanlockException(16, 'Sanlock lockspace remove failure', 'Device or resource busy'))
VM Channels Listener::INFO::2014-11-05 16:38:13,472::vmchannels::183::vds::(run) VM channels listener thread has ended.


From SELinux logs:
----
time->Wed Nov  5 16:40:08 2014
type=SYSCALL msg=audit(1415202008.743:1587): arch=c000003e syscall=6 success=yes exit=0 a0=7fffef0a8e10 a1=7fffef0a4180 a2=7fffef0a4180 a3=6 items=0 ppid=1838 pid=2074 auid=4294967295 uid=175 gid=175 euid=175 suid=175 fsuid=175 egid=175 sgid=175 fsgid=175 tty=(none) ses=4294967295 comm="python" exe="/usr/bin/python" subj=system_u:system_r:rhev_agentd_t:s0 key=(null)
type=AVC msg=audit(1415202008.743:1587): avc:  denied  { getattr } for  pid=2074 comm="python" path="/dev/.udev/db/block:sr0" dev=devtmpfs ino=9604 scontext=system_u:system_r:rhev_agentd_t:s0 tcontext=system_u:object_r:udev_tbl_t:s0 tclass=file
----
time->Wed Nov  5 16:40:08 2014
type=SYSCALL msg=audit(1415202008.743:1588): arch=c000003e syscall=2 success=yes exit=6 a0=7fffef0a8e10 a1=0 a2=1b6 a3=0 items=0 ppid=1838 pid=2074 auid=4294967295 uid=175 gid=175 euid=175 suid=175 fsuid=175 egid=175 sgid=175 fsgid=175 tty=(none) ses=4294967295 comm="python" exe="/usr/bin/python" subj=system_u:system_r:rhev_agentd_t:s0 key=(null)
type=AVC msg=audit(1415202008.743:1588): avc:  denied  { open } for  pid=2074 comm="python" name="block:sr0" dev=devtmpfs ino=9604 scontext=system_u:system_r:rhev_agentd_t:s0 tcontext=system_u:object_r:udev_tbl_t:s0 tclass=file
type=AVC msg=audit(1415202008.743:1588): avc:  denied  { read } for  pid=2074 comm="python" name="block:sr0" dev=devtmpfs ino=9604 scontext=system_u:system_r:rhev_agentd_t:s0 tcontext=system_u:object_r:udev_tbl_t:s0 tclass=file
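
For anyone triaging the same denials: pending the real fix in selinux-policy, a temporary local policy module can be generated from the audit log with the standard tools (a sketch; the module name here is arbitrary):

    # List the AVC denials logged for the offending python process
    ausearch -m avc -c python

    # Generate and install a local policy module covering those denials
    ausearch -m avc -c python | audit2allow -M rhev_agentd_local
    semodule -i rhev_agentd_local.pp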

Comment 4 Simone Tiraboschi 2014-11-05 17:52:12 UTC
It works as expected after disabling SELinux:

[ INFO  ] Engine replied: DB Up!Welcome to Health Status!
          Enter the name of the cluster to which you want to add the host (Default) [Default]: 
[ INFO  ] Waiting for the host to become operational in the engine. This may take several minutes...
[ INFO  ] Still waiting for VDSM host to become operational...
[ INFO  ] The VDSM Host is now operational
          Please shutdown the VM allowing the system to launch it as a monitored service.
          The system will wait until the VM is down.
[ INFO  ] Enabling and starting HA services
          Hosted Engine successfully set up
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20141105185052.conf'
[ INFO  ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
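
The exact command used isn't shown above; the usual way to put SELinux into permissive mode for a test like this is (temporary, until reboot):

    setenforce 0
    getenforce    # should now report "Permissive"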

Comment 5 Sandro Bonazzola 2014-11-11 13:17:26 UTC
Since it's an SELinux issue, there is nothing to change in the code on the hosted-engine side; we just need to retest once bug 1160808 is fixed.

Comment 6 Artyom 2014-11-24 15:28:55 UTC
Verified on ovirt-hosted-engine-setup-1.2.1-4.el6ev.noarch
Host and RHEV-M run on RHEL 6.6.
Engine: rhevm-3.5.0-0.21.el6ev.noarch

