Bug 1551291

Summary: [ansible] SHE iSCSI deployment fails with "[ ERROR ] Unable to get target list".
Product: [oVirt] ovirt-hosted-engine-setup
Reporter: Nikolai Sednev <nsednev>
Component: General
Assignee: Simone Tiraboschi <stirabos>
Status: CLOSED CURRENTRELEASE
QA Contact: Nikolai Sednev <nsednev>
Severity: urgent
Docs Contact:
Priority: unspecified
Version: 2.2.11
CC: bugs, cshao, nsednev, weiwang, ycui, yzhao
Target Milestone: ovirt-4.2.2
Keywords: Regression, TestOnly, Triaged
Target Release: ---
Flags: sbonazzo: ovirt-4.2?
       sbonazzo: blocker?
       nsednev: planning_ack?
       rule-engine: devel_ack+
       mavital: testing_ack+
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-03-29 11:11:50 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Integration
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On:
Bug Blocks: 1522737, 1534212
Attachments: sosreport from alma03

Description Nikolai Sednev 2018-03-04 13:36:30 UTC
Created attachment 1403776 [details]
sosreport from alma03

Description of problem:
SHE iSCSI deployment fails with "[ ERROR ] Unable to get target list".


[root@alma03 ~]# hosted-engine --deploy --ansible
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
          During customization use CTRL-D to abort.
          Continuing will configure this host for serving as hypervisor and create a local VM with a running engine.
          The locally running engine will be used to configure a storage domain and create a VM there.
          At the end the disk of the local VM will be moved to the shared storage.
          Are you sure you want to continue? (Yes, No)[Yes]: 
          It has been detected that this program is executed through an SSH connection without using screen.
          Continuing with the installation may lead to broken installation if the network connection fails.
          It is highly recommended to abort the installation and run it inside a screen session using command "screen".
          Do you want to continue anyway? (Yes, No)[No]: yes
          Configuration files: []
          Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180304144629-jaqa2m.log
          Version: otopi-1.7.7 (otopi-1.7.7-1.el7ev)
[ INFO  ] Stage: Environment packages setup
[ INFO  ] Stage: Programs detection
[ INFO  ] Stage: Environment setup
[ INFO  ] Stage: Environment customization
         
          --== STORAGE CONFIGURATION ==--
         
         
          --== HOST NETWORK CONFIGURATION ==--
         
          Please indicate a pingable gateway IP address [10.35.95.254]: 
[ INFO  ] TASK [Gathering Facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Get all active network interfaces]
[ INFO  ] TASK [filter bonds with bad naming]
          Please indicate a nic to set ovirtmgmt bridge on: (enp5s0f0) [enp5s0f0]: 
         
          --== VM CONFIGURATION ==--
         
          If you want to deploy with a custom engine appliance image,
          please specify the path to the OVA archive you would like to use
          (leave it empty to skip, the setup will use rhvm-appliance rpm installing it if missing): 
[ INFO  ] Detecting host timezone.
          Please provide the FQDN you would like to use for the engine appliance.
          Note: This will be the FQDN of the engine VM you are now going to launch,
          it should not point to the base host or to any other existing machine.
          Engine VM FQDN: (leave it empty to skip):  []: nsednev-he-1.qa.lab.tlv.redhat.com
          Please provide the domain name you would like to use for the engine appliance.
          Engine VM domain: [qa.lab.tlv.redhat.com]
          Automatically execute engine-setup on the engine appliance on first boot (Yes, No)[Yes]? 
          Automatically restart the engine VM as a monitored service after engine-setup (Yes, No)[Yes]? 
          Enter root password that will be used for the engine appliance: 
          Confirm appliance root password: 
          Enter ssh public key for the root user that will be used for the engine appliance (leave it empty to skip): 
[WARNING] Skipping appliance root ssh public key
          Do you want to enable ssh access for the root user (yes, no, without-password) [yes]: 
          Please specify the number of virtual CPUs for the VM (Defaults to appliance OVF value): [4]: 
          Please specify the memory size of the VM in MB (Defaults to appliance OVF value): [16384]: 
          You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:49:2b:15]: 00:16:3e:7b:b8:53
          How should the engine VM network be configured (DHCP, Static)[DHCP]? 
          Add lines for the appliance itself and for this host to /etc/hosts on the engine VM?
          Note: ensuring that this host could resolve the engine VM hostname is still up to you
          (Yes, No)[No] 
         
          --== HOSTED ENGINE CONFIGURATION ==--
         
          Please provide the name of the SMTP server through which we will send notifications [localhost]: 
          Please provide the TCP port number of the SMTP server [25]: 
          Please provide the email address from which notifications will be sent [root@localhost]: 
          Please provide a comma-separated list of email addresses which will get notifications [root@localhost]: 
          Enter engine admin password: 
          Confirm engine admin password: 
[ INFO  ] Stage: Setup validation
[ INFO  ] Stage: Transaction setup
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Package installation
[ INFO  ] Stage: Misc configuration
[ INFO  ] Stage: Transaction commit
[ INFO  ] Stage: Closing up
[ INFO  ] Cleaning previous attempts
[ INFO  ] TASK [Gathering Facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Stop libvirt]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Drop vdsm config statements]
[ INFO  ] TASK [Restore initial abrt config files]
[ INFO  ] TASK [Restart abrtd]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Drop libvirt sasl2 configuration by vdsm]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Stop and disable services]
[ INFO  ] TASK [Start libvirt]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Check for leftover local engine VM]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Destroy leftover local engine VM]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [Check for leftover defined local engine VM]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Undefine leftover local engine VM]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [Remove eventually entries for the local VM from known_hosts file]
[ INFO  ] ok: [localhost]
[ INFO  ] Starting local VM
[ INFO  ] TASK [Gathering Facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Create dir for local vm]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Set local vm dir path]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Fix local vm dir permission]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Start libvirt]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Check status of default libvirt network]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Activate default libvirt network]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Install rhvm-appliance rpm]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Parse appliance configuration for path]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Parse appliance configuration for sha1sum]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Get OVA path]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Compute sha1sum]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Compare sha1sum]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [Register appliance PATH]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [Extract appliance to local vm dir]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Find the appliance image]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Get appliance disk size]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Parse qemu-img output]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Create cloud init user-data and meta-data files]
[ INFO  ] TASK [Create iso disk]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Create local vm]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Get local vm ip]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Remove eventually entries for the local VM from /etc/hosts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Create an entry in /etc/hosts for the local VM]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Wait for ssh to restart on the engine VM]
[ INFO  ] ok: [localhost -> localhost]
[ INFO  ] TASK [Gathering Facts]
[ INFO  ] ok: [nsednev-he-1.qa.lab.tlv.redhat.com]
[ INFO  ] TASK [Wait for the engine VM]
[ INFO  ] ok: [nsednev-he-1.qa.lab.tlv.redhat.com]
[ INFO  ] TASK [Add an entry for this host on /etc/hosts on the engine VM]
[ INFO  ] changed: [nsednev-he-1.qa.lab.tlv.redhat.com]
[ INFO  ] TASK [Set FDQN]
[ INFO  ] changed: [nsednev-he-1.qa.lab.tlv.redhat.com]
[ INFO  ] TASK [Force the engine VM FDQN to resolve on 127.0.0.1]
[ INFO  ] changed: [nsednev-he-1.qa.lab.tlv.redhat.com]
[ INFO  ] TASK [Restore sshd reverse DNS lookups]
[ INFO  ] changed: [nsednev-he-1.qa.lab.tlv.redhat.com]
[ INFO  ] TASK [Generate an answer file for engine-setup]
[ INFO  ] changed: [nsednev-he-1.qa.lab.tlv.redhat.com]
[ INFO  ] TASK [Include before engine-setup custom tasks files for the engine VM]
[ INFO  ] TASK [Execute engine-setup]
[ INFO  ] changed: [nsednev-he-1.qa.lab.tlv.redhat.com]
[ INFO  ] TASK [Include before engine-setup custom tasks files for the engine VM]
[ INFO  ] TASK [Configure LibgfApi support]
[ INFO  ] skipping: [nsednev-he-1.qa.lab.tlv.redhat.com]
[ INFO  ] TASK [Restart the engine for LibgfApi support]
[ INFO  ] skipping: [nsednev-he-1.qa.lab.tlv.redhat.com]
[ INFO  ] TASK [Mask services to speed up future bootstraps]
[ INFO  ] TASK [Clean up boostrap answer file]
[ INFO  ] changed: [nsednev-he-1.qa.lab.tlv.redhat.com]
[ INFO  ] TASK [Gathering Facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Wait for engine to start]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Detect VLAN ID]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Set engine pub key as authorized key without validating the TLS/SSL certificates]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Obtain SSO token using username/password credentials]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Set VLAN ID at datacenter level]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [Force host-deploy in offline mode]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Add host]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Wait for the engine to start host install process]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Wait for the management bridge to appear on the host]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [Wait for the host to be up]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Check host install result]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [Remove host-deploy configuration file]
[ INFO  ] changed: [localhost]
          Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]: iscsi
          Please specify the iSCSI portal IP address: 10.35.146.129
          Please specify the iSCSI portal port [3260]: 
          Please specify the iSCSI discover user: 
          Please specify the iSCSI discover password: 
          Please specify the iSCSI portal login user: 
          Please specify the iSCSI portal login password: 
[ INFO  ] Discovering iSCSI targets
[ INFO  ] TASK [Gathering Facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Obtain SSO token using username/password credentials]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Prepare iSCSI parameters]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Fetch host facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [iSCSI discover with REST API]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "connection": "close", "content": "{\n  \"detail\" : \"Network error during communication with the Host.\",\n  \"reason\" : \"Operation Failed\"\n}", "content_encoding": "identity", "content_type": "application/json", "correlation_id": "844cc547-b4c4-405d-bef3-c4bc3f24d076", "date": "Sun, 04 Mar 2018 13:01:15 GMT", "json": {"detail": "Network error during communication with the Host.", "reason": "Operation Failed"}, "msg": "Status code was not [200]: HTTP Error 400: Bad Request", "redirected": false, "server": "Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.2k-fips", "status": 400, "transfer_encoding": "chunked", "url": "https://nsednev-he-1.qa.lab.tlv.redhat.com/ovirt-engine/api/hosts/f97ea61a-5ed0-4640-91de-fff88ed2c6fe/iscsidiscover"}
[ ERROR ] Unable to get target list
          Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]: iscsi
          Please specify the iSCSI portal IP address: 10.35.146.129
          Please specify the iSCSI portal port [3260]: 
          Please specify the iSCSI discover user: 
          Please specify the iSCSI discover password: 
          Please specify the iSCSI portal login user: 
          Please specify the iSCSI portal login password: 
[ INFO  ] Discovering iSCSI targets
[ INFO  ] TASK [Gathering Facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Obtain SSO token using username/password credentials]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Prepare iSCSI parameters]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Fetch host facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [iSCSI discover with REST API]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "connection": "close", "content": "{\n  \"detail\" : \"Network error during communication with the Host.\",\n  \"reason\" : \"Operation Failed\"\n}", "content_encoding": "identity", "content_type": "application/json", "correlation_id": "2bd27ad5-93b4-4e2e-8b17-c0373246d310", "date": "Sun, 04 Mar 2018 13:05:05 GMT", "json": {"detail": "Network error during communication with the Host.", "reason": "Operation Failed"}, "msg": "Status code was not [200]: HTTP Error 400: Bad Request", "redirected": false, "server": "Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.2k-fips", "status": 400, "transfer_encoding": "chunked", "url": "https://nsednev-he-1.qa.lab.tlv.redhat.com/ovirt-engine/api/hosts/f97ea61a-5ed0-4640-91de-fff88ed2c6fe/iscsidiscover"}
[ ERROR ] Unable to get target list
          Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]: 
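
Note that the discovery step above does not run iscsiadm on the host directly: the setup asks the engine to perform discovery through the host via the REST API iscsidiscover endpoint visible in the error URL. A minimal sketch of the equivalent request follows; the endpoint and host UUID are taken from the error log, while the credentials and the exact request-body shape are assumptions:

```shell
# Sketch of the REST call the setup performs for iSCSI discovery.
# engine_fqdn and host_id come from the error log above; the
# admin@internal password is a placeholder, not a value from this run.
engine_fqdn="nsednev-he-1.qa.lab.tlv.redhat.com"
host_id="f97ea61a-5ed0-4640-91de-fff88ed2c6fe"
url="https://${engine_fqdn}/ovirt-engine/api/hosts/${host_id}/iscsidiscover"
body='<action><iscsi><address>10.35.146.129</address><port>3260</port></iscsi></action>'

# -k skips TLS verification, matching the setup's "without validating
# the TLS/SSL certificates" step earlier in the log.
curl -k -s -X POST \
     -u 'admin@internal:PASSWORD' \
     -H 'Content-Type: application/xml' \
     -d "${body}" \
     "${url}"
```

An HTTP 400 with "Network error during communication with the Host." here means the engine itself could not talk to VDSM on the host, so manual iscsiadm connectivity from the host (comment 1) does not rule the failure out.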

Version-Release number of selected component (if applicable):
ovirt-hosted-engine-setup-2.2.11-1.el7ev.noarch
ovirt-hosted-engine-ha-2.2.6-1.el7ev.noarch
Linux 3.10.0-858.el7.x86_64 #1 SMP Tue Feb 27 08:59:23 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.5 (Maipo)
rhvm-appliance-4.2-20180202.0.el7.noarch

How reproducible:
100%

Steps to Reproduce:
1. Deploy SHE over iSCSI.

Actual results:
Deployment fails.

Expected results:
Deployment should succeed.

Additional info:
Sosreport from host attached.

Comment 1 Nikolai Sednev 2018-03-04 13:37:03 UTC
Connectivity between the host and the storage is fine:
[root@alma03 ~]# iscsiadm -m discovery -s st -p 10.35.146.129 
iscsiadm: discovery mode: option '-s' is not allowed/supported 
[root@alma03 ~]# iscsiadm -m discovery -t st -p 10.35.146.129  
10.35.146.129:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00 
10.35.146.161:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01 
10.35.146.193:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c04 
10.35.146.225:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05 
[root@alma03 ~]# iscsiadm -m node -l 
Logging in to [iface: default, target: iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05, portal: 10.35.146.225,3260] (multiple) 
Logging in to [iface: default, target: iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01, portal: 10.35.146.161,3260] (multiple) 
Logging in to [iface: default, target: iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c04, portal: 10.35.146.193,3260] (multiple) 
Logging in to [iface: default, target: iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00, portal: 10.35.146.129,3260] (multiple) 
Login to [iface: default, target: iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05, portal: 10.35.146.225,3260] successful. 
Login to [iface: default, target: iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01, portal: 10.35.146.161,3260] successful. 
Login to [iface: default, target: iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c04, portal: 10.35.146.193,3260] successful. 
Login to [iface: default, target: iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00, portal: 10.35.146.129,3260] successful. 
[root@alma03 ~]# multipath -ll 
3514f0c5a51601629 dm-0 XtremIO ,XtremApp         
size=55G features='0' hwhandler='0' wp=rw 
`-+- policy='queue-length 0' prio=1 status=active 
 |- 7:0:0:1 sdc 8:32 active ready running 
 |- 6:0:0:1 sdb 8:16 active ready running 
 |- 8:0:0:1 sdd 8:48 active ready running 
 `- 9:0:0:1 sde 8:64 active ready running

Comment 2 Yaniv Kaul 2018-03-04 13:53:27 UTC
(In reply to Nikolai Sednev from comment #1)
> Connectivity between the host and the storage is fine:
> alma03 ~]# iscsiadm -m discovery -s st -p 10.35.146.129 
> iscsiadm: discovery mode: option '-s' is not allowed/supported 
> [root@alma03 ~]# iscsiadm -m discovery -t st -p 10.35.146.129  
> 10.35.146.129:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00 
> 10.35.146.161:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01 
> 10.35.146.193:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c04 
> 10.35.146.225:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05 
> [root@alma03 ~]# iscsiadm -m node -l 
> Logging in to [iface: default, target:
> iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05, portal:
> 10.35.146.225,3260] (multiple) 

Where's the login? Were you already connected?
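
Whether the host was already connected can be checked directly with iscsiadm's session mode. A generic sketch (not output captured from alma03): with existing sessions, `iscsiadm -m node -l` reports them differently from targets it newly attaches.

```shell
# Count currently active iSCSI sessions on the host.
# iscsiadm errors (including "no active sessions") go to stderr,
# so an empty pipe simply yields a count of 0.
sessions=$(iscsiadm -m session 2>/dev/null | wc -l | tr -d ' ')
if [ "${sessions}" -eq 0 ]; then
    echo "no active iSCSI sessions"
else
    echo "active iSCSI sessions: ${sessions}"
    iscsiadm -m session -P 1   # per-session detail: target, portal, iface
fi
```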

Comment 3 Nikolai Sednev 2018-03-04 14:11:31 UTC
(In reply to Yaniv Kaul from comment #2)
> (In reply to Nikolai Sednev from comment #1)
> > Connectivity between the host and the storage is fine:
> > alma03 ~]# iscsiadm -m discovery -s st -p 10.35.146.129 
> > iscsiadm: discovery mode: option '-s' is not allowed/supported 
> > [root@alma03 ~]# iscsiadm -m discovery -t st -p 10.35.146.129  
> > 10.35.146.129:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00 
> > 10.35.146.161:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01 
> > 10.35.146.193:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c04 
> > 10.35.146.225:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05 
> > [root@alma03 ~]# iscsiadm -m node -l 
> > Logging in to [iface: default, target:
> > iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05, portal:
> > 10.35.146.225,3260] (multiple) 
> 
> Where's the login? Were you already connected?

This was tested manually from the host toward the storage, not through the deployment itself.

Comment 4 Simone Tiraboschi 2018-03-05 09:05:54 UTC
"json": {"detail": "Network error during communication with the Host.", "reason": "Operation Failed"},

This one is also probably just a side effect of https://bugzilla.redhat.com/show_bug.cgi?id=1549642

Nikolai, could you please try reproducing this on a host configured with a static IP?

Comment 5 Nikolai Sednev 2018-03-05 15:58:23 UTC
It also happens with the host's NIC statically configured:
[ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": []}, "attempts": 12, "changed": false}
[ INFO  ] TASK [Check host install result]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fix network configuration if the host is still not up"}
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400."}

Comment 6 Nikolai Sednev 2018-03-05 16:28:19 UTC
Comment #5 was tested on these components:
ovirt-hosted-engine-setup-2.2.12-1.el7ev.noarch
ovirt-hosted-engine-ha-2.2.6-1.el7ev.noarch
rhvm-appliance-4.2-20180202.0.el7.noarch
Linux 3.10.0-858.el7.x86_64 #1 SMP Tue Feb 27 08:59:23 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.5 (Maipo)

Comment 7 Simone Tiraboschi 2018-03-08 16:51:33 UTC
Moving to MODIFIED as a side effect of https://bugzilla.redhat.com/show_bug.cgi?id=1549642

Comment 8 Sandro Bonazzola 2018-03-16 15:23:20 UTC
(In reply to Simone Tiraboschi from comment #7)
> Moving to MODIFIED as a side effect of
> https://bugzilla.redhat.com/show_bug.cgi?id=1549642

Marking as test only for hosted engine setup.

Comment 9 Nikolai Sednev 2018-03-18 13:26:48 UTC
Works for me on these components:
ovirt-hosted-engine-ha-2.2.7-1.el7ev.noarch
ovirt-hosted-engine-setup-2.2.13-1.el7ev.noarch
rhvm-appliance-4.2-20180202.0.el7.noarch
Linux 3.10.0-861.el7.x86_64 #1 SMP Wed Mar 14 10:21:01 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.5 (Maipo)

Deployment worked fine over iSCSI storage.

Comment 10 Sandro Bonazzola 2018-03-29 11:11:50 UTC
This bugzilla is included in the oVirt 4.2.2 release, published on March 28th 2018.

Since the problem described in this bug report should be resolved in the oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.