Bug 1560655 - Node 0 flow is consuming the portal IP used for the discovery.
Summary: Node 0 flow is consuming the portal IP used for the discovery.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-hosted-engine-setup
Classification: oVirt
Component: General
Version: 2.2.14
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ovirt-4.2.2
Target Release: 2.2.16
Assignee: Ido Rosenzwig
QA Contact: Nikolai Sednev
URL:
Whiteboard:
Depends On:
Blocks: 1455169 1458709
 
Reported: 2018-03-26 16:25 UTC by Nikolai Sednev
Modified: 2018-04-18 12:26 UTC
CC List: 7 users

Fixed In Version: ovirt-hosted-engine-setup-2.2.16-1.el7ev
Clone Of:
Environment:
Last Closed: 2018-04-18 12:26:53 UTC
oVirt Team: Integration
Embargoed:
sbonazzo: ovirt-4.2?
nsednev: planning_ack?
sbonazzo: devel_ack+
mavital: testing_ack+




Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 89477 0 master MERGED ansible: iscsi: get portal IP after target selection 2018-09-03 09:34:08 UTC
oVirt gerrit 89535 0 ovirt-hosted-engine-setup-2.2 MERGED ansible: iscsi: get portal IP after target selection 2018-03-28 08:46:36 UTC

Description Nikolai Sednev 2018-03-26 16:25:26 UTC
Description of problem:
This bug was opened from findings in https://bugzilla.redhat.com/show_bug.cgi?id=1559328.

The issue is that the ansible flow consumes the portal IP that was used for discovery.

I'm using one of the 4 available iSCSI target IPs for the initial iSCSI discovery and a different one of those 4 for the actual deployment. On Node 0 this does not work properly: the discovery portal 10.35.146.129:3260 ends up being used as the deployment target, although I intentionally selected 10.35.146.225:3260 for deployment in the CLI.
[ INFO  ] changed: [localhost]
          Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]: iscsi
          Please specify the iSCSI portal IP address: 10.35.146.129
          Please specify the iSCSI portal port [3260]: 
          Please specify the iSCSI discover user: 
          Please specify the iSCSI discover password: 
          Please specify the iSCSI portal login user: 
          Please specify the iSCSI portal login password: 
[ INFO  ] Discovering iSCSI targets
[ INFO  ] TASK [Gathering Facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Obtain SSO token using username/password credentials]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Prepare iSCSI parameters]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [Fetch host facts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [iSCSI discover with REST API]
[ INFO  ] ok: [localhost]
          The following targets have been found:
                [1]     iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05
                        TPGT: 1, portals:
                                10.35.146.225:3260
         
                [2]     iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c04
                        TPGT: 1, portals:
                                10.35.146.193:3260
         
                [3]     iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01
                        TPGT: 1, portals:
                                10.35.146.161:3260
         
                [4]     iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00
                        TPGT: 1, portals:
                                10.35.146.129:3260
         
          Please select a target (1, 2, 3, 4) [1]: 
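
For context, the gerrit patches linked above ("ansible: iscsi: get portal IP after target selection") suggest the fix is to take the portal address from the selected target instead of reusing the discovery portal. A minimal sketch of that selection logic in Python, with purely illustrative data structures (these are not the actual playbook variables):

discovery_portal = "10.35.146.129"   # portal used only for the initial discovery

# Hypothetical model of the discovery result: each target advertises its own portal(s).
discovered_targets = {
    "iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05": ["10.35.146.225:3260"],
    "iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c04": ["10.35.146.193:3260"],
    "iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01": ["10.35.146.161:3260"],
    "iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00": ["10.35.146.129:3260"],
}

selected_target = "iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05"

# Buggy behaviour: reuse the discovery portal for the storage connection.
buggy_address = discovery_portal                        # -> 10.35.146.129

# Intended behaviour: use the portal reported by the selected target.
address, port = discovered_targets[selected_target][0].rsplit(":", 1)
print(address, port)                                    # -> 10.35.146.225 3260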


On vintage the same flow works just fine.

There is an inconsistency between how vintage and Node 0 deploy over iSCSI.


Here you can see that vintage works just fine with the same configuration:
         Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]: iscsi
          Please specify the iSCSI portal IP address: 10.35.146.129
          Please specify the iSCSI portal port [3260]: 
          Please specify the iSCSI portal user: 
          The following targets have been found:
                [1]     iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05
                        TPGT: 1, portals:
                                10.35.146.225:3260
         
                [2]     iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c04
                        TPGT: 1, portals:
                                10.35.146.193:3260
         
                [3]     iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01
                        TPGT: 1, portals:
                                10.35.146.161:3260
         
                [4]     iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00
                        TPGT: 1, portals:
                                10.35.146.129:3260
         
          Please select a target (1, 2, 3, 4) [1]: 
[ INFO  ] Connecting to the storage server
          The following luns have been found on the requested target:
                [1]     LUN1    3514f0c5a516016d9       70GiB   XtremIO XtremApp
                        status: free, paths: 1 active
         
                [2]     LUN2    3514f0c5a516016da       80GiB   XtremIO XtremApp
                        status: free, paths: 1 active
         
          Please select the destination LUN (1, 2) [1]: 2
.
.
.
[ INFO  ] Engine-setup successfully completed 


Version-Release number of selected component (if applicable):
ovirt-hosted-engine-setup-2.2.14-1.el7ev.noarch
ovirt-hosted-engine-ha-2.2.7-1.el7ev.noarch
Linux 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.5 (Maipo)

How reproducible:
100%

Steps to Reproduce:
1. Have at least 2 portals with different IP addresses on the iSCSI storage, exposing the same LUN.
2. Deploy Node 0 over iSCSI, choosing the first portal's IP for discovery and the second portal's IP for the actual deployment (a sketch of a post-deployment check follows below).
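
As a rough post-deployment check (my own suggestion, not part of the original report), the engine's storage connections can be listed with ovirt-engine-sdk-python to confirm that the iSCSI connection address is the portal selected for deployment rather than the discovery portal. The engine URL and credentials are placeholders:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine URL and credentials; replace with real values.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)

# Print every iSCSI storage connection known to the engine with its portal address;
# after a correct deployment this should show 10.35.146.225, not 10.35.146.129.
for conn in connection.system_service().storage_connections_service().list():
    if conn.type == types.StorageType.ISCSI:
        print(conn.address, conn.port, conn.target)

connection.close()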

Actual results:
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400."}
          Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]: 


Expected results:
Deployment should succeed regardless of which discovery portal was used.

Additional info:
Please see https://bugzilla.redhat.com/show_bug.cgi?id=1559328 for more details.
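
For reference, the "[iSCSI discover with REST API]" task in the transcript above performs the discovery through the engine API. A rough sketch of an equivalent call with ovirt-engine-sdk-python follows; the connection details are placeholders and the exact shape of the returned value varies between SDK versions, so treat this as an assumption about the mechanism rather than the playbook's actual code:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine URL and credentials; replace with real values.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)

hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=host_0')[0]      # the deployment host
host_service = hosts_service.host_service(host.id)

# Ask that host to run discovery against the discovery portal only;
# each returned target still advertises its own portal(s).
targets = host_service.iscsi_discover(
    iscsi=types.IscsiDetails(address='10.35.146.129', port=3260),
)
print(targets)

connection.close()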

Comment 1 Simone Tiraboschi 2018-03-27 07:57:17 UTC
Nikolai, could you please check if it happens also deploying from cockpit?

Comment 2 Nikolai Sednev 2018-03-27 09:09:27 UTC
(In reply to Simone Tiraboschi from comment #1)
> Nikolai, could you please check if it happens also deploying from cockpit?

Yes, it happens.

Comment 3 Simone Tiraboschi 2018-03-28 09:36:28 UTC
(In reply to Nikolai Sednev from comment #2)
> (In reply to Simone Tiraboschi from comment #1)
> > Nikolai, could you please check if it happens also deploying from cockpit?
> 
> Yes, it happens.

Philip, I think we have to apply the same fix also to the cockpit wizard.

Comment 4 Phillip Bailey 2018-03-28 21:49:25 UTC
(In reply to Simone Tiraboschi from comment #3)
> Philip, I think we have to apply the same fix also to the cockpit wizard.

Simone, based on our conversation from earlier today, there shouldn't be any changes required on the cockpit side, since the portal IP used for discovery is passed in a separate variable from the portal IPs reported by the chosen target.

Comment 5 Nikolai Sednev 2018-04-08 13:05:47 UTC
Not being reproduced on these components:
ovirt-hosted-engine-setup-2.2.16-1.el7ev.noarch
ovirt-hosted-engine-ha-2.2.10-1.el7ev.noarch
rhvm-appliance-4.2-20180404.0.el7.noarch
Linux 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.5 (Maipo)

Moving to verified.

Comment 6 Sandro Bonazzola 2018-04-18 12:26:53 UTC
This bugzilla is included in the oVirt 4.2.2 release, published on March 28th 2018.

Since the problem described in this bug report should be
resolved in the oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

