Bug 1551291
| Field | Value |
|---|---|
| Summary | [ansible] SHE iSCSI deployment fails with "[ ERROR ] Unable to get target list". |
| Product | [oVirt] ovirt-hosted-engine-setup |
| Component | General |
| Version | 2.2.11 |
| Status | CLOSED CURRENTRELEASE |
| Severity | urgent |
| Priority | unspecified |
| Reporter | Nikolai Sednev <nsednev> |
| Assignee | Simone Tiraboschi <stirabos> |
| QA Contact | Nikolai Sednev <nsednev> |
| CC | bugs, cshao, nsednev, weiwang, ycui, yzhao |
| Target Milestone | ovirt-4.2.2 |
| Target Release | --- |
| Keywords | Regression, TestOnly, Triaged |
| Flags | sbonazzo: ovirt-4.2?, sbonazzo: blocker?, nsednev: planning_ack?, rule-engine: devel_ack+, mavital: testing_ack+ |
| Hardware | x86_64 |
| OS | Linux |
| Doc Type | If docs needed, set a value |
| Type | Bug |
| oVirt Team | Integration |
| Bug Blocks | 1522737, 1534212 |
| Last Closed | 2018-03-29 11:11:50 UTC |
Description (Nikolai Sednev, 2018-03-04 13:36:30 UTC)
Connectivity between the host and the storage is fine:

```
alma03 ~]# iscsiadm -m discovery -s st -p 10.35.146.129
iscsiadm: discovery mode: option '-s' is not allowed/supported
[root@alma03 ~]# iscsiadm -m discovery -t st -p 10.35.146.129
10.35.146.129:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00
10.35.146.161:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01
10.35.146.193:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c04
10.35.146.225:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05
[root@alma03 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05, portal: 10.35.146.225,3260] (multiple)
Logging in to [iface: default, target: iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01, portal: 10.35.146.161,3260] (multiple)
Logging in to [iface: default, target: iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c04, portal: 10.35.146.193,3260] (multiple)
Logging in to [iface: default, target: iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00, portal: 10.35.146.129,3260] (multiple)
Login to [iface: default, target: iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05, portal: 10.35.146.225,3260] successful.
Login to [iface: default, target: iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01, portal: 10.35.146.161,3260] successful.
Login to [iface: default, target: iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c04, portal: 10.35.146.193,3260] successful.
Login to [iface: default, target: iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00, portal: 10.35.146.129,3260] successful.
[root@alma03 ~]# multipath -ll
3514f0c5a51601629 dm-0 XtremIO ,XtremApp
size=55G features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 7:0:0:1 sdc 8:32 active ready running
  |- 6:0:0:1 sdb 8:16 active ready running
  |- 8:0:0:1 sdd 8:48 active ready running
  `- 9:0:0:1 sde 8:64 active ready running
```

(In reply to Nikolai Sednev from comment #1)
> Connectivity between the host and the storage is fine:
> alma03 ~]# iscsiadm -m discovery -s st -p 10.35.146.129
> iscsiadm: discovery mode: option '-s' is not allowed/supported
> [root@alma03 ~]# iscsiadm -m discovery -t st -p 10.35.146.129
> 10.35.146.129:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00
> 10.35.146.161:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01
> 10.35.146.193:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c04
> 10.35.146.225:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05
> [root@alma03 ~]# iscsiadm -m node -l
> Logging in to [iface: default, target: iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05, portal: 10.35.146.225,3260] (multiple)

Where's the login? Were you already connected?
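As an aside, the manual connectivity check above can be scripted in one go. A minimal sketch, assuming open-iscsi and device-mapper-multipath are installed on the host and using the same discovery portal as in this report:

```sh
#!/bin/bash
# Minimal sketch of the manual iSCSI connectivity check shown above.
# Assumes open-iscsi and device-mapper-multipath are installed on the host;
# the portal address below is the one used in this report.
PORTAL=10.35.146.129

# Discover targets via SendTargets ('-t st'; '-s st' is not a valid discovery mode).
iscsiadm -m discovery -t sendtargets -p "$PORTAL" || exit 1

# Log in to all discovered targets.
iscsiadm -m node -l

# Show the resulting multipath devices.
multipath -ll
```

If discovery and login succeed here but the deployment still reports "Unable to get target list", the problem is on the setup side rather than on the storage side, which is what the reporter is demonstrating.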
(In reply to Yaniv Kaul from comment #2)
> (In reply to Nikolai Sednev from comment #1)
> > Connectivity between the host and the storage is fine:
> > alma03 ~]# iscsiadm -m discovery -s st -p 10.35.146.129
> > iscsiadm: discovery mode: option '-s' is not allowed/supported
> > [root@alma03 ~]# iscsiadm -m discovery -t st -p 10.35.146.129
> > 10.35.146.129:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c00
> > 10.35.146.161:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c01
> > 10.35.146.193:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c04
> > 10.35.146.225:3260,1 iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05
> > [root@alma03 ~]# iscsiadm -m node -l
> > Logging in to [iface: default, target: iqn.2008-05.com.xtremio:xio00153500071-514f0c50023f6c05, portal: 10.35.146.225,3260] (multiple)
>
> Where's the login? Were you already connected?

This was tested manually from the host towards the storage, not through the deployment itself.

"json": {"detail": "Network error during communication with the Host.", "reason": "Operation Failed"},

Also, this one is probably just a side effect of https://bugzilla.redhat.com/show_bug.cgi?id=1549642

Nikolai, could you please try reproducing this on a host configured with a static IP?

It also happens with a statically configured NIC on the host:

```
[ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": []}, "attempts": 12, "changed": false}
[ INFO ] TASK [Check host install result]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fix network configuration if the host is still not up"}
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400."}
```

Comment #5 was tested on these components:

```
ovirt-hosted-engine-setup-2.2.12-1.el7ev.noarch
ovirt-hosted-engine-ha-2.2.6-1.el7ev.noarch
rhvm-appliance-4.2-20180202.0.el7.noarch
Linux 3.10.0-858.el7.x86_64 #1 SMP Tue Feb 27 08:59:23 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.5 (Maipo)
```

Moving to MODIFIED as a side effect of https://bugzilla.redhat.com/show_bug.cgi?id=1549642

(In reply to Simone Tiraboschi from comment #7)
> Moving to MODIFIED as a side effect of
> https://bugzilla.redhat.com/show_bug.cgi?id=1549642

Marking as test only for hosted-engine setup.

Works for me on these components:

```
ovirt-hosted-engine-ha-2.2.7-1.el7ev.noarch
ovirt-hosted-engine-setup-2.2.13-1.el7ev.noarch
rhvm-appliance-4.2-20180202.0.el7.noarch
Linux 3.10.0-861.el7.x86_64 #1 SMP Wed Mar 14 10:21:01 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.5 (Maipo)
```

Deployment worked fine over iSCSI storage.

This bug is included in the oVirt 4.2.2 release, published on March 28th 2018.

Since the problem described in this bug report should be resolved in the oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.
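For anyone re-verifying on a similar setup, here is a minimal sketch of the verification flow described in the last comments. It assumes ovirt-hosted-engine-setup is already installed on the host; the interactive prompts (storage type, iSCSI portal, target IQN, LUN) may differ slightly between versions:

```sh
# Re-run the ansible-based deployment over iSCSI; the setup asks interactively
# for the storage type, portal address, target IQN and LUN.
hosted-engine --deploy

# After the deployment finishes, confirm the engine VM and the HA services are healthy.
hosted-engine --vm-status
systemctl status ovirt-ha-agent ovirt-ha-broker
```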