Bug 1094025 - [vdsm] [iSCSI multipath] vdsm fails to connect to storage server with "IscsiNodeError" when replacing the networks that participate in an iSCSI multipath bond
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.4.0
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.4.3
Assignee: Maor
QA Contact: Aharon Canan
URL:
Whiteboard: storage
Depends On:
Blocks: rhev3.5beta 1156165
 
Reported: 2014-05-04 13:02 UTC by Elad
Modified: 2016-02-10 19:29 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-08-05 12:12:56 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:
amureini: Triaged+


Attachments
logs from engine and host (307.05 KB, application/x-gzip)
2014-05-04 13:02 UTC, Elad

Description Elad 2014-05-04 13:02:32 UTC
Created attachment 892282
logs from engine and host

Description of problem:
I tried to edit an iSCSI multipath bond and replace its attached networks. VDSM failed to perform the operation.

Version-Release number of selected component (if applicable):
AV7
vdsm-4.14.7-0.1.beta3.el6ev.x86_64
rhevm-3.4.0-0.15.beta3.el6ev.noarch

How reproducible:
Always

Steps to Reproduce:
On a shared DC with active iSCSI storage domain(s):
1. Create 3 new networks and attach them to the cluster with the 'Required' checkbox checked
2. Attach the networks to the cluster's hosts' NICs
3. Create a new iSCSI multipath bond (under the DC tab -> pick the relevant DC -> iSCSI multipath sub-tab -> New) and add 2 of the new networks, along with the targets, to it
4. Move the iSCSI domain to maintenance and activate it again, so that the connection to the storage is established over the new networks (a session check sketch follows these steps)
5. After the iSCSI domain is active, edit the multipath bond, uncheck the selected networks and pick the third network. Click 'OK'
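
To confirm that step 4 actually moved the iSCSI sessions to the new networks, something like the following can be run on the host (a minimal sketch, not part of vdsm; it only assumes iscsiadm is installed):

import subprocess

# Print per-session details; the "Iface Name:" lines should match the
# networks that currently participate in the multipath bond.
print(subprocess.check_output(["iscsiadm", "-m", "session", "-P", "1"]).decode())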

Actual results:
VDSM fails to connect to the storage server via the new network. Failure in vdsm.log:

Thread-1846::ERROR::2014-05-04 15:40:44,971::hsm::2379::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2376, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 359, in connect
    iscsi.addIscsiNode(self._iface, self._target, self._cred)
  File "/usr/share/vdsm/storage/iscsi.py", line 166, in addIscsiNode
    iscsiadm.node_login(iface.name, portalStr, targetName)
  File "/usr/share/vdsm/storage/iscsiadm.py", line 295, in node_login
    raise IscsiNodeError(rc, out, err)
IscsiNodeError: (8, ['Logging in to [iface: eth0.1, target: iqn.2008-05.com.xtremio:001e675b8ee1, portal: 10.35.160.3,3260] (multiple)'], ['iscsiadm: Could not login to [iface: eth0.1, target: iqn.2008-05.com.xtremio:001e675b8ee1, portal: 10.35.160.3,3260].', 'iscsiadm: initiator reported error (8 - connection timed out)', 'iscsiadm: Could not log into all portals'])
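
For reference, the node_login() the traceback ends in (vdsm's storage/iscsiadm.py) shells out to the iscsiadm binary. A rough sketch of that step, simplified and ignoring vdsm's own process runner and locking:

import subprocess

def node_login(iface_name, portal, target):
    # Equivalent of: iscsiadm -m node -T <target> -I <iface> -p <portal> -l
    cmd = ["iscsiadm", "-m", "node", "-T", target,
           "-I", iface_name, "-p", portal, "-l"]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if proc.returncode != 0:
        # rc 8 is the "connection timed out" error seen above: the portal
        # never answered over this iface, so the login cannot succeed.
        raise RuntimeError((proc.returncode, out, err))

The point is that rc 8 comes from the initiator, not from vdsm: the TCP connection to 10.35.160.3:3260 over eth0.1 timed out.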

Expected results:
Editing the iSCSI multipath bond and replacing its attached networks should succeed.

Additional info: logs from engine and host

Comment 1 Sergey Gotliv 2014-05-05 09:15:47 UTC
Why is this a bug? What can VDSM do if the underlying iSCSI layer can't connect to the specified target via the specified iface?

Comment 2 Elad 2014-05-05 13:07:07 UTC
(In reply to Sergey Gotliv from comment #1)
> Why is this a bug? What can VDSM do if the underlying iSCSI layer can't
> connect to the specified target via the specified iface?

vdsm fails to connect to the storage server with the mentioned "IscsiNodeError". The failure occurs only when vdsm tries to connect to the storage server while replacing the network in the bond. A regular connectStorageServer, performed over the same path while creating a storage domain, succeeds.
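
One way to tell whether this is a vdsm problem or a host networking problem is to repeat the connection outside vdsm, bound to the same iface (a hedged sketch; the iface and portal values are taken from the traceback above):

import subprocess

iface, portal = "eth0.1", "10.35.160.3:3260"  # values from the traceback
try:
    subprocess.check_call(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                           "-p", portal, "-I", iface])
except subprocess.CalledProcessError as e:
    # If discovery also times out, the portal is unreachable over this
    # network, pointing at host/network configuration rather than vdsm.
    print("discovery over %s failed with rc %d" % (iface, e.returncode))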

Comment 5 Maor 2014-08-05 12:12:56 UTC
This is a configuration issue with the iSCSI bond.
Closing as NOTABUG after discussing it with Elad.
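
For anyone hitting the same symptom: the iface records that iscsiadm uses are bound to network devices via iface.net_ifacename, and a binding that no longer matches the bond's networks can produce exactly this kind of timeout. A minimal sketch of (re)creating such a binding (the device name here is hypothetical):

import subprocess

dev = "eth0.1"  # hypothetical network device backing the bond's network
# Create the iface record (fails harmlessly if it already exists) ...
subprocess.call(["iscsiadm", "-m", "iface", "-I", dev, "--op", "new"])
# ... and bind it to the network device it should route through.
subprocess.check_call(["iscsiadm", "-m", "iface", "-I", dev, "--op", "update",
                       "-n", "iface.net_ifacename", "-v", dev])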

