Bug 1342912

Summary: iSCSI multipath does not log out from the Default interface
Product: Red Hat Enterprise Virtualization Manager
Reporter: Roman Hodain <rhodain>
Component: vdsm
Assignee: Maor <mlipchuk>
Status: CLOSED WONTFIX
QA Contact: Raz Tamir <ratamir>
Severity: medium
Docs Contact:
Priority: medium
Version: 3.6.6
CC: amureini, bazulay, gklein, kshukla, lsurette, mlipchuk, obockows, rhodain, srevivo, tnisan, ycui, ykaul, ylavi
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-02-27 07:38:42 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Attachments (Description / Flags):
set_cluster_network_to_be_not_required / none
edit iscsi bond through the GUI / none

Description Roman Hodain 2016-06-06 06:09:01 UTC
Description of problem:
As soon as an iSCSI multipath (iSCSI bond) is configured, new connections are created on the hypervisors via the respective network interfaces, but the already existing connections on the default iface remain active.

Version-Release number of selected component (if applicable):
vdsm-4.17.28-0.el7ev

How reproducible:
100%

Steps to Reproduce:
1. Create an iSCSI bond while the hypervisor is in the Up state.

Actual results:
# multipath -ll
3600140517d6124dd63e48b685ea580c1 dm-6 LIO-ORG ,FILEIO          
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 4:0:0:1 sda 8:0  active ready running
  |- 5:0:0:1 sdb 8:16 active ready running
  `- 6:0:0:1 sdc 8:32 active ready running

# iscsiadm -m session -P1 | grep 'Target:\|SID\|Iface Name:'
Target: iqn.2003-01.org.linux-iscsi.hp-dl360g7-1.x8664:sn.5a372904de13 (non-flash)
		Iface Name: default
		SID: 4
		Iface Name: eth1
		SID: 5
		Iface Name: eth2
		SID: 6

Expected results:
The session on the default iface is automatically logged out.

Additional info:
There is a way to achieve this, but it requires exact steps which are not enforced and will therefore most probably not be followed:

   1. Create new networks for the iSCSI interfaces.
   2. Make sure they are not configured on any of the hypervisors.
   3. Put the hypervisor into maintenance mode.
   4. Configure the networks.
   5. Activate the hypervisor.

If the network is already configured on any of the network interfaces when the iSCSI multipath is created, the iSCSI connections are established automatically, and later, when the hypervisor is put into maintenance mode, only the iSCSI multipath interfaces are logged out. The default one remains active.
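
For reference, a stale session on the default iface can also be logged out manually on the hypervisor with iscsiadm. The commands below are only a sketch using the target IQN from the "Actual results" output above; the second command, which deletes the node record for the default iface so it does not get logged in again later, is optional:

# iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.hp-dl360g7-1.x8664:sn.5a372904de13 -I default -u
# iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.hp-dl360g7-1.x8664:sn.5a372904de13 -I default -o delete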

Comment 1 Maor 2016-06-07 12:28:11 UTC
Which networks are configured as required in the cluster? Is the default network interface configured as required as well?

Basically, the iSCSI bond feature can only manipulate non-required network interfaces.

Comment 2 Roman Hodain 2016-06-14 14:19:29 UTC
(In reply to Maor from comment #1)
> Which networks are configured as required in the cluster? Is the default
> network interface configured as required as well?
> 
> Basically, the iSCSI bond feature can only manipulate non-required network
> interfaces.

As far as I remember the networks were not set as required, but the default one was required.

Comment 5 Maor 2016-12-15 00:46:32 UTC
Created attachment 1231936 [details]
set_cluster_network_to_be_not_required

Comment 6 Maor 2016-12-15 00:47:27 UTC
Created attachment 1231937 [details]
edit iscsi bond through the GUI

Comment 15 Maor 2017-02-14 16:11:24 UTC

(In reply to Roman Hodain from comment #11)
> > even tried to configure back the old iscsi bonding settings. Still iscsi
> > default iface having connections.
> > Before doing the manual removal in maintenance mode tuened on, I have
> > compared with our Hypervisor that works fine and found the differences. It
> > seems that the VDSM have ignored the iscsi default iface and its LUNs for
> > whatever reason
> > After the removal of iscsi default iface and its LUN,  everything works as
> > expected including Maintenance mode on and off"

The basic scenario should work.
I've set up a working env with 3 different IPs and tried to reproduce it, just to be sure, and it seems to work as expected (see [1]).
Can you reproduce the maintenance mode scenario where you still see the connected network interfaces?

It could be that this host's network was already connected to those targets without oVirt having connected them, and therefore oVirt would not disconnect them once the storage domain moved to maintenance.
If you can reproduce this once again and share the logs, I could see whether a disconnect command was sent to the host or not.
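
If it helps, a quick way to check on the hypervisor side is to grep the vdsm log for the storage-server disconnect verb and the iscsiadm logout calls. This is only a rough sketch; the exact log lines vary between vdsm versions:

# grep -iE 'disconnectStorageServer|iscsiadm' /var/log/vdsm/vdsm.log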


[1]
First the storage domain was connected only to the default network:

[root@vm-18-42 ~]# iscsiadm -m session -P1 | grep 'Target:\|SID\|Iface Name:'
Target: iqn.2015-07.com.mlipchuk1.redhat:444 (non-flash)
		Iface Name: default
		SID: 7
Target: iqn.2015-07.com.mlipchuk2.redhat:444 (non-flash)
		Iface Name: default
		SID: 8
Target: iqn.2015-07.com.mlipchuk3.redhat:444 (non-flash)
		Iface Name: default
		SID: 9

I configured an iSCSI bond with two non-required networks and moved the storage domain to maintenance.
Once the storage domain moved to maintenance, this is how the connected networks looked:

[root@vm-18-42 ~]# iscsiadm -m session -P1 | grep 'Target:\|SID\|Iface Name:'
Target: iqn.2015-07.com.mlipchuk1.redhat:444 (non-flash)
		Iface Name: enp6s0
		SID: 10
		Iface Name: ens2f0
		SID: 13
Target: iqn.2015-07.com.mlipchuk2.redhat:444 (non-flash)
		Iface Name: enp6s0
		SID: 11
		Iface Name: ens2f0
		SID: 14
Target: iqn.2015-07.com.mlipchuk3.redhat:444 (non-flash)
		Iface Name: enp6s0
		SID: 12
		Iface Name: ens2f0
		SID: 15

Comment 18 Yaniv Lavi 2017-02-23 11:25:28 UTC
Moving out all non-blockers/exceptions.

Comment 21 Maor 2017-02-27 07:38:42 UTC
Regarding the disconnect scenario that was mentioned in comment 12, it seems that the scenario should be valid based on the reproduction I described in comment 15.
If the customer reproduces this again, it would be better to open a dedicated bug to investigate it.

Based on comment 19 and since we already have the warning message, I think we can close this bug.
Please feel free to re-open it if you think otherwise.