Bug 1588741 - iscsi (multipath) tries to log in to all paths through every interface
Summary: iscsi (multipath) tries to log in to all paths through every interface
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 4.2.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Assignee: Dan Kenigsberg
QA Contact: Avihai
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-06-07 19:32 UTC by dearfriend
Modified: 2020-08-03 15:34 UTC
8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-04-01 14:47:55 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:
lsvaty: testing_plan_complete-


Attachments
vdsm.log (7.39 KB, text/plain)
2018-06-08 09:26 UTC, dearfriend

Description dearfriend 2018-06-07 19:32:00 UTC
Description of problem:

iscsi (multipath) tries to log in to every path through every interface, although each path is reachable through only one of them.

Version-Release number of selected component (if applicable): 4.2.3.4

How reproducible:

Steps to Reproduce:
1. Have storage with 2 interfaces.
2. Add a new iscsi storage domain and log in to all paths.
3. Add iscsi bonding (multipathing).

Actual results:
Hosts in the cluster go to a non-"Up" status.

Expected results:
New storage works with multipath bonding.

Additional info:
vdsm.log shows that the host tries to connect to every path through all of the interfaces included in step 3.

Comment 1 Yaniv Kaul 2018-06-07 21:03:21 UTC
Logs are missing.
It's also unclear what the issue is. Why wouldn't it try from all interfaces that have iscsi bonding configured? How can we know which target is available and which isn't?
Lastly, the severity is not set.

Comment 2 dearfriend 2018-06-08 09:26:38 UTC
Created attachment 1449033 [details]
vdsm.log

Comment 3 dearfriend 2018-06-08 09:35:14 UTC
More detail:
2 interfaces:
- eth3.15 10.0.1.0/24  target 10.0.1.15
- eth4.16 10.0.2.0/24  target 10.0.2.16

As you can see in vdsm.log, rhvh tries to connect to
1. eth3.15 10.0.1.15
2. eth3.15 10.0.2.16
3. eth4.16 10.0.1.15
4. eth4.16 10.0.2.16

Combinations 2 and 3 will always fail, and the result is "iscsiadm: Could not log into all portals".

There are no options in "storage" or "iscsi multipathing" to specify an interface/path.
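For illustration, the failing combinations follow directly from the subnets listed above. Here is a minimal Python sketch (the interface names, subnets, and portal addresses are copied from this comment; the helper function is hypothetical and not part of vdsm) that keeps only the (interface, target) pairs sharing a subnet:

```python
import ipaddress

# Interface subnets and target portals from the comment above.
ifaces = {
    "eth3.15": ipaddress.ip_network("10.0.1.0/24"),
    "eth4.16": ipaddress.ip_network("10.0.2.0/24"),
}
targets = ["10.0.1.15", "10.0.2.16"]

def reachable_pairs(ifaces, targets):
    """Return (iface, target) pairs where the target lies in the iface's subnet."""
    return [(name, t)
            for name, net in ifaces.items()
            for t in targets
            if ipaddress.ip_address(t) in net]

print(reachable_pairs(ifaces, targets))
# Only eth3.15 -> 10.0.1.15 and eth4.16 -> 10.0.2.16 can succeed;
# the two cross-subnet logins (combinations 2 and 3) always fail.
```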

Comment 4 Yaniv Kaul 2018-06-11 13:02:34 UTC
(In reply to dearfriend from comment #3)
> More detail.
> 2 interfaces:
> - eth3.15 10.0.1.0/24  target 10.0.1.15
> - eth4.16 10.0.2.0/24  target 10.0.2.16
> 
> As you can see in vdsm.log, rhvh tries to connect to
> 1. eth3.15 10.0.1.15
> 2. eth3.15 10.0.2.16
> 3. eth4.16 10.0.1.15
> 4. eth4.16 10.0.2.16
> 
> 2 and 3 will always fail. And result is "iscsiadm: Could not log into all
> portals"

Are all IPs 'published' via all portals?

> 
> There is no options in "storage" or "iscsi multipathing" to specify
> interface/path.

Correct.

Comment 5 dearfriend 2018-06-11 13:07:42 UTC
> Are all IPs 'published' via all portals?
Yes

Comment 7 Yaniv Kaul 2018-06-13 09:30:36 UTC
(In reply to dearfriend from comment #5)
> > Are all IPs 'published' via all portals?
> Yes

So how do we know which portals we should try to log in to, and which we shouldn't?

Comment 8 dearfriend 2018-06-13 09:48:28 UTC
I see 2 ways:

1 - Use the routing table. (Are there reasons to reach the same path through different interfaces?)
2 - Add options to "iscsi multipathing" to specify an interface/path.
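A rough Python sketch of option 1, assuming a longest-prefix route lookup like the kernel's (the route entries and the default-route interface name are illustrative, not taken from any real host):

```python
import ipaddress

# Hypothetical routing table: (destination network, interface).
routes = [
    (ipaddress.ip_network("10.0.1.0/24"), "eth3.15"),
    (ipaddress.ip_network("10.0.2.0/24"), "eth4.16"),
    (ipaddress.ip_network("0.0.0.0/0"), "eth0"),  # default route
]

def iface_for(target, routes):
    """Pick the interface by longest-prefix match, as the kernel routing table would."""
    addr = ipaddress.ip_address(target)
    matches = [(net, dev) for net, dev in routes if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

for portal in ("10.0.1.15", "10.0.2.16"):
    print(portal, "->", iface_for(portal, routes))
```

With this kind of lookup, each portal would be tried only through the interface that can actually route to it, instead of through every interface in the bond.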

Comment 9 Yaniv Kaul 2018-06-13 09:55:54 UTC
(In reply to dearfriend from comment #8)
> I see 2 ways:
> 
> 1 - routing table (Are there reasons to use the same path by different
> interfaces? )

Yes, of course. Redundancy.

> 2 - Add options to "iscsi multipathing" to specify interface/path.

That's doable, but that's a feature request, not a bug.

Comment 13 Sandro Bonazzola 2019-01-28 09:42:14 UTC
This bug has not been marked as blocker for oVirt 4.3.0.
Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

Comment 15 Vinícius Ferrão 2019-02-08 03:30:28 UTC
Yaniv, I think it’s the same bug, all over again...

iSCSI in oVirt/RHV is just unusable. This sounds a little harsh, but it's true. All the MPIO baggage that I and other folks have is completely useless and wrong in oVirt/RHV.

Take a look at this 2yr old bug: https://bugzilla.redhat.com/show_bug.cgi?id=1474904

Maor tried to help/understand but we can’t solve this.

I simply gave up and used NFS instead, or changed the hypervisor where iSCSI with MPIO was mandatory.

Comment 17 Yee Minn Han 2019-03-20 16:03:27 UTC
I want to know how to add iSCSI storage from the command line in rhev-manager. I can't add iSCSI storage with the GUI; there are no LUNs on the target. (I tested on a Windows server and found the LUN on that target.) I think the RHEV-Manager GUI has some problems.

Comment 18 Michal Skrivanek 2020-03-18 15:46:57 UTC
This bug hasn't received any attention for a while, and we didn't have the capacity to make progress. If you care deeply about it or want to work on it, please assign/target accordingly.

Comment 20 Vinícius Ferrão 2020-03-18 16:04:52 UTC
Michal, this is broken software. The same issue is described here: https://bugzilla.redhat.com/show_bug.cgi?id=1474904

I know I'm being tiresome by complaining about this, but it seems that no one at Red Hat really understands what has been happening with these issues, for years...

It's broken software, with broken or, at best, misleading functionality.

Comment 21 Michal Skrivanek 2020-04-01 14:47:55 UTC
OK, closing. Please reopen if this is still relevant or you want to work on it.

