Description of problem:
iSCSI multipath tries to log in to every path through every interface, although each path is reachable through only one interface.

Version-Release number of selected component (if applicable):
4.2.3.4

How reproducible:

Steps to Reproduce:
1. Get storage with 2 interfaces.
2. Add a new iSCSI storage domain and log in to all paths.
3. Add an iSCSI bond (multipathing).

Actual results:
Hosts in the cluster go to a non-"Up" status.

Expected results:
The new storage works with the multipath bond.

Additional info:
vdsm.log shows that the host tries to connect to every path through all of the interfaces included in step 3.
Logs are missing. It's also unclear what the issue is. Why wouldn't it try from all interfaces which have iscsi bonding configured? How can we know which target is available and which isn't? Lastly, severity is not set.
Created attachment 1449033 [details] vdsm.log
More detail. 2 interfaces:
- eth3.15  10.0.1.0/24  target 10.0.1.15
- eth4.16  10.0.2.0/24  target 10.0.2.16

As you can see in vdsm.log, RHVH tries to connect to:
1. eth3.15 -> 10.0.1.15
2. eth3.15 -> 10.0.2.16
3. eth4.16 -> 10.0.1.15
4. eth4.16 -> 10.0.2.16

Combinations 2 and 3 will always fail, and the result is "iscsiadm: Could not log into all portals".

There are no options under "Storage" or "iSCSI Multipathing" to specify the interface/path.
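For reference, the four attempted combinations can be checked mechanically. A minimal Python sketch (addresses taken from the comment above; /24 prefixes assumed) showing that combinations 2 and 3 cross subnets and can never connect:

```python
import ipaddress

# Interface -> its subnet (from the setup described above; /24 assumed).
interfaces = {
    "eth3.15": ipaddress.ip_network("10.0.1.0/24"),
    "eth4.16": ipaddress.ip_network("10.0.2.0/24"),
}
portals = ["10.0.1.15", "10.0.2.16"]

# vdsm attempts every interface x portal combination.
for iface, subnet in interfaces.items():
    for portal in portals:
        reachable = ipaddress.ip_address(portal) in subnet
        status = "ok" if reachable else "will fail (wrong subnet)"
        print(f"{iface} -> {portal}: {status}")
```

Only the two same-subnet combinations succeed, which is why iscsiadm reports "Could not log into all portals" for the other two.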
(In reply to dearfriend from comment #3)
> More detail.
> 2 interfaces:
> - eth3.15 10.0.1.0/24 target 10.0.1.15
> - eth4.16 10.0.2.0/24 target 10.0.2.16
>
> As you can see in vdsm.log, rhvh tries to connect to
> 1. eth3.15 10.0.1.15
> 2. eth3.15 10.0.2.16
> 3. eth4.16 10.0.1.15
> 4. eth4.16 10.0.2.16
>
> 2 and 3 will always fail. And result is "iscsiadm: Could not log into all
> portals"

Are all IPs 'published' via all portals?

> There is no options in "storage" or "iscsi multipathing" to specify
> interface/path.

Correct.
> Are all IPs 'published' via all portals?

Yes.
(In reply to dearfriend from comment #5)
> > Are all IPs 'published' via all portals?
> Yes

So how do we know which portals we should try to log in to and which we shouldn't?
I see 2 ways:

1 - Use the routing table. (Are there reasons to reach the same path through different interfaces?)
2 - Add options to "iSCSI Multipathing" to specify the interface/path.
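Option 1 could be sketched as a routing-table-style filter that keeps only the (interface, portal) pairs where the portal lies in the interface's subnet. A hedged sketch, not vdsm's actual code; the function name and inputs are illustrative:

```python
import ipaddress

def login_pairs(interfaces, portals):
    """Return only the (interface, portal) pairs whose portal lies in
    the interface's subnet -- a routing-table-style filter (option 1)."""
    pairs = []
    for name, subnet in interfaces.items():
        net = ipaddress.ip_network(subnet)
        for portal in portals:
            if ipaddress.ip_address(portal) in net:
                pairs.append((name, portal))
    return pairs

# With the setup from this bug, only 2 of the 4 combinations survive.
pairs = login_pairs(
    {"eth3.15": "10.0.1.0/24", "eth4.16": "10.0.2.0/24"},
    ["10.0.1.15", "10.0.2.16"],
)
print(pairs)  # [('eth3.15', '10.0.1.15'), ('eth4.16', '10.0.2.16')]
```

Note that this simple filter would also drop cross-subnet logins that are routable through a gateway, which is exactly the redundancy concern raised in the reply below it.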
(In reply to dearfriend from comment #8)
> I see 2 ways:
>
> 1 - routing table (Are there reasons to use the same path by different
> interfaces? )

Yes, of course. Redundancy.

> 2 - Add options to "iscsi multipathing" to specify interface/path.

That's doable, but that's a feature request, not a bug.
This bug has not been marked as blocker for oVirt 4.3.0. Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.
Yaniv, I think it’s the same bug, all over again... iSCSI in oVirt/RHV is just unusable. That sounds a little harsh, but it’s true. All the MPIO experience that I and other folks have is completely useless and wrong in oVirt/RHV. Take a look at this two-year-old bug: https://bugzilla.redhat.com/show_bug.cgi?id=1474904 Maor tried to help and understand, but we couldn't solve this. I simply gave up and used NFS instead, or changed the hypervisor where iSCSI with MPIO was mandatory.
I want to know how to add iSCSI storage from the command line in RHEV-Manager. I can't add iSCSI storage with the GUI: no LUNs show up on the target. (I tested from a Windows server and found the LUN on that target.) I think the RHEV-Manager GUI has some problems.
This bug hasn't received any attention in a while; we didn't have the capacity to make any progress. If you care deeply about it or want to work on it, please assign/target accordingly.
Michal, this is broken software. The same issue is described here: https://bugzilla.redhat.com/show_bug.cgi?id=1474904 I know I'm extremely boring, complaining about this, but I don't think anyone at Red Hat really understands what has been happening with this issue, for years... It's broken software, with broken or, at best, misleading functionality.
OK, closing. Please reopen if this is still relevant or you want to work on it.