Bug 1472356

Summary: ovirt-host-deploy enables multipath even when not needed
Product: [oVirt] ovirt-host-deploy
Reporter: Fabrice Bacchella <fabrice.bacchella>
Component: Core
Assignee: Nir Soffer <nsoffer>
Status: CLOSED WONTFIX
QA Contact: Pavel Stehlik <pstehlik>
Severity: low
Docs Contact:
Priority: unspecified
Version: 1.6.6
CC: amureini, bugs, didi, mike, nsoffer, oourfali
Target Milestone: ---
Flags: sbonazzo: ovirt-4.2-
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-08-31 12:23:24 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Fabrice Bacchella 2017-07-18 14:31:23 UTC
I have a datacenter/cluster/host with only local storage attached. It has two disks, sda and sdb.
I created sdb after installing oVirt (it's a hardware RAID using an HP SmartArray controller). oVirt enabled multipath even though it is not needed, as this is a server without any kind of multipath, either from SAN or iSCSI.
Now multipath keeps grabbing sdb.

I think a setting is missing in /etc/multipath.conf: find_multipaths should be set to yes, as explained in the Red Hat documentation (https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/DM_Multipath/config_file_defaults.html):
find_multipaths:
Defines the mode for setting up multipath devices. If this parameter is set to yes, then multipath will not try to create a device for every path that is not blacklisted. Instead, multipath will create a device only if one of three conditions is met:
- There are at least two paths that are not blacklisted with the same WWID.

The default value is no. The default multipath.conf file created by mpathconf, however, will enable find_multipaths as of Red Hat Enterprise Linux 7.


And that is exactly what I want, as sdb is not multi-attached.

I also need to remove /etc/multipath/wwids to make multipath forget the discovered drive.
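
For illustration, a minimal defaults section of the kind I mean (a sketch of standard multipath.conf syntax, not what ovirt-host-deploy generates):

defaults {
    # Only create a multipath device when at least two
    # non-blacklisted paths share the same WWID.
    find_multipaths yes
}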

Comment 1 Yedidyah Bar David 2017-07-19 05:13:10 UTC
Allon, I do not think this involves host-deploy, but I'm not sure whether it's vdsm or the engine. Can you have a look? Thanks.

Comment 2 Allon Mureinik 2017-07-23 15:17:55 UTC
Nir, do we have a simple way to blacklist this?

Comment 3 Nir Soffer 2017-07-23 15:49:31 UTC
(In reply to Allon Mureinik from comment #2)
> Nir, do we have a simple way to blacklist this?

We don't have a way now; the system always supports all storage types.
You can always add shared iSCSI storage to a host that uses local storage.

If we want to enable multipath only when block storage is first attached, and
disable it when the last block storage is detached, we need to redesign the system.

We cannot use find_multipaths = yes for this reason. If you have a local device
that should not be used by multipath, you should blacklist this device in
multipath.conf.

If you blacklist all devices in multipath.conf, you practically disable block
storage support on your setup.

Note that vdsm owns multipath.conf; if you want to modify it, you must mark it as
private by adding "# VDSM PRIVATE" as the second line:

$ head -2 /etc/multipath.conf
# VDSM REVISION 1.3
# VDSM PRIVATE

Vdsm will never modify multipath.conf after this change. You own this file now.

See multipath.conf(5) for instructions on blacklisting devices.
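
For example, here is a sketch of a privately-owned /etc/multipath.conf that
blacklists the reporter's sdb by WWID (the WWID below is a placeholder; get the
real one with /usr/lib/udev/scsi_id -g -u /dev/sdb):

# VDSM REVISION 1.3
# VDSM PRIVATE

blacklist {
    # Placeholder WWID - replace with the value reported for sdb.
    wwid "3600508b1001c1234567890abcdef0123"
}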

Comment 4 Allon Mureinik 2017-08-31 12:23:24 UTC
(In reply to Nir Soffer from comment #3)
> (In reply to Allon Mureinik from comment #2)
> > Nir, do we have a simple way to blacklist this?
> 
> We don't have a way now, the system is always supporting all storage types.
> You can always add a shared iscsi storage to a host using local storage.
TBH, this will never be prioritized.
Closing in order to make this visible.
If anyone has a good reason for re-prioritizing this, please comment and explain.

Comment 5 Mike Goodwin 2017-11-08 08:15:02 UTC
I read through this ticket and I'm still unsure why multipath is gobbling up unused block devices. I have a similar scenario to the reporter's: a hardware RAID 10 array on a Dell server that appears as a single sd* device. I intend to use it with GlusterFS via gdeploy, and it's quite confusing why this is the way it is. What's the use case for multipath owning unused block devices by default?

Is my only recourse blacklisting in /etc/multipath.conf? That was not clear either. 

Multipath is also taking the USB device I have permanently plugged in with the oVirt node installation ISO, which is especially odd...

Should this ticket perhaps be moved to a different product and re-evaluated there?

Comment 6 Nir Soffer 2017-11-08 14:37:21 UTC
(In reply to Mike Goodwin from comment #5)
Yes, you must blacklist local devices in multipath.conf, as described in comment 3.

In RHEL 7.5, multipath will support blacklisting by udev property. I think
we will be able to use this capability to automatically blacklist most local
devices.
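
For illustration, a sketch of how such a configuration might look, based on the
property keyword described in multipath.conf(5) (the exact regex is an
assumption, not a committed vdsm default):

blacklist_exceptions {
    # With a property exception set, devices lacking a matching udev
    # property (typical of local and USB disks) are treated as blacklisted.
    property "(SCSI_IDENT_|ID_WWN)"
}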