Red Hat Bugzilla – Bug 1472356
ovirt-host-deploy enables multipath even when not needed
Last modified: 2017-11-08 11:03:05 EST
I have a datacenter/cluster/host with only local storage attached. It has two disks, sda and sdb.
I created sdb after installing oVirt (it's a hardware RAID using an HP SmartArray controller). oVirt enabled multipath even though it is not needed, as this server has no multipath storage of any kind, neither SAN nor iSCSI.
Now multipath keeps claiming sdb.
I think a setting is missing in /etc/multipath.conf: find_multipaths should be set to yes, as explained in the Red Hat documentation (https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/DM_Multipath/config_file_defaults.html):
Defines the mode for setting up multipath devices. If this parameter is set to yes, then multipath will not try to create a device for every path that is not blacklisted. Instead multipath will create a device only if one of three conditions is met:
- There are at least two paths that are not blacklisted with the same WWID.
The default value is no. The default multipath.conf file created by mpathconf, however, will enable find_multipaths as of Red Hat Enterprise Linux 7.
And that is exactly what I want, as sdb is not multi-attached.
I also need to remove /etc/multipath/wwids to make multipath forget the discovered drive.
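To illustrate, the change amounts to roughly the following fragment of /etc/multipath.conf (a sketch of the relevant defaults block only; the full file on a given host will contain other options):

```
# /etc/multipath.conf (relevant fragment -- a sketch, not the full file)
defaults {
    # Only create a multipath device when more than one path
    # shares the same WWID, instead of grabbing every device.
    find_multipaths yes
}
```

After this change, removing /etc/multipath/wwids and restarting multipathd makes multipath forget devices it had already claimed.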
Allon, I do not think this involves host-deploy, but not sure if it's vdsm or the engine. Can you have a look? Thanks.
Nir, do we have a simple way to blacklist this?
(In reply to Allon Mureinik from comment #2)
> Nir, do we have a simple way to blacklist this?
We don't have a way now; the system always supports all storage types.
You can always add shared iSCSI storage to a host using local storage.
If we want to enable multipath only when starting to use block storage, and
disable it when the last block storage is detached, we would need to redesign the system.
We cannot use find_multipaths = yes for this reason. If you have a local device that
should not be used by multipath, you should blacklist this device in multipath.conf.
If you blacklist all devices in multipath.conf, you effectively disable block
storage support on your setup.
Note that vdsm owns multipath.conf; if you want to modify it you must mark it as
private by adding "# VDSM PRIVATE" as the second line:
$ head -2 multipath.conf
# VDSM REVISION 1.3
# VDSM PRIVATE
Vdsm will never modify multipath.conf after this change. You own this file now.
See multipath.conf(5) for instructions on blacklisting devices.
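As a sketch, a privately owned multipath.conf that blacklists one local device could look like this (the wwid below is a placeholder; the real value for a device can be read with `/usr/lib/udev/scsi_id -g -u /dev/sdb`):

```
# VDSM REVISION 1.3
# VDSM PRIVATE

blacklist {
    # Placeholder wwid -- replace with the scsi_id output for your device.
    wwid "36001405f0000000000000000000000000"
    # Alternatively, blacklist by device node pattern:
    devnode "^sdb$"
}
```

Note that blacklisting by devnode is fragile if device names change between boots; blacklisting by wwid is more stable.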
(In reply to Nir Soffer from comment #3)
> (In reply to Allon Mureinik from comment #2)
> > Nir, do we have a simple way to blacklist this?
> We don't have a way now, the system is always supporting all storage types.
> You can always add a shared iscsi storage to a host using local storage.
TBH, this will never be prioritized.
Closing in order to make this visible.
If anyone has a good reason for re-prioritizing this, please comment and explain.
I read through this ticket and I'm still unsure why multipath is gobbling up unused block devices. I have a similar scenario to the reporter: a hardware RAID10 array on a Dell server that appears as a single sd* device. I intend to use it with GlusterFS via gdeploy, and it's quite confusing why this is the default behavior. What's the use case for multipath owning unused block devices by default?
Is my only recourse blacklisting in /etc/multipath.conf? That was not clear either.
Multipath is also taking the USB device I have permanently plugged in with the oVirt node installation ISO, which is especially odd...
Should this ticket perhaps be moved to a different product and re-evaluated there?
(In reply to Mike Goodwin from comment #5)
Yes, you must blacklist local devices in multipath.conf, as described in comment 3.
In RHEL 7.5 multipath will support blacklisting by udev property. I think
we will be able to use this capability to automatically blacklist most local devices.