Description of problem:
During a clean install of RHV-H 4.3 and RHHI 1.6, if the servers are booted before running the ansible playbook, multipath will lock the disks and you will be unable to create PVs.

Version-Release number of selected component (if applicable):
glusterfs-server-3.12.2-47.2.el7rhgs.x86_64
rhv-4.3.5

How reproducible:
All the time

Steps to Reproduce:
1. Install RHVH-4.3 on the server.
2. Set up the network.
3. Reboot.
4. Unable to provision gluster to /dev/sdb as the disk is locked.

Actual results:
Disk locked.

Expected results:
Disk not locked and gluster installable.

Additional info:
A workaround to the problem:
Update multipath.conf with a blacklist for /dev/sd*
Restart multipathd
Rebuild the initramfs with dracut -f
Reboot the system
Installation now works.

It should be possible to reboot the servers as in 4.2 without having to update the multipath configuration and more to get the installation going.
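For reference, a rough sketch of the workaround commands above, assuming the local disks all appear as /dev/sd* (the devnode regex is one common way to match all sd devices; adjust it to the actual disks on the system):

blacklist {
    devnode "^sd[a-z]+"
}

systemctl restart multipathd   # pick up the new blacklist
dracut -f                      # rebuild the initramfs so the blacklist applies at boot
reboot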
Yuval, is there anything that can be done to prevent multipath entries from being added to local disks on RHV-H nodes?
RHVH uses the standard RHEL installation driven by anaconda, so I would think the user can just deselect the disks, no? We could theoretically add this in `imgbase layout --init`, but I'm not sure it's right. Nir, what do you think?
(In reply to Yuval Turgeman from comment #3)

To avoid multipath on a local disk you must configure multipath to blacklist the disk you want to use for RHHI.

WARNING: never edit /etc/multipath.conf! This file is managed by vdsm and should not be edited by users or by products layered on top of vdsm (e.g. Node, RHHI).

The best way to add a blacklist is to add a drop-in configuration file in /etc/multipath.conf.d/rhhi.conf with this content:

blacklist {
    wwid "device-serial"
}

To get the device serial you use udevadm info:

$ udevadm info /sys/block/sda | egrep "ID_SERIAL=|WWN="
E: ID_SERIAL=Generic-_SD_MMC_20120501030900000-0:0

In this case this blacklist should work:

blacklist {
    wwid "Generic-_SD_MMC_20120501030900000-0:0"
}

You need to regenerate the initrd to have this configuration used during boot.

Ben, please correct me if needed.
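A minimal sketch of applying such a drop-in on a running host before the reboot (note that multipath -F only flushes maps that are not currently in use):

multipathd reconfigure    # reload the configuration in the running daemon
multipath -F              # flush any now-blacklisted maps that are not in use
multipath -ll             # the blacklisted device should no longer be listed
dracut -f                 # regenerate the initrd so the blacklist is used at boot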
(In reply to Nir Soffer from comment #4)
> (In reply to Yuval Turgeman from comment #3)
> To avoid multipath on a local disk you must configure multipath to blacklist
> the disk you want to use for RHHI.
>
> Ben, please correct me if needed.

This is correct. Blacklisting by wwid will always work. In some situations you may be able to come up with some other way, say by device vendor/product, to separate the devices you want multipathed from those you don't. This can save you from having to add a blacklist entry for every device you don't want multipathed, if there are a large number of them. But all multipathed devices must have a WWID, so you will always be able to use that for blacklisting.
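For illustration, a vendor/product based blacklist section might look like the following; the vendor and product strings here are placeholders, not taken from this system:

blacklist {
    device {
        vendor  "SomeVendor"
        product "SomeLocalDiskModel"
    }
}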
Sahina, do you believe this is something you should be fixing in code, or should we recommend this as a workaround in a KCS article or maybe even in the official docs?

I am unsure about step 2 in the reproducer (setup network); that's why I can't judge how impactful this BZ is for other customers.
This can be addressed via cockpit and the gluster ansible roles by asking the user, as input from cockpit, whether to use multipath or not. If the user does not want multipath we can flush by executing "multipath -F".
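A sketch of what that flush step would run on the host; note that "multipath -F" only removes maps that are not currently in use, and without a blacklist the maps can be recreated on the next rescan or reboot:

multipath -F     # flush all unused multipath maps
multipath -ll    # confirm no maps remain on the gluster brick disks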
(In reply to Marina Kalinin from comment #6)
> Sahina, do you believe this is something you should be fixing in code, or
> should we recommend this as a workaround in a KCS article or maybe even in
> the official docs?
>
> I am unsure about step 2 in the reproducer (setup network); that's why I
> can't judge how impactful this BZ is for other customers.

I think the issue is seen even without step 2.
from myllynen:

But the actual issue we see is that:

1) We are using fully standardized ("set-in-stone") HW configurations for edge/isolated clusters which will only ever have local disks, so there is not, and never will be, a need for multipath.

2) By default, on servers booted with the RHV 4.3.6 Host image, we see multipath -ll listing the local /dev/sdb we are trying to use for Gluster.

3) If we do not do anything with the multipath configuration, the RHHI-V hyperconverged Gluster/HE deployment playbooks fail to create the VG for Gluster on the designated /dev/sdb disk, complaining that "Device /dev/sdb is excluded by a filter" (from the lvg task in the Gluster/Ansible playbooks).

4) When we blacklist all local devices in the multipath configuration (blacklist { wwid "*" }), multipath -ll shows nothing and the Gluster/HE deployment works as expected.

Please let us know if you'd like more details, we can reproduce this at will.
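A quick way to confirm this state on an affected host (a sketch, assuming the designated Gluster disk is /dev/sdb):

multipath -ll        # shows a multipath map built on top of /dev/sdb
lsblk /dev/sdb       # the mpath device appears as a child of sdb
pvcreate /dev/sdb    # fails with the "Device /dev/sdb is excluded by a filter" error quoted above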
I think the user needs to blacklist the devices in the multipath configuration file manually as per comment #4. Even though we provide an option from cockpit, it does not make sense unless deployment starts from cockpit; if the machine reboots before deployment it will hit the same issue. Cockpit-related changes will be tracked with a separate bug; we can close this one if the manual solution is OK.
The multipath issue will be addressed in https://bugzilla.redhat.com/show_bug.cgi?id=1807808
Works for me; however, let's confirm this with Marko.
RHHI-V 1.8 deployment from the web console provides an option to blacklist gluster devices under the 'Bricks' tab. This option is enabled by default; when deployment runs, the corresponding gluster brick devices are blacklisted automatically, and any existing mpath entries on them are flushed.

Verified the above behavior with gluster-ansible-infra-1.0.4-7.el8rhgs on RHVH 4.4.
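A simple post-deployment check on one of the hosts (a sketch; exact output depends on the brick devices used):

multipath -ll    # the blacklisted brick devices no longer show multipath maps
lsblk            # the bricks appear as plain disks with the gluster LVM stack on top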
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (RHHI for Virtualization 1.8 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2020:3314