Description of problem:
After rebooting a host provisioned with RHV-H, multipath devices show up for the local disks. This is seen only on RHV-H hosts.
Version-Release number of selected component (if applicable):

How reproducible:
Most of the time
Steps to Reproduce:
1. Install RHV-H
2. Reboot the system
Actual results:
Multipath devices are shown.

Expected results:
Multipath devices should not be shown.
*** Bug 1573554 has been marked as a duplicate of this bug. ***
Is this RHVH specific, or does it show up on RHEL as well?
This was not seen on RHEL, only on RHVH.
Ryan, is this related to the vdsm multipath bug 1016535 or specific to RHV-H?
This is definitely related to the vdsm bug.
We configure vdsm at first boot on RHVH. If running "vdsm-tool configure --force" on RHEL results in the same behavior, it's the vdsm bug. RHVH does not otherwise modify the multipath configuration.
When RHVH is installed, the redhat-virtualization-host-image-update RPM is then installed, and the node is rebooted, multipath entries show up for all the disks.
This is a nasty experience. Raising the severity to Urgent.
Freddy, this bug depends on bug 1380272. Is a fix possible in vdsm for 4.2.5?
Workaround for this issue:
1. Blacklist the affected devices by adding blacklist entries to /etc/multipath.conf
2. Reboot the RHVH hosts
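As an illustration, a blacklist stanza of this general shape could be used in /etc/multipath.conf; the WWIDs below are hypothetical placeholders, and the real values for the local disks can be read from the output of "multipath -ll". Note also that vdsm can rewrite /etc/multipath.conf when it reconfigures the host, so local edits may need the file marked as private (historically via a "# VDSM PRIVATE" comment near the top) to survive.

```
# Example only: blacklist local disks by WWID so multipath ignores them.
# Replace these placeholder WWIDs with the actual WWIDs of the host's
# local / gluster-brick disks.
blacklist {
    wwid "36005076000000000000000000000000a"
    wwid "36005076000000000000000000000000b"
}
```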
One problem I observed is that the mpath names on the boot disks do not go away.
In an RHHI environment (no SAN / iSCSI storage) it should be safe to simply run "ansible rhhi-hosts -a 'multipath -F'" and then relaunch the wizard. It would be even better if the installation playbooks did this automatically for our customers.
With RHHI-V 1.8, a 'blacklist gluster devices' option is introduced during deployment,
which blacklists the devices used for gluster bricks.
These devices should therefore not have multipath names after a reboot.
Should we close this bug?