Description of problem:
-----------------------
When the RHVH node is rebooted, the boot/OS disk gets a multipath name. During RHHI-V deployment from cockpit, the ansible task that blacklists the gluster brick devices performs 'multipath -F', which also tries to flush the multipath names on the boot/OS disk; this fails because the root filesystem is already active on the boot disk.

The quick solution here is to not perform 'multipath -F', which tries to flush the multipath names on all the disks. Reloading multipath is enough to activate the blacklisting of the disks; flushing the multipath disk names is not required.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
gluster-ansible-infra-1.0.4-8.el8rhgs

How reproducible:
------------------
Always

Steps to Reproduce:
-------------------
1. Install the RHVH node and reboot the host after successful installation
2. Reboot the RHVH node again, after the node comes up from its first boot
3. Start RHHI-V deployment from the web console (cockpit) with the 'blacklist gluster devices' option enabled

Actual results:
----------------
RHHI-V deployment fails, as the root/boot/OS disk has a multipath name that cannot be flushed.

Expected results:
-----------------
RHHI-V deployment should proceed even though the root/OS disk has a multipath name.

Additional info:
-----------------
<snip>
TASK [gluster.infra/roles/backend_setup : Flush all empty multipath devices] ***
fatal: [10.70.35.96]: FAILED! => {"changed": true, "cmd": "multipath -F", "delta": "0:00:00.055945", "end": "2020-05-04 08:11:26.787117", "msg": "non-zero return code", "rc": 1, "start": "2020-05-04 08:11:26.731172", "stderr": "May 04 08:11:26 | 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0p2: map in use", "stderr_lines": ["May 04 08:11:26 | 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0p2: map in use"], "stdout": "", "stdout_lines": []}
changed: [10.70.35.151]
changed: [10.70.35.136]
</snip>

In this case, the 10.70.35.96 node was rebooted and had a multipath name created for /dev/sda.
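For illustration, a minimal sketch of the reload-based approach described above (the task names and the 'gluster_infra_blacklist_devices' variable are assumptions for this example, not the actual gluster-ansible-infra tasks):

<snip>
# Append a blacklist section for the brick devices to multipath.conf,
# then reload the device maps instead of flushing them.
- name: Blacklist gluster brick devices in multipath.conf
  blockinfile:
    path: /etc/multipath.conf
    marker: "# {mark} GLUSTER BRICK BLACKLIST"
    block: |
      blacklist {
      {% for dev in gluster_infra_blacklist_devices %}
          devnode "{{ dev | basename }}"
      {% endfor %}
      }

# 'multipath -r' reloads the multipath maps so the new blacklist takes
# effect; unlike 'multipath -F', it does not fail on the in-use map
# backing the root filesystem.
- name: Reload multipath device maps
  command: multipath -r
</snip>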
Raising this issue as a blocker, as it leads to a bad user experience when the node is rebooted before deployment.
Verified with gluster-ansible-infra-1.0.4-10.el8rhgs. There is no longer a task that tries to flush the multipath devices when blacklisting the gluster brick devices.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:2575