Bug 1830910 - RHHI-V deployment fails when deployment attempted post reboot of RHVH node
Summary: RHHI-V deployment fails when deployment attempted post reboot of RHVH node
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-ansible
Version: rhgs-3.5
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 2
Assignee: Gobinda Das
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: 1830909
 
Reported: 2020-05-04 10:11 UTC by SATHEESARAN
Modified: 2020-06-16 05:57 UTC
CC List: 6 users

Fixed In Version: gluster-ansible-infra-1.0.4-10.el8rhgs
Doc Type: No Doc Update
Doc Text:
Clone Of: 1830909
Environment:
Last Closed: 2020-06-16 05:57:32 UTC
Embargoed:




Links
System ID / Status / Summary / Last Updated
Github gluster gluster-ansible-infra pull 96 / closed / Removed Flush all empty multipath devices / 2020-10-23 15:08:13 UTC
Red Hat Product Errata RHEA-2020:2575 / None / None / 2020-06-16 05:57:50 UTC

Description SATHEESARAN 2020-05-04 10:11:58 UTC
Description of problem:
-----------------------
When the RHVH node is rebooted, the boot/OS disk gets a multipath name.
During RHHI-V deployment from cockpit, the ansible task that blacklists the gluster brick devices runs 'multipath -F', which also tries to flush the multipath name on the boot/OS disk. That flush fails because the root filesystem is already active on the boot disk.

The quick solution here is not to run 'multipath -F', which tries to flush the multipath names on all disks. Reloading multipath is enough to activate the blacklisting of the disks, and flushing the multipath device names is not required.
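
For illustration, a minimal Ansible sketch of that reload-only approach is below: the brick devices are written into a multipath blacklist drop-in and multipathd is reconfigured, with no 'multipath -F' step. The play name, variable name, blacklist file path and reload command are assumptions for this sketch, not the actual gluster-ansible-infra role code.

---
# Hypothetical sketch, not the gluster-ansible-infra implementation.
- name: Blacklist gluster brick devices without flushing multipath maps
  hosts: hc_nodes
  become: true
  vars:
    # Assumed variable; the real role derives the brick device list from its own inputs
    brick_devices:
      - sdb
      - sdc
  tasks:
    - name: Add brick devices to a multipath blacklist drop-in (assumed path)
      blockinfile:
        path: /etc/multipath/conf.d/gluster-blacklist.conf
        create: true
        block: |
          blacklist {
          {% for dev in brick_devices %}
              devnode "^{{ dev }}$"
          {% endfor %}
          }

    - name: Reload the multipath configuration instead of flushing maps
      command: multipathd reconfigure
      changed_when: true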


Version-Release number of selected component (if applicable):
--------------------------------------------------------------
gluster-ansible-infra-1.0.4-8.el8rhgs

How reproducible:
------------------
Always

Steps to Reproduce:
-------------------
1. Install the RHVH node and reboot the host after successful installation
2. Reboot the RHVH node again, after it comes up from its first boot
3. Start the RHHI-V deployment from the web console (cockpit) with the 'blacklist gluster devices' option enabled

Actual results:
----------------
RHHI-V deployment fails because the root/boot/OS disk has a multipath name that cannot be flushed

Expected results:
-----------------
RHHI-V deployment should proceed even though the root/OS disk has a multipath name

Additional info:
-----------------
<snip>
TASK [gluster.infra/roles/backend_setup : Flush all empty multipath devices] ***
fatal: [10.70.35.96]: FAILED! => {"changed": true, "cmd": "multipath -F", "delta": "0:00:00.055945", "end": "2020-05-04 08:11:26.787117", "msg": "non-zero return code", "rc": 1, "start": "2020-05-04 08:11:26.731172", "stderr": "May 04 08:11:26 | 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0p2: map in use", "stderr_lines": ["May 04 08:11:26 | 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0p2: map in use"], "stdout": "", "stdout_lines": []}
changed: [10.70.35.151]
changed: [10.70.35.136]
</snip>

In this case, the 10.70.35.96 node was rebooted and had a multipath name created for /dev/sda
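
To confirm that state on a rebooted node, one rough diagnostic is to compare the active multipath maps against the block device that carries the root filesystem. The playbook below is only an illustrative sketch (host group and task layout are assumptions); it runs the standard 'multipath -ll' and 'lsblk' commands and prints their output for inspection.

---
# Diagnostic sketch only; not part of the RHHI-V deployment playbooks.
- name: Check whether the boot disk is claimed by multipath
  hosts: all
  become: true
  tasks:
    - name: List the active multipath maps
      command: multipath -ll
      register: mpath_maps
      changed_when: false

    - name: Show the block device tree with mountpoints
      command: lsblk -o NAME,TYPE,MOUNTPOINT
      register: blk_tree
      changed_when: false

    - name: Print maps and mounts for manual comparison
      debug:
        msg:
          - "{{ mpath_maps.stdout_lines }}"
          - "{{ blk_tree.stdout_lines }}"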

Comment 1 SATHEESARAN 2020-05-04 10:16:28 UTC
Raising this issue as a blocker, as it leads to a bad user experience when the node is rebooted before deployment

Comment 6 SATHEESARAN 2020-06-06 11:37:29 UTC
Verified with gluster-ansible-infra-1.0.4-10.el8rhgs

There is no longer a task that tries to flush the multipath devices when blacklisting the gluster brick devices

Comment 8 errata-xmlrpc 2020-06-16 05:57:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:2575

