Bug 1830910

Summary: RHHI-V deployment fails when deployment attempted post reboot of RHVH node
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: SATHEESARAN <sasundar>
Component: gluster-ansible
Assignee: Gobinda Das <godas>
Status: CLOSED ERRATA
QA Contact: SATHEESARAN <sasundar>
Severity: high
Docs Contact:
Priority: high
Version: rhgs-3.5
CC: godas, pprakash, puebele, rhs-bugs, sabose, sasundar
Target Milestone: ---
Keywords: ZStream
Target Release: RHGS 3.5.z Batch Update 2
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: gluster-ansible-infra-1.0.4-10.el8rhgs
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of: 1830909
Environment:
Last Closed: 2020-06-16 05:57:32 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1830909

Description SATHEESARAN 2020-05-04 10:11:58 UTC
Description of problem:
-----------------------
When the RHVH node is rebooted, the boot/OS disk gets a multipath name.
During RHHI-V deployment from Cockpit, the Ansible task that blacklists the gluster brick devices runs 'multipath -F', which also tries to flush the multipath map on the boot/OS disk. That flush fails because the root filesystem is already active on the boot disk.

The quick solution here is to not run 'multipath -F', which tries to flush the multipath maps on all disks. Reloading multipath is enough to activate the blacklisting of the disks; flushing the multipath map names is not required.
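
For illustration, a minimal sketch of the blacklist-and-reload approach (the brick device name, WWID lookup, and conf.d drop-in path are assumptions for the example, not the actual change made in the gluster.infra role):

# Sketch: blacklist one gluster brick device and reload multipath,
# instead of flushing every map with 'multipath -F'.
BRICK_DEV=/dev/sdb                                         # placeholder brick device

# Look up the device WWID to use as the blacklist key.
WWID=$(/usr/lib/udev/scsi_id --whitelisted --device="$BRICK_DEV")

# Drop-in blacklist entry (multipathd reads /etc/multipath/conf.d/*.conf by default).
cat > /etc/multipath/conf.d/gluster-blacklist.conf <<EOF
blacklist {
    wwid "$WWID"
}
EOF

# Reload so the blacklist takes effect; no global flush of maps is needed.
multipathd reconfigure          # 'multipath -r' similarly reloads the device maps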


Version-Release number of selected component (if applicable):
--------------------------------------------------------------
gluster-ansible-infra-1.0.4-8.el8rhgs

How reproducible:
------------------
Always

Steps to Reproduce:
-------------------
1. Install the RHVH node and reboot the host after successful installation
2. Reboot the RHVH node again after it comes up from the first boot (the boot/OS disk now gets a multipath name; see the check below)
3. Start the RHHI-V deployment from the web console (Cockpit) with the 'blacklist gluster devices' option enabled
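
To confirm that the boot/OS disk picked up a multipath map after the reboot, the following commands can be used (device names and output will vary per host):

multipath -ll                      # the boot disk's WWID should be listed as an active map
lsblk -o NAME,TYPE,MOUNTPOINT      # the root filesystem sits on top of an 'mpath' device
findmnt /                          # shows the device-mapper source backing '/'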

Actual results:
----------------
RHHI-V deployment fails because the root/boot/OS disk has a multipath name that cannot be flushed

Expected results:
-----------------
RHHI-V deployment should proceed even though the root/OS disk has a multipath name

Additional info:
-----------------
<snip>
TASK [gluster.infra/roles/backend_setup : Flush all empty multipath devices] ***
fatal: [10.70.35.96]: FAILED! => {"changed": true, "cmd": "multipath -F", "delta": "0:00:00.055945", "end": "2020-05-04 08:11:26.787117", "msg": "non-zero return code", "rc": 1, "start": "2020-05-04 08:11:26.731172", "stderr": "May 04 08:11:26 | 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0p2: map in use", "stderr_lines": ["May 04 08:11:26 | 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0p2: map in use"], "stdout": "", "stdout_lines": []}
changed: [10.70.35.151]
changed: [10.70.35.136]
</snip>

In this case, the node 10.70.35.96 was rebooted and had a multipath name created for /dev/sda
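
For context, the failure can be inspected manually on the rebooted node (map names come from the log above; exact output varies):

findmnt /        # '/' is backed by the multipathed boot disk
dmsetup info -c  # the boot-disk map reports a non-zero open count, so it cannot be removed
multipath -F     # hence the global flush exits non-zero with "map in use"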

Comment 1 SATHEESARAN 2020-05-04 10:16:28 UTC
Raising this issue as a blocker because it leads to a bad user experience when the node is rebooted before deployment

Comment 6 SATHEESARAN 2020-06-06 11:37:29 UTC
Verified with gluster-ansible-infra-1.0.4-10.el8rhgs

There is now no task that tries to flush the multipath devices when blacklisting the gluster brick devices
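
One way to spot-check this on an installed node (a sketch; the exact file layout of the package may differ):

# The backend_setup role shipped in gluster-ansible-infra should no longer
# carry a 'multipath -F' flush task.
rpm -ql gluster-ansible-infra | xargs grep -l "multipath -F" 2>/dev/null    # expect no output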

Comment 8 errata-xmlrpc 2020-06-16 05:57:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:2575