Bug 1830909 - RHHI-V deployment fails when deployment attempted post reboot of RHVH node
Summary: RHHI-V deployment fails when deployment attempted post reboot of RHVH node
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhgs-3.5
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHHI-V 1.8
Assignee: Gobinda Das
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1830910
Blocks: RHHI-V-1.8-Engineering-Inflight-BZs
 
Reported: 2020-05-04 10:09 UTC by SATHEESARAN
Modified: 2020-08-04 14:52 UTC
CC List: 2 users

Fixed In Version: gluster-ansible-infra-1.0.4-10.el8rhgs
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1830910
Environment:
Last Closed: 2020-08-04 14:52:09 UTC
Embargoed:




Links
Red Hat Product Errata RHEA-2020:3314 (last updated 2020-08-04 14:52:26 UTC)

Description SATHEESARAN 2020-05-04 10:09:39 UTC
Description of problem:
-----------------------
When the RHVH node is rebooted, the boot/OS disk gets a multipath name.
During RHHI-V deployment from cockpit, the ansible task that blacklists the gluster brick devices runs 'multipath -F', which also tries to flush the multipath map on the boot/OS disk. That flush fails because the root filesystem is already active on the boot disk.
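
A quick way to see this state on the rebooted node (illustrative commands only; the device and map names are examples from this setup):

# The root filesystem now sits on a multipath map rather than on the plain disk
findmnt -n -o SOURCE /

# The boot/OS disk (/dev/sda here) appears under a multipath name,
# e.g. 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0
multipath -ll
lsblk -o NAME,TYPE,MOUNTPOINT /dev/sda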

The quick solution here is to not run 'multipath -F', which tries to flush the multipath maps on all disks. Reloading multipath is enough to activate the blacklisting of the disks; flushing the multipath device maps is not required.
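
A minimal sketch of that approach (the drop-in path and the wwid below are illustrative assumptions, not the exact change made in gluster-ansible-infra):

# Blacklist a gluster brick device through a multipath drop-in file
cat > /etc/multipath/conf.d/gluster-blacklist.conf << 'EOF'
blacklist {
    wwid "<brick-device-wwid>"
}
EOF

# Reload multipath instead of flushing all maps; this does not touch
# the in-use map on the boot/OS disk
multipath -r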


Version-Release number of selected component (if applicable):
--------------------------------------------------------------
gluster-ansible-infra-1.0.4-8.el8rhgs

How reproducible:
------------------
Always

Steps to Reproduce:
-------------------
1. Install the RHVH node and reboot the host after successful installation
2. Reboot the RHVH node again, after the node comes up from the successful first boot
3. Start the RHHI-V deployment from the web console (cockpit) with the 'blacklist gluster devices' option enabled

Actual results:
----------------
RHHI-V deployment fails because the root/boot/OS disk has a multipath name that cannot be flushed

Expected results:
-----------------
RHHI-V deployment should proceed even though the root/OS disk has a multipath name

Additional info:
-----------------
<snip>
TASK [gluster.infra/roles/backend_setup : Flush all empty multipath devices] ***
fatal: [10.70.35.96]: FAILED! => {"changed": true, "cmd": "multipath -F", "delta": "0:00:00.055945", "end": "2020-05-04 08:11:26.787117", "msg": "non-zero return code", "rc": 1, "start": "2020-05-04 08:11:26.731172", "stderr": "May 04 08:11:26 | 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0p2: map in use", "stderr_lines": ["May 04 08:11:26 | 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0p2: map in use"], "stdout": "", "stdout_lines": []}
changed: [10.70.35.151]
changed: [10.70.35.136]
</snip>

In this case, the 10.70.35.96 node was rebooted and a multipath name had been created for /dev/sda
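
For reference, the failure is specific to the flush-all form of the command (the map name is taken from the log above; any brick-device map name would be site-specific):

# Flushing all maps hits the in-use map backing the root filesystem and fails:
multipath -F    # "0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0p2: map in use", rc=1

# Flushing a single unused map would succeed, but is not needed once the
# brick devices are blacklisted and multipath is reloaded:
multipath -f <unused-map-name>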

Comment 2 SATHEESARAN 2020-06-06 11:38:22 UTC
Verified with gluster-ansible-infra-1.0.4-10.el8rhgs

With this build there is no longer a task that flushes the multipath maps when blacklisting the gluster brick devices
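
A quick way to sanity-check this on a deployed host (the role path below is an assumption; adjust it to wherever gluster.infra is installed):

# No task in the backend_setup role should run 'multipath -F' any more
grep -rn "multipath -F" /etc/ansible/roles/gluster.infra/ || echo "no flush-all task found"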

Comment 4 errata-xmlrpc 2020-08-04 14:52:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RHHI for Virtualization 1.8 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3314

