Bug 1821118 - vdsm configured with force option even when multipath.conf file already exists
Summary: vdsm configured with force option even when multipath.conf file already exists
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-ansible
Version: rhhiv-1.8
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 2
Assignee: Gobinda Das
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: 1821117
 
Reported: 2020-04-06 05:18 UTC by SATHEESARAN
Modified: 2020-06-16 05:57 UTC
CC List: 6 users

Fixed In Version: gluster-ansible-infra-1.0.4-8.el8rhgs
Doc Type: No Doc Update
Doc Text:
Clone Of: 1821117
Environment:
rhhiv, rhel8
Last Closed: 2020-06-16 05:57:30 UTC
Embargoed:


Links
Github gluster gluster-ansible-infra pull 91 (closed): Usecase for luks device with blacklist - last updated 2020-06-03 05:29:56 UTC
Red Hat Product Errata RHEA-2020:2575 - last updated 2020-06-16 05:57:47 UTC

Description SATHEESARAN 2020-04-06 05:18:08 UTC
Description of problem:
-----------------------
The current device-blacklisting flow configures vdsm with the force option in order to generate /etc/multipath.conf, so that the multipathd service can be started. This is fine for a fresh installation, which does not yet have an /etc/multipath.conf file.
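
In Ansible terms, the fresh-install flow described above amounts to roughly the following (an illustrative sketch with made-up task names, not the literal role code):

    - name: Configure vdsm with --force so that /etc/multipath.conf is generated
      command: vdsm-tool configure --force

    - name: Start multipathd once the configuration file exists
      service:
        name: multipathd
        state: started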

But during day-2 operations, such as creating a new volume or expanding the cluster, it once again configures vdsm with the force option, which overrides the existing vdsm configuration.

So the solution is to configure vdsm with force if and only if the /etc/multipath.conf file is *not* present.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
gluster-ansible-infra-1.0.4-7

How reproducible:
-----------------
Always

Steps to Reproduce:
--------------------
1. Complete the RHHI-V deployment with the gluster devices blacklisted
2. On day 2, start a volume creation or cluster expansion operation

Actual results:
-----------------
vdsm is configured once again, losing the existing configuration

Expected results:
-----------------
As the /etc/multipath.conf file already exists, configuring vdsm with the force option is not required

Additional info:
----------------
Code logic should be:

if (the file /etc/multipath.conf is *not* present):
    vdsm-tool configure --force
    Blacklist_devices()
else:
    Blacklist_devices()
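
In the gluster-ansible-infra role this could be expressed by registering a stat of the file and gating the configure task on it, roughly as follows (task and variable names are illustrative, not the actual role code):

    - name: Check whether /etc/multipath.conf already exists
      stat:
        path: /etc/multipath.conf
      register: multipath_conf

    - name: Configure vdsm with --force only when /etc/multipath.conf is absent
      command: vdsm-tool configure --force
      when: not multipath_conf.stat.exists

The blacklisting of the gluster devices then runs unconditionally after this check.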

Comment 1 SATHEESARAN 2020-04-07 04:40:42 UTC
Proposing this bug as a BLOCKER, as it is required for the RHHI-V 1.8 RFE

Comment 4 SATHEESARAN 2020-04-18 06:51:06 UTC
Verified with gluster-ansible-infra-1.0.4-8.el8rhgs
1. When /etc/multipath.conf is already available, the task that configures vdsm (which runs 'vdsm-tool configure --force') is skipped.

<snip>
TASK [gluster.infra/roles/backend_setup : Ensure that multipathd services is enabled if not] **********************************************************************************************************************
skipping: [10.70.35.151]
skipping: [10.70.35.96]
skipping: [10.70.35.136]
</snip>

Comment 6 errata-xmlrpc 2020-06-16 05:57:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:2575

