Description of problem:
-----------------------
The current way of blacklisting gluster devices configures vdsm with the force option to generate /etc/multipath.conf, so that the multipathd service can be started. This is fine for a fresh installation, which does not yet have /etc/multipath.conf. But on day-2 operations, when creating a new volume or expanding the cluster, vdsm is configured with the force option once again, which overrides the existing vdsm configuration.

So the solution is to configure vdsm with force only if the /etc/multipath.conf file is *not* present.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
gluster-ansible-infra-1.0.4-7

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Complete RHHI-V deployment by blacklisting the gluster devices
2. From day 2, start with a volume creation or cluster expansion operation

Actual results:
---------------
vdsm is configured once again, losing the old value

Expected results:
-----------------
As the /etc/multipath.conf file already exists, configuring vdsm with the force option is not required

Additional info:
----------------
Code logic should be:

if (the file /etc/multipath.conf is *not* present):
    vdsm-tool configure --force
    Blacklist_devices()
else:
    Blacklist_devices()
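The logic above can be sketched in shell. This is a hedged illustration, not the actual gluster-ansible-infra task: the function names (configure_and_blacklist, blacklist_devices) are hypothetical placeholders, and the vdsm-tool invocation is only echoed rather than executed.

```shell
#!/bin/sh
# Sketch of the proposed fix: only force vdsm configuration when
# /etc/multipath.conf does not exist yet (fresh installation).
# blacklist_devices is a hypothetical stand-in for the real
# gluster-device blacklisting step performed by the role.

blacklist_devices() {
    echo "blacklisting gluster devices"
}

configure_and_blacklist() {
    conf="$1"   # path to the multipath configuration file
    if [ ! -f "$conf" ]; then
        # Fresh installation: no multipath.conf yet, so (re)generate it.
        # In the real role this would actually run the command below.
        echo "running: vdsm-tool configure --force"
    fi
    # Day-2 operation falls through here without touching the vdsm config.
    blacklist_devices
}
```

A caller would invoke `configure_and_blacklist /etc/multipath.conf`; on an existing deployment only the blacklisting step runs, preserving the current vdsm configuration.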
Proposing this bug as BLOCKER, as it's required for the RHHI-V 1.8 RFE.
Verified with gluster-ansible-infra-1.0.4-8.el8rhgs

1. When /etc/multipath.conf is already available, the task to configure vdsm (that runs "vdsm-tool configure --force") is skipped.

<snip>
TASK [gluster.infra/roles/backend_setup : Ensure that multipathd services is enabled if not] **********************************************************************************************************************
skipping: [10.70.35.151]
skipping: [10.70.35.96]
skipping: [10.70.35.136]
</snip>
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2020:2575