Description of problem:
------------------------
During RHHI-V deployment, all the devices are blacklisted. This decision should be made by the admin, as the setup may or may not have multipath disks.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHHI-V 1.6
gluster-ansible-roles-1.0.4-4

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Complete the gluster deployment procedure of RHHI-V deployment

Actual results:
---------------
1. /etc/multipath.conf file is generated with 'vdsm-tool configure --force'
2. /etc/multipath.conf file is edited to blacklist all the devices

Expected results:
-----------------
Do not perform any multipath configuration to blacklist the disks

Additional info:
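For context, the "blacklist all the devices" behavior described above corresponds to a multipath.conf stanza like the following. This is an illustrative fragment only; the actual file generated by 'vdsm-tool configure --force' contains additional sections.

```
# Illustrative /etc/multipath.conf fragment (not the full generated file):
# the devnode "*" pattern blacklists every device node from multipath,
# which is the blanket behavior this bug reports against.
blacklist {
        devnode "*"
}
```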
The dependent bug is already ON_QA, moving this bug to ON_QA
Tested with RHVH 4.3.5 + RHEL 7.7 + RHGS 3.4.4 (interim build - glusterfs-6.0-6) with ansible 2.8.1-1 and:

gluster-ansible-features-1.0.5-2.el7rhgs.noarch
gluster-ansible-roles-1.0.5-2.el7rhgs.noarch
gluster-ansible-infra-1.0.4-3.el7rhgs.noarch

Gluster configuration is done with the following modifications:
1. Glusterfs systemd slice configuration removed in favor of configuring the same while adding the host to the RHVM managed cluster
2. Configuration of multipath removed: no blacklisting of devices
3. Package installation step removed, as RHVH has all the packages available
doc_text is required for this bug, as these changes differ from the previously known behavior:

"Multipath is no longer configured to blacklist all the local devices. It is left to the admin to provide the multipath WWIDs during cockpit deployment, in case multipath is configured in the setup."

@Sachi, could you provide doc_text for this bug so the doc team can add it to the 'Release Notes'?
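As a sketch of the new behavior, an admin whose setup has multipath disks would supply the WWIDs of the local devices to blacklist during cockpit deployment, producing a targeted stanza rather than a blanket one. The WWID below is a placeholder value, not taken from any real setup.

```
# Illustrative /etc/multipath.conf fragment (WWID is a placeholder):
# only the admin-specified local device is blacklisted from multipath,
# instead of blacklisting every device with devnode "*".
blacklist {
        wwid "3600508b400105e210000900000490000"
}
```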
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2963