Description of problem:
-----------------------
When the gluster devices are blacklisted, the /etc/multipath/conf.d/blacklist.conf file is created. Cleanup removes this blacklist.conf file. This works well during deployment, but it conflicts in one particular case:

1. Deployment blacklists the gluster brick devices
2. A Day 2 operation (volume creation or cluster expansion) tries to create new volumes and fails, and the user performs a cleanup

This cleanup also alters the previously blacklisted disks.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
gluster-ansible-infra-1.0.4-7

How reproducible:
------------------
Always

Steps to Reproduce:
-------------------
1. Complete the RHHI-V deployment with the gluster devices blacklisted
2. On Day 2, create a new volume and choose incorrect disks (to make sure it fails)
3. Perform cleanup

Actual results:
---------------
The /etc/multipath/conf.d/blacklist.conf file is removed as part of cleanup, also altering the previously blacklisted devices.

Expected results:
-----------------
Cleanup of a device should not remove the entire /etc/multipath/conf.d/blacklist.conf file; instead it should remove only the entry corresponding to the device being cleaned up.
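A minimal sketch of the expected per-device behavior, assuming the blacklist entries are written one per device as wwid "<WWID>" lines in /etc/multipath/conf.d/blacklist.conf. The task names and the device_wwid variable below are hypothetical illustrations, not the role's actual implementation:

---
# Hypothetical cleanup tasks: remove only the blacklist entry of the device
# being cleaned up instead of deleting the whole file.
- name: Remove only the blacklist entry for the cleaned-up device
  ansible.builtin.lineinfile:
    path: /etc/multipath/conf.d/blacklist.conf
    # device_wwid is a hypothetical variable holding the WWID of the device
    # that is being cleaned up
    regexp: '^\s*wwid\s+"{{ device_wwid }}"'
    state: absent

- name: Reload multipathd so the remaining blacklist entries stay in effect
  ansible.builtin.command: multipathd reconfigure
  changed_when: true

This keeps the entries for devices blacklisted during deployment intact, which is the behavior requested above.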
Proposing this bug as a BLOCKER, as it is required for the RHHI-V 1.8 RFE.
Tested with gluster-ansible-infra-1.0.4-8.el8rhgs:

1. After the failed setup, ran the cleanup playbook (/etc/ansible/roles/gluster-ansible/hc-ansible-deployment/tasks/luks_device_cleanup.yml)
2. Cleanup does not remove the entire /etc/multipath/conf.d/blacklist.conf; instead it removes only the corresponding entries from blacklist.conf
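A quick way to confirm this manually is to check that the file survives the cleanup and inspect the entries that remain. The playbook below is an illustrative sketch; the hc_nodes inventory group is an assumption, not part of the cleanup playbook:

---
# Hypothetical verification playbook
- name: Verify blacklist.conf survives the cleanup
  hosts: hc_nodes
  gather_facts: false
  tasks:
    - name: Check that the blacklist file still exists
      ansible.builtin.stat:
        path: /etc/multipath/conf.d/blacklist.conf
      register: blacklist_stat
      failed_when: not blacklist_stat.stat.exists

    - name: Read the remaining blacklist entries
      ansible.builtin.command: cat /etc/multipath/conf.d/blacklist.conf
      register: blacklist_contents
      changed_when: false

    - name: Show what is left in the blacklist
      ansible.builtin.debug:
        var: blacklist_contents.stdout_lines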
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:2575