Bug 1821207 - RHHI-V cleanup removes the blacklist.conf file, disrupting the existing blacklist configuration
Summary: RHHI-V cleanup removes the blacklist.conf file, disrupting the existing blacklist configuration
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.8
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHHI-V 1.8
Assignee: Gobinda Das
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1821212
Blocks: RHHI-V-1.8-Engineering-Inflight-BZs
 
Reported: 2020-04-06 09:54 UTC by SATHEESARAN
Modified: 2020-08-04 14:52 UTC
CC List: 1 user

Fixed In Version: gluster-ansible-infra-1.0.4-8.el8rhgs
Doc Type: No Doc Update
Doc Text:
Clone Of:
: 1821212 (view as bug list)
Environment:
Last Closed: 2020-08-04 14:52:07 UTC
Embargoed:




Links
System: Red Hat Product Errata
ID: RHEA-2020:3314
Private: 0
Priority: None
Status: None
Summary: None
Last Updated: 2020-08-04 14:52:25 UTC

Description SATHEESARAN 2020-04-06 09:54:31 UTC
Description of problem:
-----------------------
When gluster devices are blacklisted, the file /etc/multipath/conf.d/blacklist.conf is created. Cleanup removes this blacklist.conf file. That works well during the initial deployment, but it conflicts in one particular case:
1. The deployment blacklists the gluster brick devices
2. A Day-2 operation (volume creation or cluster expansion) tries to create new volumes and fails, so the user runs a cleanup
This cleanup also removes the blacklist configuration for the previously blacklisted disks.
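
For illustration, a blacklist file of this kind typically holds one wwid entry per blacklisted device (the wwids below are made up):

  # /etc/multipath/conf.d/blacklist.conf
  blacklist {
      wwid "3600508b1001c1234567890abcdef0000"
      wwid "3600508b1001c1234567890abcdef0001"
  }

Removing the whole file therefore drops the blacklist for every device, not only for the devices being cleaned up.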


Version-Release number of selected component (if applicable):
--------------------------------------------------------------
gluster-ansible-infra-1.0.4-7

How reproducible:
------------------
Always

Steps to Reproduce:
-------------------
1. Complete the RHHI-V deployment, blacklisting the gluster devices
2. On Day 2, create a new volume and deliberately choose incorrect disks (to make sure it fails)
3. Perform cleanup (one way to drive it is sketched below)
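
For step 3, a minimal sketch of one way to drive the cleanup, assuming a hypothetical wrapper playbook around the shipped tasks file (cleanup.yml and the hc_nodes group name are illustrative):

  # cleanup.yml - hypothetical wrapper playbook
  - hosts: hc_nodes
    become: true
    tasks:
      - name: Run the RHHI-V device cleanup tasks
        include_tasks: /etc/ansible/roles/gluster-ansible/hc-ansible-deployment/tasks/luks_device_cleanup.yml

Run with: ansible-playbook -i <inventory> cleanup.yml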

Actual results:
---------------
The /etc/multipath/conf.d/blacklist.conf file is removed as part of cleanup, which also destroys the blacklist configuration for the previously blacklisted devices


Expected results:
-----------------
Cleanup of a device should not remove the entire /etc/multipath/conf.d/blacklist.conf file; instead it should remove only the entry corresponding to the device being cleaned up (a sketch of this approach follows)
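
A minimal sketch of such an entry-level removal, assuming an Ansible lineinfile task and a hypothetical device_wwid variable (an illustration only, not the actual gluster-ansible-infra change):

  - name: Remove only this device's blacklist entry
    lineinfile:
      path: /etc/multipath/conf.d/blacklist.conf
      # device_wwid is a hypothetical variable holding the wwid of the
      # device being cleaned up; only the matching line is removed
      regexp: 'wwid "{{ device_wwid }}"'
      state: absent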

Comment 2 SATHEESARAN 2020-04-18 06:36:54 UTC
Tested with gluster-ansible-infra-1.0.4-8.el8rhgs

1. After the failed setup, ran cleanup using the cleanup playbook (/etc/ansible/roles/gluster-ansible/hc-ansible-deployment/tasks/luks_device_cleanup.yml)
2. Cleanup no longer removes the entire /etc/multipath/conf.d/blacklist.conf; it removes only the entries for the cleaned-up devices
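
For illustration, with the fixed build the file changes as follows when one device is cleaned up (made-up wwids):

  # before cleanup
  blacklist {
      wwid "3600508b1001c1234567890abcdef0000"
      wwid "3600508b1001c1234567890abcdef0001"
  }

  # after cleaning up the device with the second wwid
  blacklist {
      wwid "3600508b1001c1234567890abcdef0000"
  }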

Comment 4 errata-xmlrpc 2020-08-04 14:52:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RHHI for Virtualization 1.8 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3314

