Bug 1821212 - RHHI-V cleanup is removing the blacklist.conf file, disrupting the existing blacklist configuration
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-ansible
Version: rhgs-3.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 2
Assignee: Gobinda Das
QA Contact: SATHEESARAN
Blocks: 1821207
 
Reported: 2020-04-06 10:08 UTC by SATHEESARAN
Modified: 2020-06-16 05:57 UTC
CC: 6 users

Fixed In Version: gluster-ansible-infra-1.0.4-8.el8rhgs,gluster-ansible-roles-1.0.5-8.el8rhgs
Doc Type: No Doc Update
Clone Of: 1821207
Environment: rhhiv, rhel8
Last Closed: 2020-06-16 05:57:30 UTC




Links
Github gluster gluster-ansible pull 101 (closed): Remove specified device from blacklist - 2020-06-03 05:29:55 UTC
Red Hat Product Errata RHEA-2020:2575 - 2020-06-16 05:57:47 UTC

Description SATHEESARAN 2020-04-06 10:08:37 UTC
Description of problem:
-----------------------
When the gluster devices are blacklisted, the file /etc/multipath/conf.d/blacklist.conf is created. When cleanup is performed, it removes this blacklist.conf file. That works well during deployment, but it conflicts in one particular case:
1. Deployment blacklists the gluster brick devices
2. A Day-2 operation (volume creation or cluster expansion) tries to create new volumes and fails, and the user runs a cleanup
This cleanup also disturbs the previously blacklisted disks, because removing the whole file drops their entries as well (see the sample file below).
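
For illustration, the blacklist file written during deployment looks something like this (the WWIDs below are hypothetical placeholders, not values from this setup):

    # /etc/multipath/conf.d/blacklist.conf
    blacklist {
        wwid "3600508b1001c7710beef0000deadbeef"
        wwid "3600508b1001c7710beef0001cafef00d"
    }

Deleting the whole file while cleaning up a single device therefore un-blacklists every other device listed in it.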


Version-Release number of selected component (if applicable):
--------------------------------------------------------------
gluster-ansible-infra-1.0.4-7

How reproducible:
------------------
Always

Steps to Reproduce:
-------------------
1. Complete RHHI-V deployment by blacklisting gluster devices
2. On Day 2, create a new volume and deliberately choose the incorrect disks (to make sure it fails)
3. Perform cleanup

Actual results:
---------------
The /etc/multipath/conf.d/blacklist.conf file is removed as part of cleanup, which also removes the entries of the previously blacklisted devices


Expected results:
-----------------
Cleanup of a device should not remove the entire /etc/multipath/conf.d/blacklist.conf file; instead, it should remove only the entry corresponding to the device being cleaned up (see the sketch below)
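
As a rough sketch of the expected behaviour, an Ansible task along these lines would delete only the matching entry (the task layout and the device_wwid variable are illustrative assumptions, not the actual change from the linked pull request):

    - name: Remove only the cleaned-up device's entry from the multipath blacklist
      lineinfile:
        path: /etc/multipath/conf.d/blacklist.conf
        regexp: 'wwid "{{ device_wwid }}"'  # device_wwid is a hypothetical variable
        state: absent

With state: absent, lineinfile deletes only the lines matching the pattern, so entries for other blacklisted devices survive the cleanup. The real fix is in gluster-ansible pull 101 linked above.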

Comment 1 SATHEESARAN 2020-04-07 04:42:03 UTC

Proposing this bug as a BLOCKER, as it is required for the RHHI-V 1.8 RFE

Comment 4 SATHEESARAN 2020-04-18 06:36:03 UTC
Tested with gluster-ansible-infra-1.0.4-8.el8rhgs

1. After the failed setup, tried to clean up using the cleanup playbook (/etc/ansible/roles/gluster-ansible/hc-ansible-deployment/tasks/luks_device_cleanup.yml)
2. Cleanup doesn't remove the entire /etc/multipath/conf.d/blacklist.conf; instead it removes only the entries for the cleaned-up devices from blacklist.conf

Comment 6 errata-xmlrpc 2020-06-16 05:57:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:2575

