Bug 1898005 - Deployment fails skipping clearing LVM filter task
Summary: Deployment fails skipping clearing LVM filter task
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-ansible
Version: rhhiv-1.8
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.5.z Async Update
Assignee: Gobinda Das
QA Contact: milind
URL:
Whiteboard:
Depends On:
Blocks: 1898002
 
Reported: 2020-11-16 05:29 UTC by SATHEESARAN
Modified: 2020-11-24 12:38 UTC (History)

Fixed In Version: gluster-ansible-infra-1.0.4-17.el8rhgs
Doc Type: No Doc Update
Doc Text:
Clone Of: 1898002
Environment:
Last Closed: 2020-11-24 12:37:41 UTC
Embargoed:




Links:
Red Hat Product Errata RHBA-2020:5220 (last updated 2020-11-24 12:38:06 UTC)

Description SATHEESARAN 2020-11-16 05:29:20 UTC
Description of problem:
Deployment fails as the LVM filter is not cleared before the LV creation.
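For context, a hedged sketch of what "clearing the LVM filter" amounts to (the filter line, file path, and device pattern below are illustrative assumptions, not taken from the failed hosts): vdsm writes a restrictive `filter =` entry into lvm.conf, and the deployment role is expected to remove it before creating new LVs.

```shell
# Illustrative only: a restrictive vdsm-style filter in a throwaway copy of lvm.conf
conf="$(mktemp)"
cat > "$conf" <<'EOF'
devices {
    filter = ["a|^/dev/sda2$|", "r|.*|"]
}
EOF
# "Clearing" the filter means deleting the filter line so other disks become visible to LVM
sed -i '/filter[[:space:]]*=/d' "$conf"
result="$(grep -q filter "$conf" && echo 'filter still present' || echo 'filter cleared')"
echo "$result"
rm -f "$conf"
```

When the Ansible task that does this is skipped, LV creation on the data disks fails because LVM still rejects them under the old filter.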



Version-Release number of selected component (if applicable):

gluster-ansible-infra-1.0.4-16.el8rhgs.noarch

How reproducible:


Steps to Reproduce:
1. Log in to the UI and perform gluster deployment


Actual results:
Gluster deployment fails.

Expected results:
Gluster deployment should succeed.

Additional info:

[root@rhsqa-grafton7-nic2 ~]# rpm -qa | grep -i ansible 
gluster-ansible-maintenance-1.0.1-11.el8rhgs.noarch
gluster-ansible-cluster-1.0-3.el8rhgs.noarch
gluster-ansible-features-1.0.5-10.el8rhgs.noarch
gluster-ansible-roles-1.0.5-22.el8rhgs.noarch
gluster-ansible-infra-1.0.4-16.el8rhgs.noarch
ovirt-ansible-collection-1.2.1-1.el8ev.noarch
ansible-2.9.14-1.el8ae.noarch
gluster-ansible-repositories-1.0.1-4.el8rhgs.noarch
[root@rhsqa-grafton7-nic2 ~]# rpm -qa | grep -i gluster
gluster-ansible-maintenance-1.0.1-11.el8rhgs.noarch
glusterfs-6.0-37.1.el8rhgs.x86_64
vdsm-gluster-4.40.35.1-1.el8ev.x86_64
gluster-ansible-cluster-1.0-3.el8rhgs.noarch
glusterfs-cli-6.0-37.1.el8rhgs.x86_64
glusterfs-client-xlators-6.0-37.1.el8rhgs.x86_64
glusterfs-server-6.0-37.1.el8rhgs.x86_64
gluster-ansible-features-1.0.5-10.el8rhgs.noarch
gluster-ansible-roles-1.0.5-22.el8rhgs.noarch
glusterfs-geo-replication-6.0-37.1.el8rhgs.x86_64
qemu-kvm-block-gluster-5.1.0-14.module+el8.3.0+8438+644aff69.x86_64
glusterfs-libs-6.0-37.1.el8rhgs.x86_64
glusterfs-api-6.0-37.1.el8rhgs.x86_64
python3-gluster-6.0-37.1.el8rhgs.x86_64
glusterfs-events-6.0-37.1.el8rhgs.x86_64
gluster-ansible-infra-1.0.4-16.el8rhgs.noarch
libvirt-daemon-driver-storage-gluster-6.6.0-7.module+el8.3.0+8424+5ea525c5.x86_64
glusterfs-fuse-6.0-37.1.el8rhgs.x86_64
glusterfs-rdma-6.0-37.1.el8rhgs.x86_64
gluster-ansible-repositories-1.0.1-4.el8rhgs.noarch
[root@rhsqa-grafton7-nic2 ~]# imgbase w
You are on rhvh-4.4.3.1-0.20201112.0+1
[root@rhsqa-grafton7-nic2 ~]#

--- Additional comment from RHEL Program Management on 2020-11-16 05:11:54 UTC ---

This bug is automatically being proposed for an RHHI-V 1.8.z update of the Red Hat Hyperconverged Infrastructure for Virtualization product, by setting the release flag 'rhiv-1.8.z' to '?'.

--- Additional comment from SATHEESARAN on 2020-11-16 05:24:19 UTC ---

This issue is seen with gluster-ansible-infra-1.0.4-16, which was included in the RHVH 4.4.3-1 async update.
This is a regression: the fix for https://bugzilla.redhat.com/show_bug.cgi?id=1895905 included
a few more changes in a different task, the one that clears the LVM filter.

Changes included in the file /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml (along with the fix for BZ 1895905):

<snip>
- name: Check if vdsm-python package is installed or not
  command: rpm -q vdsm-python
  register: rpm_check
  ignore_errors: yes
  
- name: Exclude LVM Filter rules
  import_tasks: lvm_exclude_filter.yml
  when: rpm_check.rc == 0 and gluster_infra_lvm is defined  # <-- though vdsm-python is installed, gluster_infra_lvm is undefined
</snip>

This change causes the task to be skipped:

<snip>
TASK [gluster.infra/roles/backend_setup : Remove the existing LVM filter] ******
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/lvm_exclude_filter.yml:2
skipping: [rhsqa-grafton7.lab.eng.blr.redhat.com] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [rhsqa-grafton8.lab.eng.blr.redhat.com] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [rhsqa-grafton9.lab.eng.blr.redhat.com] => {"changed": false, "skip_reason": "Conditional result was False"}
</snip>
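A minimal sketch of one possible fix (an assumption for illustration; the actual change shipped in gluster-ansible-infra-1.0.4-17 may differ): drop the `gluster_infra_lvm is defined` clause so the filter-clearing task runs whenever vdsm-python is installed, as it did before the regression.

```yaml
# Hypothetical sketch, not the verified 1.0.4-17 patch
- name: Check if vdsm-python package is installed or not
  command: rpm -q vdsm-python
  register: rpm_check
  ignore_errors: yes

- name: Exclude LVM Filter rules
  import_tasks: lvm_exclude_filter.yml
  when: rpm_check.rc == 0   # run whenever vdsm-python is present
```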

Comment 1 Gobinda Das 2020-11-16 06:36:53 UTC
Hi Milind,
 Is this an end-to-end deployment or a cockpit-based deployment?

Comment 2 milind 2020-11-16 07:38:03 UTC
(In reply to Gobinda Das from comment #1)
> Hi Milind,
>  Is this an end-to-end deployment or a cockpit-based deployment?

Hi Gobinda,
  It's a cockpit-based deployment.

Comment 4 milind 2020-11-16 16:19:25 UTC
The gluster deployment is successful with:
[node.example.com]# rpm -qa| grep -i  gluster-ansible-infra
gluster-ansible-infra-1.0.4-17.el8rhgs.noarch

[node.example.com]# imgbase w
You are on rhvh-4.4.3.1-0.20201112.0+1

Moving this bug to VERIFIED.

Comment 8 errata-xmlrpc 2020-11-24 12:37:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (gluster-ansible bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5220

