Description of problem:
Deployment fails because the LVM filter is not cleared before LV creation.

Version-Release number of selected component (if applicable):
gluster-ansible-infra-1.0.4-16.el8rhgs.noarch

How reproducible:

Steps to Reproduce:
1. Log in to the UI and perform the Gluster deployment.

Actual results:
Gluster deployment fails.

Expected results:
Gluster deployment should succeed.

Additional info:
[root@rhsqa-grafton7-nic2 ~]# rpm -qa | grep -i ansible
gluster-ansible-maintenance-1.0.1-11.el8rhgs.noarch
gluster-ansible-cluster-1.0-3.el8rhgs.noarch
gluster-ansible-features-1.0.5-10.el8rhgs.noarch
gluster-ansible-roles-1.0.5-22.el8rhgs.noarch
gluster-ansible-infra-1.0.4-16.el8rhgs.noarch
ovirt-ansible-collection-1.2.1-1.el8ev.noarch
ansible-2.9.14-1.el8ae.noarch
gluster-ansible-repositories-1.0.1-4.el8rhgs.noarch

[root@rhsqa-grafton7-nic2 ~]# rpm -qa | grep -i gluster
gluster-ansible-maintenance-1.0.1-11.el8rhgs.noarch
glusterfs-6.0-37.1.el8rhgs.x86_64
vdsm-gluster-4.40.35.1-1.el8ev.x86_64
gluster-ansible-cluster-1.0-3.el8rhgs.noarch
glusterfs-cli-6.0-37.1.el8rhgs.x86_64
glusterfs-client-xlators-6.0-37.1.el8rhgs.x86_64
glusterfs-server-6.0-37.1.el8rhgs.x86_64
gluster-ansible-features-1.0.5-10.el8rhgs.noarch
gluster-ansible-roles-1.0.5-22.el8rhgs.noarch
glusterfs-geo-replication-6.0-37.1.el8rhgs.x86_64
qemu-kvm-block-gluster-5.1.0-14.module+el8.3.0+8438+644aff69.x86_64
glusterfs-libs-6.0-37.1.el8rhgs.x86_64
glusterfs-api-6.0-37.1.el8rhgs.x86_64
python3-gluster-6.0-37.1.el8rhgs.x86_64
glusterfs-events-6.0-37.1.el8rhgs.x86_64
gluster-ansible-infra-1.0.4-16.el8rhgs.noarch
libvirt-daemon-driver-storage-gluster-6.6.0-7.module+el8.3.0+8424+5ea525c5.x86_64
glusterfs-fuse-6.0-37.1.el8rhgs.x86_64
glusterfs-rdma-6.0-37.1.el8rhgs.x86_64
gluster-ansible-repositories-1.0.1-4.el8rhgs.noarch

[root@rhsqa-grafton7-nic2 ~]# imgbase w
You are on rhvh-4.4.3.1-0.20201112.0+1
[root@rhsqa-grafton7-nic2 ~]#

--- Additional comment from RHEL Program Management on 2020-11-16 05:11:54 UTC ---

This bug is automatically being proposed for a RHHI-V 1.8.z update at Red Hat Hyperconverged Infrastructure for Virtualization product, by setting the release flag 'rhiv-1.8.z' to '?'.

--- Additional comment from SATHEESARAN on 2020-11-16 05:24:19 UTC ---

This issue is seen with gluster-ansible-infra-1.0.4-16, which was included in the RHVH 4.4.3-1 async update. This is a regression: the fix for https://bugzilla.redhat.com/show_bug.cgi?id=1895905 also changed the separate task that clears the LVM filter.

Changes included in the file /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml (along with the fix for BZ 1895905):

<snip>
- name: Check if vdsm-python package is installed or not
  command: rpm -q vdsm-python
  register: rpm_check
  ignore_errors: yes

- name: Exclude LVM Filter rules
  import_tasks: lvm_exclude_filter.yml
  when: rpm_check.rc == 0 and gluster_infra_lvm is defined  <--- though the vdsm-python package is available, gluster_infra_lvm is undefined
</snip>

This change causes the 'Exclude LVM Filter rules' task to be skipped. A sketch of what the imported task typically does follows; the deployment log after it shows the skip.
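The exact contents of lvm_exclude_filter.yml are not shown in this report. As a rough, illustrative sketch only (not the actual role code), clearing the filter could be as simple as dropping the 'filter =' line from /etc/lvm/lvm.conf:

<snip>
# Illustrative only -- the real lvm_exclude_filter.yml shipped with
# gluster.infra may implement this differently.
- name: Remove the existing LVM filter
  lineinfile:
    path: /etc/lvm/lvm.conf
    regexp: '^\s*filter\s*='
    state: absent
</snip>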
<snip>
TASK [gluster.infra/roles/backend_setup : Remove the existing LVM filter] ******
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/lvm_exclude_filter.yml:2
skipping: [rhsqa-grafton7.lab.eng.blr.redhat.com] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [rhsqa-grafton8.lab.eng.blr.redhat.com] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [rhsqa-grafton9.lab.eng.blr.redhat.com] => {"changed": false, "skip_reason": "Conditional result was False"}
</snip>
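One way the condition could be relaxed so that the cleanup also runs when gluster_infra_lvm is undefined (as in a cockpit-based deployment) is sketched below. This is an assumption for illustration only; the actual change shipped in gluster-ansible-infra-1.0.4-17 is not shown in this report and may differ:

<snip>
# Assumed fix, for illustration only -- the shipped change may differ.
# Run the filter cleanup whenever vdsm-python is installed, without
# also requiring the optional gluster_infra_lvm variable to be defined.
- name: Check if vdsm-python package is installed or not
  command: rpm -q vdsm-python
  register: rpm_check
  ignore_errors: yes
  changed_when: false

- name: Exclude LVM Filter rules
  import_tasks: lvm_exclude_filter.yml
  when: rpm_check.rc == 0
</snip>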
Hi Milind,
Is this an end-to-end deployment or a cockpit-based deployment?
(In reply to Gobinda Das from comment #1)
> Hi Milind,
> Is this an end-to-end deployment or a cockpit-based deployment?

Hi Gobinda,
It's a cockpit-based deployment.
The Gluster deployment is successful with:

[node.example.com]# rpm -qa | grep -i gluster-ansible-infra
gluster-ansible-infra-1.0.4-17.el8rhgs.noarch

[node.example.com]# imgbase w
You are on rhvh-4.4.3.1-0.20201112.0+1

Moving this bug to VERIFIED.
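For completeness, a hypothetical ad-hoc check that could confirm no stale filter line remains after deployment (this task is illustrative only and is not part of the gluster.infra role):

<snip>
# Illustrative verification task: fail if a 'filter =' line is still
# present in /etc/lvm/lvm.conf on the deployed host.
- name: Verify the LVM filter has been cleared
  command: grep -E '^\s*filter\s*=' /etc/lvm/lvm.conf
  register: lvm_filter_check
  changed_when: false
  failed_when: lvm_filter_check.rc == 0
</snip>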
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (gluster-ansible bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5220