Description of problem:
-----------------------
RHHI-V automated deployment using the gluster-ansible and ovirt-ansible roles fails on the very first attempt, then succeeds when redeploying. This issue is not seen when deploying using Cockpit.

Version-Release number of selected component (if applicable):
--------------------------------------------------------------
RHVH 4.3.8
gluster-6.0-29
ovirt-ansible-engine-setup-1.1.9-1.el7ev.noarch
ovirt-ansible-repositories-1.1.5-1.el7ev.noarch
ovirt-ansible-hosted-engine-setup-1.0.32-1.el7ev.noarch
gluster-ansible-roles-1.0.5-7.el7rhgs.noarch
gluster-ansible-repositories-1.0.1-1.el7rhgs.noarch
gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
gluster-ansible-features-1.0.5-5.el7rhgs.noarch
gluster-ansible-infra-1.0.4-5.el7rhgs.noarch
gluster-ansible-cluster-1.0-1.el7rhgs.noarch

How reproducible:
------------------
Always

Steps to Reproduce:
--------------------
1. Provision 3 machines with RHVH 4.3.8 installed on them
2. Start the RHHI-V deployment in an automated way using the gluster-ansible roles

Actual results:
----------------
Gluster configuration is successful, but the Hosted Engine (HE) deployment fails

Expected results:
------------------
Both Gluster configuration and Hosted Engine (HE) deployment should be successful
This issue is not a blocker, but can be treated as a known issue for now.

1. When starting an automated deployment with the ansible roles, Hosted Engine deployment will fail.

2. Clean up the ovirt-hosted-engine setup on the host from which deployment was attempted:
# ovirt-hosted-engine-cleanup -q

3. Set the hostname to the front-end FQDN:
# hostnamectl set-hostname <Front-end-FQDN>

4. Restart only the Hosted Engine deployment; the existing gluster deployment can still be used:
# ansible-playbook -i /root/gluster_inventory.yml /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/he_deployment.yml --extra-vars='@/etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/he_gluster_vars.json'
This issue is mainly caused by the AVC denial thrown while the webhook is added. I added the webhook by hand prior to deployment, and the failure is not seen.
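As a rough sketch of the manual workaround mentioned above: RHHI-V deployment registers a gluster events webhook that points back at the engine. The endpoint path and the FQDN below are assumptions for illustration (verify against your engine before using); the `gluster-eventsapi` commands are shown as comments because they must run on a gluster storage node against a live engine.

```shell
# Sketch only -- the endpoint path and placeholder FQDN are assumptions,
# not taken verbatim from this bug report.
ENGINE_FQDN="engine.example.com"   # placeholder: your engine front-end FQDN
WEBHOOK_URL="http://${ENGINE_FQDN}/ovirt-engine/services/glusterevents"
echo "${WEBHOOK_URL}"

# On a gluster storage node, the webhook would then be registered and
# verified with (commented out -- requires a live gluster/engine setup):
#   gluster-eventsapi webhook-add "${WEBHOOK_URL}"
#   gluster-eventsapi status
```

If the webhook is already registered this way before `he_deployment.yml` runs, the AVC-triggered failure during webhook addition should not occur.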
RDT looks good.
I've had the same issue with my home RHV 4.3 / RHHI 1.7 lab:

cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment
ansible-playbook -i gluster_inventory.yml tasks/gluster_deployment.yml --extra-vars='@he_gluster_vars.json'
ansible-playbook -i gluster_inventory.yml tasks/he_deployment.yml --extra-vars='@he_gluster_vars.json'

The install fails with:

TASK [ovirt.hosted_engine_setup : Add glusterfs storage domain] ******************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Error: Fault reason is "Operation Failed". Fault detail is "[Cannot add Storage Connection. Host nuc2.redpill.nz cannot connect to Glusterfs. Verify that glusterfs-cli package is installed on the host.]". HTTP response code is 409.
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot add Storage Connection. Host nuc2.redpill.nz cannot connect to Glusterfs. Verify that glusterfs-cli package is installed on the host.]\". HTTP response code is 409."}

Then I run:

ovirt-hosted-engine-cleanup
ansible-playbook -i gluster_inventory.yml tasks/he_deployment.yml --extra-vars='@he_gluster_vars.json'

and the hosted engine is deployed correctly.

Can you provide details on how to create the webhook manually to avoid the issue?
Targeting this bug for RHHI-V 1.8
Verified with RHVH 4.4.1 and gluster-ansible-roles-1.0.5-12.el8rhgs.

CLI-based RHHI-V deployment using the ansible playbooks now deploys RHHI-V successfully.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (RHHI for Virtualization 1.8 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2020:3314