Description of problem:
In a 3 node RHHI deployment, only a single node is shown in the UI.

Version-Release number of selected component (if applicable):
[node]# imgbase w
You are on rhvh-4.3.11.1-0.20200701.0+1

How reproducible:
Always

Steps to Reproduce:
1. Do the Gluster deployment
2. Do the HE deployment
3. Check the UI under Compute >> Hosts (a REST API cross-check is sketched below)

Actual results:
A single host is available in the 3 node deployment.

Expected results:
All 3 nodes should be available.

Additional info:
glusterfs-rdma-6.0-37.1.el7rhgs.x86_64
glusterfs-cli-6.0-37.1.el7rhgs.x86_64
glusterfs-client-xlators-6.0-37.1.el7rhgs.x86_64
gluster-ansible-roles-1.0.5-7.1.el7rhgs.noarch
glusterfs-fuse-6.0-37.1.el7rhgs.x86_64
vdsm-gluster-4.30.49-1.el7ev.x86_64
glusterfs-6.0-37.1.el7rhgs.x86_64
glusterfs-geo-replication-6.0-37.1.el7rhgs.x86_64
gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
gluster-ansible-repositories-1.0.1-1.el7rhgs.noarch
glusterfs-events-6.0-37.1.el7rhgs.x86_64
glusterfs-libs-6.0-37.1.el7rhgs.x86_64
gluster-ansible-features-1.0.5-5.el7rhgs.noarch
gluster-ansible-cluster-1.0-1.el7rhgs.noarch
gluster-ansible-infra-1.0.4-5.el7rhgs.noarch
glusterfs-server-6.0-37.1.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-36.el7.x86_64
glusterfs-api-6.0-37.1.el7rhgs.x86_64
python2-gluster-6.0-37.1.el7rhgs.x86_64

[node]# cat /etc/redhat-release
Red Hat Enterprise Linux release 7.9
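Besides the UI, the engine's view of the hosts can be cross-checked over the oVirt REST API. A minimal sketch, assuming a hypothetical engine FQDN engine.example.com and the admin@internal credentials; in a healthy 3 node deployment, three <name> entries should come back, whereas with this bug only one would be expected:

[node]# curl -s -k -u 'admin@internal:PASSWORD' \
        https://engine.example.com/ovirt-engine/api/hosts | grep '<name>'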
Created attachment 1700260: screenshot
This issue was not seen with the older Ansible version, ansible-2.9.9, but is seen with ansible-2.9.10.
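Until a build with the fix is installed, a possible interim workaround (an assumption, not a documented procedure) is to pin Ansible at the known-good 2.9.9 on the node before deploying:

[node]# yum downgrade -y ansible-2.9.9        # back to the version without the regression
[node]# yum install -y yum-plugin-versionlock # provides the versionlock subcommand
[node]# yum versionlock add ansible           # keep yum from pulling 2.9.10 back in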
Patch posted: https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/pull/342
The fix is available with the RHVH image rhvh-4.3.11.1-0.20200713.0+1 and ovirt-ansible-hosted-engine-setup-1.0.37-1.el7ev.noarch.
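One way to confirm that an installed node already carries the fix is to check the role package version and its changelog (a sketch; the exact changelog wording is an assumption):

[node]# rpm -q ovirt-ansible-hosted-engine-setup   # should be >= 1.0.37-1.el7ev
[node]# rpm -q --changelog ovirt-ansible-hosted-engine-setup | head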
[root@node1 ~]# rpm -qa | grep ansible
ovirt-ansible-engine-setup-1.1.9-1.el7ev.noarch
gluster-ansible-roles-1.0.5-7.2.el7rhgs.noarch
ovirt-ansible-repositories-1.1.5-1.el7ev.noarch
gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
gluster-ansible-repositories-1.0.1-1.el7rhgs.noarch
ansible-2.9.10-1.el7ae.noarch
gluster-ansible-features-1.0.5-5.el7rhgs.noarch
gluster-ansible-cluster-1.0-1.el7rhgs.noarch
gluster-ansible-infra-1.0.4-5.el7rhgs.noarch
ovirt-ansible-hosted-engine-setup-1.0.37-1.el7ev.noarch

[root@node1 ~]# rpm -qa | grep gluster
glusterfs-rdma-6.0-37.1.el7rhgs.x86_64
gluster-ansible-roles-1.0.5-7.2.el7rhgs.noarch
glusterfs-cli-6.0-37.1.el7rhgs.x86_64
glusterfs-client-xlators-6.0-37.1.el7rhgs.x86_64
glusterfs-fuse-6.0-37.1.el7rhgs.x86_64
vdsm-gluster-4.30.50-1.el7ev.x86_64
glusterfs-6.0-37.1.el7rhgs.x86_64
glusterfs-geo-replication-6.0-37.1.el7rhgs.x86_64
gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
gluster-ansible-repositories-1.0.1-1.el7rhgs.noarch
glusterfs-events-6.0-37.1.el7rhgs.x86_64
glusterfs-libs-6.0-37.1.el7rhgs.x86_64
gluster-ansible-features-1.0.5-5.el7rhgs.noarch
gluster-ansible-cluster-1.0-1.el7rhgs.noarch
gluster-ansible-infra-1.0.4-5.el7rhgs.noarch
glusterfs-server-6.0-37.1.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-36.el7.x86_64
glusterfs-api-6.0-37.1.el7rhgs.x86_64
python2-gluster-6.0-37.1.el7rhgs.x86_64

[root@node1 ~]# imgbase w
You are on rhvh-4.3.11.1-0.20200713.0+1

All 3 hosts are up and running, hence marking this bug as verified.
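For completeness, cluster health can also be cross-checked from the node side with standard commands (expected output summarized as comments, assuming a healthy 3 node setup):

[root@node1 ~]# gluster peer status        # should show 2 peers, both in state Connected
[root@node1 ~]# hosted-engine --vm-status  # should list all 3 hosts, one running the engine VM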