Bug 1854760 - [RHV 4.3] Failure in adding hosts to the RHV default cluster as part of RHHI-V deployment
Summary: [RHV 4.3] Failure in adding hosts to the RHV default cluster as part of RHHI-V deployment
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.7
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: RHHI-V 1.7.z Async Update
Assignee: Gobinda Das
QA Contact: milind
URL:
Whiteboard:
Depends On: 1855283 1855361
Blocks:
 
Reported: 2020-07-08 08:14 UTC by milind
Modified: 2020-11-17 12:46 UTC
CC List: 3 users

Fixed In Version: rhvh-4.3.11.1-0.20200713.0+1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-17 12:46:11 UTC
Embargoed:


Attachments
screenshot (79.15 KB, image/png)
2020-07-08 08:41 UTC, milind

Description milind 2020-07-08 08:14:27 UTC
Description of problem:
In a 3-node RHHI-V deployment, only a single host is shown in the UI.
 
=================================
Version-Release number of selected component (if applicable):
[node]# imgbase w
You are on rhvh-4.3.11.1-0.20200701.0+1

========================================
How reproducible:
 always
=========================================
Steps to Reproduce:
 1. Complete the Gluster deployment
 2. Complete the Hosted Engine (HE) deployment
 3. Check the UI under Compute >> Hosts (a command-line alternative is sketched below)
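
(Not part of the original report: a minimal sketch of checking the registered hosts from the command line instead of the UI; the engine FQDN and admin password are placeholders for the actual deployment.)

# List the hosts known to the engine over the REST API; a healthy 3-node
# RHHI-V deployment should return three host entries.
curl -sk --user admin@internal:password \
    https://engine.example.com/ovirt-engine/api/hosts | grep '<name>'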
=========================================
Actual results:
 Only a single host is available in the 3-node deployment
========================================
Expected results:
 All 3 nodes should be available
=======================================
Additional info:

glusterfs-rdma-6.0-37.1.el7rhgs.x86_64
glusterfs-cli-6.0-37.1.el7rhgs.x86_64
glusterfs-client-xlators-6.0-37.1.el7rhgs.x86_64
gluster-ansible-roles-1.0.5-7.1.el7rhgs.noarch
glusterfs-fuse-6.0-37.1.el7rhgs.x86_64
vdsm-gluster-4.30.49-1.el7ev.x86_64
glusterfs-6.0-37.1.el7rhgs.x86_64
glusterfs-geo-replication-6.0-37.1.el7rhgs.x86_64
gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
gluster-ansible-repositories-1.0.1-1.el7rhgs.noarch
glusterfs-events-6.0-37.1.el7rhgs.x86_64
glusterfs-libs-6.0-37.1.el7rhgs.x86_64
gluster-ansible-features-1.0.5-5.el7rhgs.noarch
gluster-ansible-cluster-1.0-1.el7rhgs.noarch
gluster-ansible-infra-1.0.4-5.el7rhgs.noarch
glusterfs-server-6.0-37.1.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-36.el7.x86_64
glusterfs-api-6.0-37.1.el7rhgs.x86_64
python2-gluster-6.0-37.1.el7rhgs.x86_64

[node]# cat /etc/redhat-release
Red Hat Enterprise Linux release 7.9

Comment 3 milind 2020-07-08 08:41:02 UTC
Created attachment 1700260 [details]
screenshot

Comment 4 SATHEESARAN 2020-07-09 02:39:13 UTC
This issue was not seen with the older version of ansible (ansible-2.9.9),
but is seen with ansible-2.9.10.
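
(Not from the bug itself: a quick way to confirm which ansible build a node is running before deployment, since ansible-2.9.10 reproduces the issue while ansible-2.9.9 does not.)

# Standard rpm query; on the affected nodes this reports ansible-2.9.10.
rpm -q ansible ovirt-ansible-hosted-engine-setup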

Comment 5 SATHEESARAN 2020-07-10 10:53:58 UTC
Patch posted: https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/pull/342
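
(Not mentioned in the bug: until the patched image is available, one possible manual workaround is to add the missing hosts to the Default cluster through the engine REST API. The host name, address, root password, and engine FQDN below are placeholders.)

# Repeat once for each host missing from Compute >> Hosts.
curl -sk --user admin@internal:password \
    -H 'Content-Type: application/xml' \
    -d '<host>
          <name>host2</name>
          <address>host2.example.com</address>
          <root_password>secret</root_password>
          <cluster><name>Default</name></cluster>
        </host>' \
    https://engine.example.com/ovirt-engine/api/hosts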

Comment 6 SATHEESARAN 2020-07-14 14:16:19 UTC
The fix is available with the RHVH image rhvh-4.3.11.1-0.20200713.0+1 and ovirt-ansible-hosted-engine-setup-1.0.37-1.el7ev.noarch.

Comment 7 milind 2020-07-14 14:33:15 UTC
[root@node1 ~]# rpm -qa | grep ansible
ovirt-ansible-engine-setup-1.1.9-1.el7ev.noarch
gluster-ansible-roles-1.0.5-7.2.el7rhgs.noarch
ovirt-ansible-repositories-1.1.5-1.el7ev.noarch
gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
gluster-ansible-repositories-1.0.1-1.el7rhgs.noarch
ansible-2.9.10-1.el7ae.noarch
gluster-ansible-features-1.0.5-5.el7rhgs.noarch
gluster-ansible-cluster-1.0-1.el7rhgs.noarch
gluster-ansible-infra-1.0.4-5.el7rhgs.noarch
ovirt-ansible-hosted-engine-setup-1.0.37-1.el7ev.noarch

[root@node1 ~]# rpm -qa | grep gluster
glusterfs-rdma-6.0-37.1.el7rhgs.x86_64
gluster-ansible-roles-1.0.5-7.2.el7rhgs.noarch
glusterfs-cli-6.0-37.1.el7rhgs.x86_64
glusterfs-client-xlators-6.0-37.1.el7rhgs.x86_64
glusterfs-fuse-6.0-37.1.el7rhgs.x86_64
vdsm-gluster-4.30.50-1.el7ev.x86_64
glusterfs-6.0-37.1.el7rhgs.x86_64
glusterfs-geo-replication-6.0-37.1.el7rhgs.x86_64
gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
gluster-ansible-repositories-1.0.1-1.el7rhgs.noarch
glusterfs-events-6.0-37.1.el7rhgs.x86_64
glusterfs-libs-6.0-37.1.el7rhgs.x86_64
gluster-ansible-features-1.0.5-5.el7rhgs.noarch
gluster-ansible-cluster-1.0-1.el7rhgs.noarch
gluster-ansible-infra-1.0.4-5.el7rhgs.noarch
glusterfs-server-6.0-37.1.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-36.el7.x86_64
glusterfs-api-6.0-37.1.el7rhgs.x86_64
python2-gluster-6.0-37.1.el7rhgs.x86_64

[root@node1 ~]# imgbase w
You are on rhvh-4.3.11.1-0.20200713.0+1


All hosts are up and running, hence marking this bug as VERIFIED.

