Bug 1715952
| Field | Value | Field | Value |
|---|---|---|---|
| Summary: | Single node RHHI-V deployment results in the host being added twice to the cluster, once with the back-end and once with the front-end FQDN | | |
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | SATHEESARAN <sasundar> |
| Component: | rhhi | Assignee: | Sahina Bose <sabose> |
| Status: | CLOSED ERRATA | QA Contact: | Mugdha Soni <musoni> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhhiv-1.6 | CC: | godas, rhs-bugs |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHHI-V 1.6.z Async Update | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | Previously, a hyperconverged host was added to the Red Hat Virtualization default cluster using both its front-end and its back-end FQDN, which led to the back-end deployment being marked as a failure. Now, only the front-end FQDN is used, resolving this issue. (See the verification sketch after this table.) | | |
| Story Points: | --- | | |
| Clone Of: | | | |
| Clones: | 1715959 (view as bug list) | Environment: | |
| Last Closed: | 2019-10-03 12:23:57 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1715959 | | |
| Bug Blocks: | | | |
| Attachments: | | | |
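For illustration only (this sketch is not part of the original report): a minimal check, assuming the Python oVirt SDK (ovirtsdk4), that a hyperconverged host appears in the cluster exactly once, under its front-end FQDN. The engine URL, credentials, and both FQDNs are hypothetical placeholders.

```python
# Minimal sketch, assuming ovirtsdk4 is installed and the engine is reachable.
# Engine URL, credentials, and FQDNs are hypothetical placeholders.
import ovirtsdk4 as sdk

FRONTEND_FQDN = 'host1.front.example.com'  # hypothetical front-end FQDN
BACKEND_FQDN = 'host1.back.example.com'    # hypothetical back-end (storage) FQDN

connection = sdk.Connection(
    url='https://rhvm.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
try:
    # List the names of all hosts registered with the engine.
    names = [h.name for h in connection.system_service().hosts_service().list()]
    # After the fix, the host should be registered once, by its front-end FQDN only.
    assert names.count(FRONTEND_FQDN) == 1, 'host not registered by front-end FQDN'
    assert BACKEND_FQDN not in names, 'host also registered by back-end FQDN'
    print('OK: single host entry, front-end FQDN only')
finally:
    connection.close()
```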
Description SATHEESARAN 2019-05-31 18:00:41 UTC
There exists a simple workaround: remove the host with 'Install Failed' status from the cluster (a scripted sketch of this removal follows the attachment note below).

Created attachment 1611476 [details]
Screenshot for verification
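As a hedged illustration of the workaround above (again, not from the original report), the following sketch removes any host left in 'Install Failed' status via ovirtsdk4; the connection details are the same hypothetical placeholders as in the earlier sketch.

```python
# Workaround sketch, assuming ovirtsdk4; connection details are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://rhvm.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
try:
    hosts_service = connection.system_service().hosts_service()
    for host in hosts_service.list():
        if host.status == types.HostStatus.INSTALL_FAILED:
            # Drop the duplicate host entry whose installation failed.
            hosts_service.host_service(host.id).remove()
            print('Removed host %s (Install Failed)' % host.name)
finally:
    connection.close()
```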
Tested with the following:

1. rhvh-4.3.6.2-0.20190821
2. gluster-ansible-repositories-1.0.1-1.el7rhgs.noarch
   glusterfs-cli-3.12.2-47.4.el7rhgs.x86_64
   glusterfs-rdma-3.12.2-47.4.el7rhgs.x86_64
   glusterfs-3.12.2-47.4.el7rhgs.x86_64
   glusterfs-api-3.12.2-47.4.el7rhgs.x86_64
   python2-gluster-3.12.2-47.4.el7rhgs.x86_64
   gluster-ansible-maintenance-1.0.1-1.el7rhgs.noarch
   glusterfs-events-3.12.2-47.4.el7rhgs.x86_64
   vdsm-gluster-4.30.29-2.el7ev.x86_64
   glusterfs-libs-3.12.2-47.4.el7rhgs.x86_64
   gluster-ansible-cluster-1.0-1.el7rhgs.noarch
   gluster-ansible-infra-1.0.4-3.el7rhgs.noarch
   gluster-ansible-roles-1.0.5-4.el7rhgs.noarch
   glusterfs-fuse-3.12.2-47.4.el7rhgs.x86_64
   glusterfs-geo-replication-3.12.2-47.4.el7rhgs.x86_64
   gluster-ansible-features-1.0.5-3.el7rhgs.noarch
   glusterfs-client-xlators-3.12.2-47.4.el7rhgs.x86_64
3. cockpit-ovirt-dashboard-0.13.6-1.el7ev

After the successful single node deployment, logging into RHVM showed one host in the cluster, added using the front-end FQDN. Hence, moving the bug to the verified state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2963