Bug 1438386
| Summary: | [RFE] - provide a way for the user to replace host which has both virt and gluster services enabled from UI | | |
|---|---|---|---|
| Product: | [oVirt] ovirt-engine | Reporter: | RamaKasturi <knarra> |
| Component: | BLL.Gluster | Assignee: | Prajith <pkesavap> |
| Status: | CLOSED DEFERRED | QA Contact: | SATHEESARAN <sasundar> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 4.1.1.2 | CC: | bugs, godas, knarra, mtessun, pkesavap, sasundar |
| Target Milestone: | --- | Flags: | pm-rhel: ovirt-4.4+ |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-01-11 05:27:18 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Gluster | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1633126 | | |
| Attachments: | host-deploy logfile, engine.log, vdsm.log | | |
Description RamaKasturi 2017-04-03 10:09:30 UTC
This bug has not been marked as blocker for oVirt 4.3.0. Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

Sachi, does the replace host role handle the case of SSL certificates too?

(In reply to Sahina Bose from comment #3)
> Sachi, does the replace host role handle the case of SSL certificates too?

No, it doesn't as of now.

The role has been provided and integrated on the rhhi-engine side, and it is embedded in the reinstall flow. The role requires three parameters, and two conditions must be satisfied to run the playbook: the cluster must be gluster-supported, and that same gluster-supported cluster must contain at least three hosts. The parameters are:
(i) oldNode: the node that needs to be reinstalled
(ii) clusterNode_1: the first maintenance node
(iii) clusterNode_2: the second maintenance node

As of now, this reconfigure-gluster role is triggered whenever the cluster is gluster-supported. It is included in the reinstall flow, i.e. every time a node is reinstalled from the rhhi-engine side, the node's gluster configuration is rebuilt and the node is added back to the gluster peer network. (A front-end checkbox is a work in progress; it lets customers choose whether the selected node should have gluster reconfigured while reinstalling, meaning reconfigureGluster is only called if the customer ticks the checkbox, otherwise the node is reinstalled as usual.)

What the role does: it deletes the existing gluster directory and removes the affected node from the gluster peer network. It then fetches the peer info and other details of the corrupted node from the other two nodes, reconfigures it, and adds it back to the gluster peer network, after which self-heal runs and all the nodes end up in sync.
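For illustration only, a minimal sketch of how such a role might be invoked and of the gluster-side steps the comment above describes. The playbook name, host names, volume name, and the /var/lib/glusterd wipe are assumptions; only the variable names oldNode, clusterNode_1 and clusterNode_2 come from the comment.

```bash
# Hypothetical invocation of the reconfigure-gluster role (playbook name
# and host names are placeholders):
ansible-playbook reconfigure_gluster.yml \
  -e "oldNode=host3.example.com" \
  -e "clusterNode_1=host1.example.com" \
  -e "clusterNode_2=host2.example.com"

# Roughly the gluster-side operations the comment describes, written as
# manual CLI steps (volume name "data" is an assumption):
gluster peer detach host3.example.com force   # remove the affected node from the peer network
rm -rf /var/lib/glusterd/*                    # on the failed node: wipe the stale gluster config (assumed location)
# ...restore peer info and volume metadata from the two healthy nodes...
gluster peer probe host3.example.com          # add the node back to the peer network
gluster volume heal data full                 # trigger self-heal so all nodes get back in sync
```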
Moving to QE, assuming that the fixes are included in ovirt-engine-4.4.0-0.0.master.20200315172200.gitb5b5c99ca2f, since the bug moved to MODIFIED on 2020-03-11 but no patch is linked from this bug.

(In reply to Sandro Bonazzola from comment #6)
> Moving to QE, assuming that the fixes are included in
> ovirt-engine-4.4.0-0.0.master.20200315172200.gitb5b5c99ca2f since bug moved
> to modified on 2020-03-11 but no patch is linked from this bug.

This feature failed qualification: it does not consider the presence of two networks, and it misses the step to perform 'gluster volume replace-brick'. These changes will be accommodated with slight changes in the design to translate the front-end FQDN/IP to the back-end FQDN/IP. So we are in agreement to fix this bug for ovirt-4.4.z.

The existing RHV Manager UI option to 'Reconfigure Gluster' during 'reinstallation' of the host in the cluster will be disabled and tracked as part of this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1840083
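For clarity, a hedged example of the 'gluster volume replace-brick' step that the comment above says is missing from the flow. The volume name, host names and brick paths are assumptions, not values taken from this bug.

```bash
# Swap the brick that lived on the failed host for the brick on the newly
# reinstalled host (volume name, host names and brick paths are placeholders):
gluster volume replace-brick data \
    failed-host.example.com:/gluster_bricks/data/data \
    new-host.example.com:/gluster_bricks/data/data \
    commit force

# Self-heal then populates the new brick from the remaining replicas.
gluster volume heal data full
```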
This bug is not marked as blocker and should be re-targeted to 4.4.3.

(In reply to Sandro Bonazzola from comment #8)
> This bug is not marked as blocker and should be re-targeted to 4.4.3

Hi Sandro, yes, this bug is not a blocker for ovirt-4.4.2. Moving it to 4.4.3.

Changing the status back to ASSIGNED since the code was reverted due to a Jenkins OST break, ref: https://gerrit.ovirt.org/#/c/111027/

Tested with RHV 4.4.3 (4.4.3.12-0.1.el8ev), but the replace host procedure failed. Tested with the following steps:
1. Created a 3-node RHHI-V deployment
2. Created a separate gluster network and attached it to the HC hosts
3. Simulated a failure of one of the nodes by abruptly turning off that node
4. Reinstalled the OS on that node, without formatting the other disks
5. Copied the authorized_keys from the other hosts to the newly installed host
6. Removed the LVM filter from /etc/lvm/lvm.conf (steps 5 and 6 are sketched after this comment)
7. Performed the replace host procedure from the RHV Administration Portal, and it failed.

The relevant logs will be attached soon.
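A minimal sketch of what steps 5 and 6 of the test procedure above might look like on the freshly reinstalled node. The source host name and the exact LVM filter line are assumptions.

```bash
# Step 5 (assumption: pull the shared authorized_keys from a healthy host):
scp root@host1.example.com:/root/.ssh/authorized_keys /root/.ssh/authorized_keys

# Step 6 (assumption: the previous deployment left a "filter = [...]" line
# in /etc/lvm/lvm.conf that must be dropped before redeploying):
sed -i '/^\s*filter = \[/d' /etc/lvm/lvm.conf
```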
As this bug is not a blocker for the release, retargeting this bug for RHV 4.4.4.

The replace host playbook still works fine for replacing the same host.

Target release should be placed once a package build is known to fix an issue. Since this bug is not in MODIFIED, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Created attachment 1731314 [details]
host-deploy logfile

Created attachment 1731315 [details]
engine.log
Created attachment 1731316 [details]
vdsm.log
For now, moving this bug out of ovirt-4.4.4 as decided in the team meeting.

@Gobinda, could you close this bug, removing all the acks?

Closing this bug as right now we don't have a plan for this RFE.