Bug 1244935
Summary: Force remove is missing in hosts tab in mixed gluster/virt mode.

Product: [oVirt] ovirt-engine
Reporter: Mario Ohnewald <mo>
Component: Frontend.WebAdmin
Assignee: Sahina Bose <sabose>
Status: CLOSED CURRENTRELEASE
QA Contact: SATHEESARAN <sasundar>
Severity: high
Docs Contact:
Priority: unspecified
Version: ---
CC: bugs, gklein, kmayilsa, lsurette, mgoldboi, michal.skrivanek, mo, rbalakri, sabose, yeylon, ykaul, ylavi
Target Milestone: ovirt-3.6.1
Flags: ylavi: ovirt-3.6.z?, ylavi: planning_ack?, rule-engine: devel_ack+, rule-engine: testing_ack+
Target Release: 3.6.1
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: 3.6.1
Doc Type: Bug Fix
Doc Text:
Cause: The "Force remove" option was hidden for hosts in mixed-mode (virt + gluster) clusters.
Consequence: Users were unable to clean out a cluster even when all of its hosts were in maintenance mode.
Fix: The "Force remove" option is now enabled for a host whenever the gluster service is enabled on its cluster.
Result: The cluster can be cleaned up in the engine once the underlying cluster has been cleaned up or is no longer used.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-03-11 07:19:36 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Gluster
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
Description
Mario Ohnewald
2015-07-20 19:36:01 UTC

Anyone? :-/ Can you please add the info on how to resolve this?

To remove gluster hosts and volumes from the oVirt engine once the cluster has been wiped out offline, follow these steps:

1. Move all hosts in the cluster (where the gluster service was enabled) to maintenance mode.
2. Remove the hosts, checking the "Force remove" checkbox at the bottom left of the remove-host popup. (This bypasses the UP server check and only removes the host entries from the database; it does not try to execute the "gluster peer detach" command on the gluster node.)

Please try this, and close the bug if it works for you.

Created attachment 1061890 [details]
"Force remove" checkbox is missing

"(...) Remove hosts, and check the 'Force remove' checkbox that's at the bottom left of the remove host popup (...)"

=> There is no "Force remove" checkbox
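As noted above, a force remove skips the "gluster peer detach" call, so any surviving gluster node may still list the removed host as a peer. A minimal sketch of the manual cleanup, assuming the peer hostname `node01.example.com` is a placeholder; it is written as a dry run that only prints the commands, since actually running them requires a live gluster node:

```shell
# Hypothetical peer hostname -- replace with the removed node's name.
DEAD_PEER="node01.example.com"

# Commands to run on a surviving gluster node. The trailing 'force'
# lets the detach proceed even though the peer is unreachable.
STATUS_CMD="gluster peer status"
DETACH_CMD="gluster peer detach $DEAD_PEER force"

# Dry run: print the commands instead of executing them.
echo "$STATUS_CMD"
echo "$DETACH_CMD"
```

Running `gluster peer status` first confirms which peers are still registered before detaching the dead one.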
Aah. I see that this checkbox is not shown if you have both the virt and gluster services enabled on the cluster. This is a bug, and we will address it. Meanwhile, as a workaround, you could try editing the cluster and unchecking "Enable virt service"; this will allow you to remove the hosts.

Kanagaraj, can we add the force-remove checkbox in case the cluster is running the gluster service (with or without the virt service)?

Created attachment 1061912 [details]
Enable Virt Service is greyed out

The "Enable Virt Service" checkbox is greyed out; I cannot uncheck it. Maybe because a template is still assigned to it?

IIRC, "Virt Service" is greyed out only when there are VMs in that cluster.

Created attachment 1062352 [details]
i can not remove the vms

Yes, there are VMs in that cluster, but I cannot remove them due to the storage error.
Omer, do you know how the user can delete the VMs in the cluster?

(In reply to Mario Ohnewald from comment #8)
> Created attachment 1062352 [details]
> i can not remove the vms
>
> Yes, there are VMs in that cluster. But i can not remove them due to the
> Storage error

I assume the storage domain is still listed in the webadmin (under Storage Domains). If the storage is really gone, you can force remove the storage domain; this should also remove the associated VMs.

Is your issue resolved?

Created attachment 1081999 [details]
CanNotRemoveStorage01

Created attachment 1082000 [details]
CanNotRemoveStorage02
Target release should be set once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Hello Omer, please have a look at my first post. I also made a little video here to explain the problem: https://www.youtube.com/watch?v=Mjib8BlIDcM

In a nutshell:
- Node01 and Node02 are dead/gone.
- I was able to remove the DataCenter which they belonged to.
- I was able to remove the VMs which belonged to it.
- I am not able to remove the hosts.
- I am not able to remove the volume.

Ok, in comment 10 I was talking about storage domains and VMs, not gluster volumes and hosts, so I assume Sahina's patch should solve this (at least for the hosts; not sure about the volume). Please see https://gerrit.ovirt.org/#/c/45848/

Sahina, if we want this in 3.6, I assume a backport is needed?

This is an automated message. oVirt 3.6.0 RC3 has been released and GA is targeted for next week, Nov 4th 2015. Please review this bug and, if it is not a blocker, please postpone it to a later release. All bugs not postponed on GA release will be automatically re-targeted to:
- 3.6.1 if severity >= high
- 4.0 if severity < high

(In reply to Omer Frenkel from comment #16)
> Ok, in comment 10 i was talking about storage domains and vms, not gluster
> volumes and hosts.
> so i assume Sahina's patch should solve this (at least the hosts, not sure
> about the volume)

It should remove the gluster volumes from the engine database as well, if force remove is being performed.

> please see
> https://gerrit.ovirt.org/#/c/45848/
>
> Sahina, if we want this in 3.6 i assume a backport is needed?

Yes, a backport to the 3.6 branch has been submitted.

This bug is not marked for z-stream, yet the milestone is for a z-stream version, therefore the milestone has been reset. Please set the correct milestone or add the z-stream flag.

Bug tickets that are moved to testing must have the target release set, to make sure the tester knows what to test. Please set the correct target release before moving to ON_QA.

Tested with RHEV 3.6.3.3 and RHGS 3.1.2 RC (glusterfs-3.7.5-19.el7rhgs):

1. Created a gluster + virt cluster.
2. Added a few hypervisors with gluster installed on them, at 3.5 cluster compatibility.
3. Randomly powered off a node and tried to remove it from the cluster.

Observation: after the host was moved to maintenance, the "Force remove" option was available to remove the host from the cluster, even with the volume still existing.
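For reference, the force remove verified above can also be driven through the oVirt REST API instead of the webadmin checkbox: a DELETE on the host resource with a `<force>true</force>` action body. The engine URL, credentials, and host UUID below are placeholders, and the sketch is a dry run that prints the request rather than sending it (sending it requires a live engine):

```shell
# Placeholder values -- substitute your engine FQDN, credentials,
# and the UUID of the host stuck in maintenance.
ENGINE_API="https://engine.example.com/ovirt-engine/api"
HOST_ID="00000000-0000-0000-0000-000000000000"

# DELETE with an action body carrying force=true; this mirrors the
# "Force remove" checkbox in the webadmin remove-host popup.
REQUEST="curl -s -X DELETE -u admin@internal:PASSWORD \
  -H 'Content-Type: application/xml' \
  -d '<action><force>true</force></action>' \
  $ENGINE_API/hosts/$HOST_ID"

# Dry run: print the request instead of sending it.
echo "$REQUEST"
```

As with the UI path, this only removes the host entry from the engine database; any cleanup on the gluster side still has to be done separately.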