Description of problem:
---------------------------------------------------------------
I have a 2-node replica Gluster cluster in a datacenter which I want to remove completely. I already turned those two nodes off and wiped them. They are now gone, but still present in my oVirt engine web GUI, so I want to remove them.

1.) I put both nodes into maintenance and removed the datacenter and the storage under the Storage tab. But now I am stuck removing the cluster, the hosts and the Gluster volume:

a) When I try to remove the cluster:
Error while executing action: Cannot remove Cluster. Host Cluster contains one or more Hosts.

b) When I try to remove the hosts:
node1: Cannot remove Host. Server having Gluster volume.

c) When I try to remove the volume:
Error while executing action: Cannot stop Gluster Volume. No up server found in Cluster_II.

Looks like I am trapped in a loop.

Version-Release number of selected component (if applicable):
---------------------------------------------------------------
ovirt-engine-sdk-python-3.5.2.1-1.el6.noarch
ovirt-engine-websocket-proxy-3.5.2.1-1.el6.noarch
ovirt-engine-tools-3.5.2.1-1.el6.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.5.2.1-1.el6.noarch
ovirt-engine-extensions-api-impl-3.5.2.1-1.el6.noarch
ovirt-image-uploader-3.5.1-1.el6.noarch
ovirt-release35-003-1.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.5.2.1-1.el6.noarch
ovirt-engine-backend-3.5.2.1-1.el6.noarch
ovirt-engine-cli-3.5.0.5-1.el6.noarch
ovirt-engine-lib-3.5.2.1-1.el6.noarch
ovirt-engine-setup-base-3.5.2.1-1.el6.noarch
ovirt-engine-setup-3.5.2.1-1.el6.noarch
ovirt-iso-uploader-3.5.2-1.el6.noarch
ovirt-engine-userportal-3.5.2.1-1.el6.noarch
ovirt-engine-3.5.2.1-1.el6.noarch
ovirt-host-deploy-1.3.1-1.el6.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.5.2.1-1.el6.noarch
ovirt-engine-webadmin-portal-3.5.2.1-1.el6.noarch
ovirt-host-deploy-java-1.3.1-1.el6.noarch
ovirt-log-collector-3.5.2-1.el6.noarch
ovirt-engine-dbscripts-3.5.2.1-1.el6.noarch
ovirt-engine-jboss-as-7.1.1-1.el6.x86_64
ovirt-engine-restapi-3.5.2.1-1.el6.noarch

Additional info:
---------------------------------------------------------------
I already tried to get help here: http://lists.ovirt.org/pipermail/users/2015-July/033604.html
Anyone? :-/
Can you please add the info on how to resolve this?
To remove gluster hosts and volumes from the oVirt engine once the cluster has been wiped out offline, follow the steps below:

1. Move all hosts in the cluster (where the gluster service was enabled) to maintenance mode.
2. Remove the hosts, checking the "Force remove" checkbox at the bottom left of the remove-host popup. (This bypasses the UP-server check and only removes the host entries from the database; it will not try to execute the "gluster peer detach" command on the gluster node.)

Please try this, and close the bug if it works for you.
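For anyone scripting this instead of clicking through the UI: a minimal sketch of what the "Force remove" checkbox corresponds to in the 3.5 REST API, as far as I know — a DELETE on the host resource carrying an action body with the force flag. The engine URL and host UUID below are placeholders, and the exact request is an assumption based on the documented action-body convention; verify against your engine's /api/hosts schema before relying on it.

```python
# Build the action body that a forced host removal sends with
# DELETE /api/hosts/{host-id} (hypothetical URL/UUID; assumption,
# not taken from this bug report).
import xml.etree.ElementTree as ET

def force_remove_action_body():
    """Return the XML action body requesting a forced removal."""
    action = ET.Element("action")
    ET.SubElement(action, "force").text = "true"
    return ET.tostring(action, encoding="unicode")

if __name__ == "__main__":
    body = force_remove_action_body()
    print(body)  # <action><force>true</force></action>
    # An HTTP client would then send something like:
    #   DELETE https://engine.example.com/api/hosts/<host-uuid>
    #   Content-Type: application/xml
    # with the body above, plus admin credentials.
```

The same flag is what the ovirt-engine-cli / Python SDK expose for forced host deletion, so the UI checkbox, SDK, and REST API should all take the same code path in the engine.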
Created attachment 1061890 [details] "Force remove" checkbox is missing (...)Remove hosts, and check the "Force remove" checkbox that's at the bottom left of the remove host popup(..) => There is no "Force remove" checkbox
Aah. I see that this checkbox is not shown if you have both the virt and gluster services enabled on the cluster. This is a bug, and we will address it. Meanwhile, as a workaround, you could try editing the cluster and unchecking "Enable virt service" — this will allow you to remove the hosts. Kanagaraj, can we add the force-remove checkbox in case the cluster is running the gluster service (with or without the virt service)?
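The same workaround expressed against the REST API, for reference: editing the cluster to disable the virt service is a PUT on the cluster resource. The element names below (virt_service) match my recollection of the 3.5 cluster schema, but treat this as a sketch and check the schema on your engine.

```python
# Build the cluster-update body that unchecks "Enable virt service"
# (assumed 3.5 REST schema; cluster id in the URL is a placeholder).
import xml.etree.ElementTree as ET

def disable_virt_service_body():
    """Return the XML body for PUT /api/clusters/{cluster-id}."""
    cluster = ET.Element("cluster")
    ET.SubElement(cluster, "virt_service").text = "false"
    return ET.tostring(cluster, encoding="unicode")

if __name__ == "__main__":
    print(disable_virt_service_body())
    # <cluster><virt_service>false</virt_service></cluster>
```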
Created attachment 1061912 [details] Enable Virt Service is greyed out

"Enable Virt Service" is greyed out; I cannot uncheck it. Maybe because a template is still assigned to it?
IIRC, "Virt Service" is greyed out only when there are VMs in that cluster.
Created attachment 1062352 [details] i can not remove the vms

Yes, there are VMs in that cluster. But I cannot remove them due to the storage error.
Omer, do you know how the user can delete the VMs in the cluster?
(In reply to Mario Ohnewald from comment #8)
> Created attachment 1062352 [details]
> i can not remove the vms
>
> Yes, there are VMs in that cluster. But i can not remove them due to the
> Storage error

I assume the storage domain is still listed in the webadmin (under Storage Domains). If the storage is really gone, you can force-remove the storage domain; this should also remove the associated VMs.
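For completeness, a sketch of the REST equivalent of force-removing ("destroying") a storage domain whose backing storage is gone. I believe the 3.5 API takes a storage_domain body with a destroy flag on the DELETE, but I am not certain of the exact element names — the host id and URL are placeholders, and the whole body shape is an assumption to verify against your engine's schema.

```python
# Build the body for DELETE /api/storagedomains/{sd-id} that destroys
# (force-removes) a storage domain (assumed 3.5 schema; the host id
# is a hypothetical placeholder).
import xml.etree.ElementTree as ET

def destroy_storage_domain_body(host_id):
    """Return the XML body requesting destroy of a storage domain."""
    sd = ET.Element("storage_domain")
    ET.SubElement(sd, "host", id=host_id)       # host to run the op
    ET.SubElement(sd, "destroy").text = "true"  # skip formatting, drop from DB
    return ET.tostring(sd, encoding="unicode")

if __name__ == "__main__":
    print(destroy_storage_domain_body("00000000-0000-0000-0000-000000000000"))
```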
Is your issue resolved?
Created attachment 1081999 [details] CanNotRemoveStorage01
Created attachment 1082000 [details] CanNotRemoveStorage02
Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.
Hello Omer, please have a look at my first post. I also made a little video here to explain the problem: https://www.youtube.com/watch?v=Mjib8BlIDcM

In a nutshell:
- Node01 + Node02 are dead/gone
- I was able to remove the datacenter which they belonged to
- I was able to remove the VMs which belonged to it
- I am NOT able to remove the hosts
- I am NOT able to remove the volume
Ok, in comment 10 I was talking about storage domains and VMs, not gluster volumes and hosts, so I assume Sahina's patch should solve this (at least for the hosts; not sure about the volume). Please see https://gerrit.ovirt.org/#/c/45848/

Sahina, if we want this in 3.6 I assume a backport is needed?
This is an automated message. oVirt 3.6.0 RC3 has been released and GA is targeted for next week, Nov 4th 2015. Please review this bug and, if it is not a blocker, please postpone it to a later release. All bugs not postponed on GA release will be automatically re-targeted to
- 3.6.1 if severity >= high
- 4.0 if severity < high
(In reply to Omer Frenkel from comment #16)
> Ok, in comment 10 i was talking about storage domains and vms, not gluster
> volumes and hosts.
> so i assume Sahina's patch should solve this (at least the hosts, not sure
> about the volume)

It should remove the gluster volumes from the engine database as well — if a force remove is being performed.

> please see
> https://gerrit.ovirt.org/#/c/45848/
>
> Sahina, if we want this in 3.6 i assume a backport is needed?

Yes, a backport to the 3.6 branch has been submitted.
This bug is not marked for z-stream, yet the milestone is for a z-stream version, therefore the milestone has been reset. Please set the correct milestone or add the z-stream flag.
Bug tickets that are moved to testing must have target release set to make sure tester knows what to test. Please set the correct target release before moving to ON_QA.
Tested with RHEV 3.6.3.3 and RHGS 3.1.2 RC (glusterfs-3.7.5-19.el7rhgs):

1. Created a gluster + virt cluster.
2. Added a few hypervisors, with gluster installed on them, to a cluster at 3.5 compatibility level.
3. Randomly powered off the nodes and tried to remove them from the cluster.

Observation: after a host was moved to maintenance, the "Force remove" option was available to remove the host from the cluster, even with the volume still existing.