Bug 867236
| Summary: | [RHEV-RHS] VM's data from the storage nodes are not removed even after deletion of VM's from RHEVM | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | spandura |
| Component: | glusterfs | Assignee: | Amar Tumballi <amarts> |
| Status: | CLOSED WONTFIX | QA Contact: | spandura |
| Severity: | unspecified | Docs Contact: | |
| Priority: | medium | | |
| Version: | 2.0 | CC: | asriram, grajaiya, rhs-bugs, shaines, vbellur, vraman |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Known Issue |
Doc Text:
Cause: In rare cases, deleting a VM from RHEV-M removes the VM from RHEV-M but not from the actual storage.
Consequence: Because the VM image file is not deleted from storage, it consumes storage space unnecessarily.
Workaround (if any): In such cases, delete the VM image directly from the backend (from the console).
Result: The VM image file is then deleted, and the free space becomes available.
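The workaround above (deleting the image from the backend) can be sketched as follows. This is a sketch only: the mount point, storage-domain UUID, and image UUID are placeholders, not values taken from this bug, and the `/rhev/data-center/mnt` layout is an assumption about the usual RHEV storage-domain directory structure — verify the actual path on your host before removing anything:

```shell
# Placeholder values -- substitute your own before running.
MOUNT=/rhev/data-center/mnt/glusterSD/server:_volume
DOMAIN=<storage-domain-uuid>
IMAGE=<leftover-image-uuid>

# Inspect the orphaned image directory first:
ls -lh "$MOUNT/$DOMAIN/images/$IMAGE"

# Remove it to reclaim the space:
rm -rf "$MOUNT/$DOMAIN/images/$IMAGE"
```

Because this bypasses RHEV-M entirely, it should only be done for images that RHEV-M no longer knows about.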
| Story Points: | --- | | |
|---|---|---|---|
| Clone Of: | | Environment: | |
| Last Closed: | 2012-12-18 06:17:26 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description

spandura, 2012-10-17 06:13:22 UTC
Not sure which component has the bug here: did the 'unlink()' call reach the mount point at all? If it did not, the bug is not in glusterfs; if it did, it is a glusterfs bug. Need to confirm that. Shwetha, once you get the details on forcefully removing the files, can you update whether that is able to remove them?

Hi Amar, I have asked Haim on the rhev-gluster mailing list for pointers on how to delete VMs directly from the database. I have not yet received his reply; I will ping him on IRC on Monday. Also, we no longer have the setup to try removing the files: since we had to continue our testing, we had to dismantle the setup we used for this case. Once we have the command, we will try to re-create the problem.

Hi Amar, I got a reply from Haim. Here is what he has asked us to do: please refer to the following tables:

```
[root@test]# psql -U postgres engine -c "\d" | grep vm | grep -v view
public | tags_vm_map             | table | engine
public | tags_vm_pool_map        | table | engine
public | vm_device               | table | engine
public | vm_dynamic              | table | engine
public | vm_interface            | table | engine
public | vm_interface_statistics | table | engine
public | vm_pool_map             | table | engine
public | vm_pools                | table | engine
public | vm_static               | table | engine
public | vm_statistics           | table | engine
```

Specifically, delete the VMs from both vm_static and vm_dynamic with a DELETE FROM action.

OK... after doing this, will we actually be removing the VMs? If this is able to remove the VM from storage, I would close the bug, as storage had little role to play here. Planning to propose it as a known issue with RHEV image hosting, since storage cannot do much if it never receives the 'unlink()' call itself. (Updated the doc text.)

Marked for inclusion in Known Issues. Nothing much can be done in the storage layer if unlinks are not received. Documentation for the Beta release includes the Known Issues section; meanwhile, there are no tasks for the Storage layer to fix this bug.
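The database cleanup suggested above can be sketched as the following SQL. This is a sketch under assumptions, not a verified procedure: the key column name `vm_guid` and the `vm_name` column are assumptions about the RHEV-M/oVirt engine schema, so confirm them with `\d vm_static` in psql first, and wrap the deletes in a transaction so a mistake can be rolled back:

```sql
-- Sketch only: remove a leftover VM's rows from the engine database.
-- Column names (vm_guid, vm_name) are assumed; verify with \d vm_static.
BEGIN;

-- Look up the stale VM's guid by name (placeholder VM name):
SELECT vm_guid, vm_name FROM vm_static WHERE vm_name = '<stale-vm-name>';

-- Delete the dynamic row first, then the static row:
DELETE FROM vm_dynamic WHERE vm_guid = '<guid-from-above>';
DELETE FROM vm_static  WHERE vm_guid = '<guid-from-above>';

COMMIT;
```

This could be run as `psql -U postgres engine -f cleanup.sql`, matching the psql invocation shown in the comment above.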