Bug 888987
Summary: | Volume stuck in error_deleting | | |
---|---|---|---|
Product: | Red Hat OpenStack | Reporter: | Graeme Gillies <ggillies> |
Component: | python-cinderclient | Assignee: | Eric Harney <eharney> |
Status: | CLOSED ERRATA | QA Contact: | Attila Fazekas <afazekas> |
Severity: | medium | Docs Contact: | |
Priority: | medium | | |
Version: | 2.1 | CC: | eharney, fpercoco |
Target Milestone: | snapshot4 | Keywords: | Triaged |
Target Release: | 2.1 | | |
Hardware: | Unspecified | | |
OS: | Unspecified | | |
Whiteboard: | | | |
Fixed In Version: | 1.0.0.20-3 | Doc Type: | Bug Fix |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2013-03-21 19:03:58 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Bug Depends On: | 896153 | | |
Bug Blocks: | | | |
Description
Graeme Gillies
2012-12-19 23:35:18 UTC
Cinder only allows deletion when the volume is in certain states -- and error_deleting is not one of them. When this happens, you can remove the volume by logging into the database, removing it from the volumes table (which also involves updating the reservations and quota_usages tables), and removing the LV created for it. The Cinder volume service will need to be restarted as well. It may be easier to update the volume's status in the database from "error_deleting" to "error" and try the delete again (possibly after restarting services) if you think it will succeed.

I can look into how to improve this -- I think the ability to retry deletion is needed in cases like these. (One concern, though, is that we can't do this in a way that allows volumes to be deleted without secure delete succeeding.)

I ran the following SQL statement:

```
mysql> update volumes set status = 'error' where id = '3dc53676-81e7-4b5e-9b3c-f4603d2b2846';
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0
```

Then ran:

```
cinder delete 3dc53676-81e7-4b5e-9b3c-f4603d2b2846
```

And that seems to have fixed it. Thanks for the help. It's up to you whether you want to close this ticket or leave it open as a reminder to add a section to the documentation on what to do when this happens.

Regards,
Graeme

(In reply to comment #2)
> I can look into how to improve this -- I think the ability to retry
> deletion is needed in cases like these. (One concern, though, is that we
> can't do this in a way that allows volumes to be deleted without secure
> delete succeeding.)

What about a --force option that _forces_ certain commands like this one?
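The database workaround above works because Cinder gates the delete operation on the volume's current status. As a rough illustration of that logic (hypothetical names and state list, not the actual Cinder source):

```python
# Hypothetical sketch of the status gate described in this bug -- not
# the real Cinder code. Delete is only accepted from a small set of
# states, which is why resetting a volume's status from
# "error_deleting" to "error" in the database makes a retry possible.

ALLOWED_DELETE_STATES = {"available", "error"}  # illustrative only

def can_delete(status):
    """Return True if a regular (non-forced) delete may proceed."""
    return status in ALLOWED_DELETE_STATES

# A volume stuck in error_deleting is rejected outright...
assert not can_delete("error_deleting")
# ...but after `update volumes set status = 'error' ...`, it is allowed.
assert can_delete("error")
```

This is also why Eric's suggestion below (adding error_deleting to the allowed list) would make the manual database edit unnecessary.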
(In reply to comment #4)
> What about a --force option that _forces_ certain commands like this one?

I've considered this, but I'm not sure it's that straightforward. A --force option would let you force a delete from a different state -- but it shouldn't always force a delete through, since whatever is causing the failure may leak some storage resource that never gets cleaned up, and secure delete must still be ensured. Manual intervention may be preferable in those cases.

So I think a better first step would be to add error_deleting to the list of states that a delete operation is allowed from. That would at least help in a case like this one, since the user could take some corrective action and then retry. I'll try this out.

It looks like some movement is happening upstream on this. The current plan is to backport the "force_delete" operation from Grizzly, which will give users a way to clean up in situations like this where "delete" is not allowed.

You can now run "cinder force-delete <vol-id>" as an admin user.

It looks like very few things can cause the error_deleting state nowadays. I renamed /bin/dd in order to reach that state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0672.html
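For reference, the difference between the normal delete and the backported force-delete discussed above can be sketched as follows. This is an illustrative model with hypothetical names, not the actual Cinder implementation:

```python
# Illustrative sketch of "delete" vs. "force-delete" semantics as
# described in this bug. Names and the state list are hypothetical.

def delete_volume(status, force=False, is_admin=False):
    """Return True if the delete request would be accepted."""
    if force:
        # "cinder force-delete <vol-id>" is an admin-only escape hatch
        # that skips the status check entirely.
        return is_admin
    # A regular delete is gated on the volume's current status.
    return status in {"available", "error"}

# Normal delete of a stuck volume is rejected...
assert not delete_volume("error_deleting")
# ...force-delete works, but only for an admin user.
assert delete_volume("error_deleting", force=True, is_admin=True)
assert not delete_volume("error_deleting", force=True, is_admin=False)
```

The admin-only restriction matters because, as noted above, forcing a delete past a failure can leak backing storage and bypass secure delete, so it is kept out of ordinary users' hands.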