Bug 1467352
Summary: | [RFE] Enable an easy method to delete objects in a Ceph pool if an OSD hits `full_ratio` | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Vimal Kumar <vikumar>
Component: | RBD | Assignee: | Jason Dillaman <jdillama>
Status: | CLOSED ERRATA | QA Contact: | Jason Dillaman <jdillama>
Severity: | high | Docs Contact: |
Priority: | high | |
Version: | 3.0 | CC: | ceph-eng-bugs, dzafman, flucifre, hnallurv, jdillama, kchai, owasserm, vikumar, vumrao
Target Milestone: | rc | Keywords: | FutureFeature
Target Release: | 3.0 | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | RHEL: ceph-12.1.2-1.el7cp Ubuntu: ceph_12.1.2-2redhat1xenial | Doc Type: | Enhancement
Doc Text: |
.Deleting images and snapshots from full clusters is now easier
When a cluster reaches its `full_ratio`, the following commands can be used to remove Ceph Block Device images and snapshots:
* `rbd remove`
* `rbd snap rm`
* `rbd snap unprotect`
* `rbd snap purge`
|
Story Points: | --- | |
Clone Of: | | Environment: |
Last Closed: | 2017-12-05 23:35:34 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1494421 | |
Description (Vimal Kumar, 2017-07-03 13:30:48 UTC)
This is really an rbd feature: use the librados FORCE_FULL_TRY functionality for deletes. Removing an rbd image or snapshot when the cluster is full is possible in Luminous.

@Jason: are any specific configuration settings or steps needed before deleting an rbd image or snapshot when the cluster is full?

@Harish: negative -- it *should* just allow you to run the following commands when the cluster is full: `rbd remove`, `rbd snap rm`, `rbd snap unprotect`, and `rbd snap purge`.

```
$ ceph health
HEALTH_ERR full flag(s) set; 3 full osd(s)

$ ceph df
GLOBAL:
    SIZE       AVAIL     RAW USED     %RAW USED
    30911M     623M      30288M       97.98
POOLS:
    NAME     ID     USED      %USED      MAX AVAIL     OBJECTS
    rbd      1      9000M     100.00    0             2262

$ rbd snap ls foo
SNAPID NAME     SIZE TIMESTAMP
     4 1    10240 MB Tue Oct 24 21:08:11 2017
     5 2    10240 MB Tue Oct 24 21:08:29 2017
     6 3    10240 MB Tue Oct 24 21:08:51 2017
     7 4    10240 MB Tue Oct 24 21:09:32 2017
     8 5    10240 MB Tue Oct 24 21:10:01 2017
     9 6    10240 MB Tue Oct 24 21:11:10 2017
    10 7    10240 MB Tue Oct 24 21:14:01 2017

$ rbd snap unprotect foo@7
$ rbd snap unprotect foo@1
$ rbd snap rm foo@7
$ rbd snap rm foo@1
$ rbd snap purge foo
Removing all snapshots: 100% complete...done.

$ ceph health
HEALTH_ERR full flag(s) set; 3 full osd(s)

$ rbd rm foo
Removing image: 100% complete...done.

$ ceph health
HEALTH_OK
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3387
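The fix relies on the librados FORCE_FULL_TRY behavior: delete operations are flagged so the OSDs accept them even while the cluster's full flag blocks ordinary writes. As a rough illustration of what a client application could do, here is a minimal sketch using the python-rados and python-rbd bindings; the helper function name is hypothetical, and it assumes the binding exposes `Ioctx.set_osdmap_full_try()` (the Python wrapper around the librados full-try flag), which may vary by Ceph release:

```python
def remove_image_from_full_cluster(pool_name, image_name,
                                   conffile='/etc/ceph/ceph.conf'):
    """Hypothetical helper: delete an RBD image even when the cluster
    has the full flag set, by opting the I/O context into full-try mode.

    Requires a reachable Ceph cluster and the python-rados / python-rbd
    packages; imports are done lazily so the sketch can be loaded without
    Ceph installed.
    """
    import rados
    import rbd

    cluster = rados.Rados(conffile=conffile)
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(pool_name)
        try:
            # Assumption: this is the binding's FORCE_FULL_TRY switch.
            # Without it, the delete would block behind the full flag.
            ioctx.set_osdmap_full_try()
            rbd.RBD().remove(ioctx, image_name)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
```

With the `rbd` CLI commands listed above, none of this is necessary for operators; the sketch only shows the mechanism the CLI uses internally.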