Bug 1727883
| Summary: | [RFE] support scheduling background tasks for long-running RBD operations | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Jason Dillaman <jdillama> |
| Component: | RBD | Assignee: | Jason Dillaman <jdillama> |
| Status: | CLOSED ERRATA | QA Contact: | Gopi <gpatta> |
| Severity: | high | Docs Contact: | Bara Ancincova <bancinco> |
| Priority: | high | ||
| Version: | 4.0 | CC: | assingh, ceph-eng-bugs, ceph-qe-bugs, gpatta, mkasturi, mrajanna, pasik, srangana, tserlin, ykaul |
| Target Milestone: | rc | Keywords: | FutureFeature |
| Target Release: | 4.0 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | ceph-14.2.3-2.el8cp | Doc Type: | Enhancement |
| Doc Text: |
.Long-running RBD operations can run in the background
Long-running RBD operations, such as image removal or cloned image flattening, can now be scheduled to run in the background. RBD operations that involve iterating over every backing RADOS object for the image can take a long time depending on the size of the image. When using the CLI to perform one of these operations, the `rbd` CLI is blocked until the operation is complete. These operations can now be scheduled to run by the Ceph Manager as a background task by using the `ceph rbd task add` commands. The progress of these tasks is visible on the Ceph dashboard as well as by using the CLI.
|
Story Points: | --- |
| Clone Of: | Environment: | ||
| Last Closed: | 2020-01-31 12:46:20 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | |||
| Bug Blocks: | 1726266, 1730176 | ||
|
Description
Jason Dillaman
2019-07-08 12:51:43 UTC
Updating the QA Contact to Hemant. Hemant will be rerouting them to the appropriate QE Associate. Regards, Giri

PR merged into v14.2.3 release: https://github.com/ceph/ceph/pull/29725

Verified on the latest build, and it is working as expected.
[root@magna006 ceph]# ceph rbd task add trash remove g_pool/161a30e215ed
{"sequence": 7, "id": "39cbe04b-4369-4c62-8e59-2e484cdf3459", "message": "Removing image g_pool/161a30e215ed from trash", "refs": {"action": "trash remove", "pool_name": "g_pool", "pool_namespace": "", "image_id": "161a30e215ed"}}
[root@magna006 ceph]#
[root@magna006 ceph]# ceph progress
[Complete]: Removing image g_pool/image1
[============================]
[Complete]: Removing image g_pool/image2
[============================]
[Complete]: Removing image g_pool/image3
[============================]
[Complete]: Removing image g_pool/image4
[============================]
[Complete]: Removing image g_pool/image5
[============================]
[Complete]: Removing image g_pool/sample_image1
[============================]
[Complete]: Removing image g_pool/161a30e215ed from trash
[============================]
[root@magna006 ceph]#
[root@magna006 ceph]# ceph -v
ceph version 14.2.4-91.el8cp (23607558df3b077b6190cdf96cd8d9043aa2a1c5) nautilus (stable)
ceph-mon-14.2.4-91.el8cp.x86_64
ceph-ansible-4.0.6-1.el8cp.noarch
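As a minimal sketch of scripting around this feature: `ceph rbd task add` prints a JSON task record (as captured in the verification output above), which can be parsed with any JSON library to track the task by its `id`. The snippet below simply parses the record shown earlier; field names are taken verbatim from that output.

```python
import json

# Task record as printed by `ceph rbd task add trash remove` in the
# verification run above (copied verbatim from the console output).
task_json = (
    '{"sequence": 7, "id": "39cbe04b-4369-4c62-8e59-2e484cdf3459", '
    '"message": "Removing image g_pool/161a30e215ed from trash", '
    '"refs": {"action": "trash remove", "pool_name": "g_pool", '
    '"pool_namespace": "", "image_id": "161a30e215ed"}}'
)

task = json.loads(task_json)

# The "id" uniquely identifies the background task; "refs" describes
# the operation being performed and the image it targets.
print(task["id"])
print(task["refs"]["action"])
print(task["refs"]["pool_name"], task["refs"]["image_id"])
```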
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0312