Bug 1989527
Summary: | RBD: `rbd info` cmd on rbd images on which flattening is in progress throws ErrImageNotFound | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Mudit Agarwal <muagarwa>
Component: | RBD | Assignee: | Ilya Dryomov <idryomov>
Status: | CLOSED ERRATA | QA Contact: | Preethi <pnataraj>
Severity: | high | Docs Contact: | Akash Raj <akraj>
Priority: | high | |
Version: | 5.0 | CC: | akraj, bbenshab, bniver, ceph-eng-bugs, danken, dholler, fdeutsch, guchen, hchiramm, idryomov, madam, mhackett, mrajanna, muagarwa, ndevos, ocs-bugs, owasserm, pelauter, pnataraj, rar, sostapov, tserlin, vashastr, vereddy
Target Milestone: | --- | |
Target Release: | 5.3 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | ceph-16.2.10-37.el8cp | Doc Type: | Bug Fix
Doc Text: |
.`rbd info` command no longer fails if executed while the image is being flattened
Previously, due to an implementation defect, the `rbd info` command would occasionally fail if run while the image was being flattened, producing a transient _No such file or directory_ error; rerunning the command always succeeded. (See the reproducer sketch after this table.)
With this fix, the defect is corrected and the `rbd info` command no longer fails even if executed while the image is being flattened.
|
Story Points: | --- | |
Clone Of: | 1989521 | Environment: |
Last Closed: | 2023-01-11 17:38:53 UTC | Type: | ---
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1989521, 2039269, 2049202, 2126049 | |
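The transient failure described in the Doc Text above comes from `rbd info` racing with an in-progress flatten of a cloned image. The following is a minimal reproducer sketch, not taken from the bug report itself: it assumes admin access to a test cluster, and the pool and image names (`rbd`, `parent`, `child`) are illustrative.

```
# Create a parent image, snapshot it, and clone the snapshot.
rbd create --size 1G rbd/parent
rbd snap create rbd/parent@snap
rbd snap protect rbd/parent@snap
rbd clone rbd/parent@snap rbd/child

# Start flattening the clone in the background.
rbd flatten rbd/child &
FLATTEN_PID=$!

# Poll `rbd info` on the clone while the flatten runs; on builds without
# the fix (fixed in ceph-16.2.10-37.el8cp) this could intermittently fail
# with "No such file or directory".
while kill -0 "$FLATTEN_PID" 2>/dev/null; do
    rbd info rbd/child >/dev/null || echo "transient rbd info failure observed"
done
```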
Description
Mudit Agarwal 2021-08-03 12:02:19 UTC
Ilya, can this be considered for 5.0z1? AFAIK, this is not that urgent; we can wait till 5.0z2, but I may be wrong. Rakshith, do you have any thoughts? Is it OK if we don't fix it in 4.9?

Putting the target release as 5.0z2 based on the above conversation; please re-target if required.

Not completed in time for 5.0 z4, moving to 5.1.

*** Bug 2049202 has been marked as a duplicate of this bug. ***

I am running performance tests with CNV 4.9.3 and it looks like I reproduced the issue: I created sequential VMs from a golden image, 10 seconds apart, and after ~450 VMs the snapshots started to get stuck. My system DVs:

[kni@f12-h17-b07-5039ms ~]$ oc get dv -A | grep -c Succeeded
468
[kni@f12-h17-b07-5039ms ~]$ oc get dv -A | grep -c SnapshotForSmartCloneInProgress
21
[kni@f12-h17-b07-5039ms ~]$ oc get dv -A | grep -c CloneScheduled
12

Please advise if additional information is needed for debugging. This has been an issue for some time, and it will definitely impact VM deployments at scale. This is a high-priority defect for us; please advise how we can help make more progress on fixing it before it becomes a fire drill in a production cluster.

We are past the code freeze date for 5.1 z1, but let's consider this one a blocker/exception.

Any update on this BZ? When can we expect it to be ON_QA? We are close to test phase completion. We need it by the 6th for QE to verify it as part of the 5.1 z1 release.

We can no longer hold the 5.1 z1 release for this one.

Any update on this BZ? When can we expect it to be ON_QA?

Note this is NOT a DR issue. We will leave this here for now, but it may be moved to 5.3 z1 if there is not enough extra time to complete it.

Yes, this is not a DR issue, but we are hitting it very frequently in upstream CI. It is required for one of our features in 4.12, and it is also causing delays in perf tests with the CNV team. If we don't fix it, it will leave a lot of stale RBD resources. Can we please target it for 5.3 only?

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage 5.3 security update and Bug Fix), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:0076
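Because the error is transient and, per the Doc Text, a rerun always succeeded, clusters still on affected builds could tolerate it with a simple retry. This is only a hedged sketch of such a workaround, not anything prescribed in this bug; the image name and retry count are illustrative.

```
# Retry `rbd info` a few times to ride out the transient
# "No such file or directory" error seen while a flatten is in progress.
for attempt in 1 2 3; do
    if rbd info rbd/child; then
        break
    fi
    sleep 1
done
```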