Bug 2170925
| Summary: | OSD prepare job fails with KeyError: 'KNAME' | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Veera Raghava Reddy <vereddy> |
| Component: | Ceph-Volume | Assignee: | Guillaume Abrioux <gabrioux> |
| Status: | CLOSED ERRATA | QA Contact: | Manisha Saini <msaini> |
| Severity: | urgent | Docs Contact: | Akash Raj <akraj> |
| Priority: | urgent | ||
| Version: | 6.0 | CC: | adking, akraj, bkunal, branto, ceph-eng-bugs, cephqe-warriors, gabrioux, kdreyer, msaini, muagarwa, sapillai, sostapov, vamahaja, vereddy |
| Target Milestone: | --- | ||
| Target Release: | 6.0 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | ceph-17.2.5-69 | Doc Type: | No Doc Update |
| Doc Text: | Rook has a specific use case where devices are copied into /mnt. If the basename (in /mnt) differs from the original device name, the current logic cannot match it. The fix appends the device to the lsblk command and returns the result. | | |
| Story Points: | --- | | |
| Clone Of: | 2170812 | Environment: | |
| Last Closed: | 2023-03-20 19:00:36 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 2170812 | ||
| Bug Blocks: | | | |
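The Doc Text above describes the fix: instead of listing all devices and matching on name, the device path is appended to the lsblk invocation, so a node copied under /mnt still resolves even though its basename differs from the original. Below is a minimal illustrative sketch of that approach; the function names and field list are assumptions for this example, not ceph-volume's actual API.

```python
import shlex
import subprocess

def parse_lsblk_line(line):
    # Parse one line of `lsblk -P` output (KEY="value" pairs) into a dict.
    # shlex.split handles the double-quoting around each value.
    info = {}
    for pair in shlex.split(line):
        key, _, value = pair.partition("=")
        info[key] = value
    return info

def lsblk_device(device, fields="NAME,KNAME,PKNAME,TYPE"):
    # Append the device path to the lsblk call so the query works even
    # when the node lives outside /dev (e.g. a copy under /mnt, as Rook
    # creates). --nodeps restricts output to the device itself.
    out = subprocess.check_output(
        ["lsblk", "--nodeps", "-P", "-o", fields, device], text=True
    )
    return parse_lsblk_line(out)
```

With this shape, looking up `KNAME` on the returned dict succeeds because lsblk reports the kernel name for the exact device queried, rather than the caller trying to match the /mnt basename against a full device listing.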
Comment 1
Scott Ostapovicz
2023-02-22 16:19:57 UTC
With the 6.0 SPIN (not SPOON) being pushed out, retargeting to 6.0 to try and get this in.

Hi Guillaume, in last week's program meeting we discussed backporting this BZ to 6.0. Can you confirm whether this can be done?

Hi Manisha, we have already had confirmation from our QE that this issue is now fixed in the latest RHCS 6.0 image; you can move this to verified! Regards, Boris

Thank you, Boris. Based on comment#10, moving this BZ to verified state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360