Bug 1946478
| Summary: | [GSS][ceph-volume] "ceph-volume lvm batch" shows "% of device" as 0% for DB device | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Janmejay Singh <jansingh> |
| Component: | Ceph-Volume | Assignee: | Andrew Schoen <aschoen> |
| Status: | CLOSED ERRATA | QA Contact: | Rahul Lepakshi <rlepaksh> |
| Severity: | medium | Docs Contact: | Ranjini M N <rmandyam> |
| Priority: | unspecified | | |
| Version: | 4.2 | CC: | agunn, aschoen, ceph-eng-bugs, ceph-qe-bugs, gmeno, gsitlani, kdreyer, mmuench, pdhange, rlepaksh, rmandyam, sunnagar, vereddy |
| Target Milestone: | --- | | |
| Target Release: | 5.1 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-16.2.6-1.el8cp ceph-16.2.6-1.el7cp | Doc Type: | Bug Fix |
| Doc Text: | .The `ceph-volume lvm batch --report` command now displays the correct value of the `% of device` parameter<br>Previously, `ceph-volume` incorrectly calculated the `% of device` value for DB devices, so running the `ceph-volume lvm batch --report` command did not give users correct information about how much space would be used on the devices passed to `--db-devices`.<br>With this release, the `ceph-volume lvm batch --report` command correctly shows the space that is used on each device given to `--db-devices`. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-04-04 10:20:36 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2031073 | | |
Description
Janmejay Singh
2021-04-06 08:35:13 UTC
https://tracker.ceph.com/issues/51107 is resolved in v14.2.22 upstream, and we've rebased to that version, so I'm moving this to ON_QA. (Er, that previous comment was for RHCS 4.3.)

For RHCS 5.1: https://tracker.ceph.com/issues/51108 is resolved in v16.2.5 upstream, and we've rebased ceph-5.1-rhel-patches to v16.2.6, so I'm moving this to ON_QA.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174
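
For context, the sketch below illustrates the kind of arithmetic behind the `% of device` figure that the batch report prints for DB devices: the device's capacity is split into one DB slot per OSD assigned to it, and the percentage is the share of the device those slots consume. This is illustrative Python only, not the actual `ceph-volume` implementation; the function name, the GB units, and the even-split fallback are assumptions made for the example.

```python
# Minimal sketch (not the real ceph-volume code) of deriving "% of device"
# for one DB device in a batch report.

def db_device_report(db_device_size_gb, osds_per_db_device, db_slot_size_gb=None):
    """Return (slot_size_gb, pct_of_device) for one DB device.

    db_device_size_gb  -- usable capacity of the DB device (illustrative unit)
    osds_per_db_device -- how many OSDs place their DB on this device
    db_slot_size_gb    -- optional fixed DB size per OSD (hypothetical knob,
                          standing in for an explicit block.db size)
    """
    if db_slot_size_gb is None:
        # No explicit DB size: divide the whole device evenly among the OSDs.
        slot = db_device_size_gb / osds_per_db_device
    else:
        slot = db_slot_size_gb
    used = slot * osds_per_db_device
    pct = round(100.0 * used / db_device_size_gb, 2)
    return slot, pct


if __name__ == "__main__":
    # One 223 GB SSD shared as the DB device for 4 OSDs:
    slot, pct = db_device_report(223.0, 4)
    print(f"slot size: {slot:.2f} GB, % of device: {pct}%")
    # The bug tracked here: the report printed 0% for such DB devices
    # instead of the share of the device actually consumed.
```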