Bug 2189623
| Summary: | Compression status for cephblockpool is reported as Enabled and Disabled at the same time 4.10 | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Daniel Osypenko <dosypenk> |
| Component: | management-console | Assignee: | Nishanth Thomas <nthomas> |
| Status: | CLOSED WONTFIX | QA Contact: | Daniel Osypenko <dosypenk> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.10 | CC: | badhikar, muagarwa, ocs-bugs, odf-bz-bot, skatiyar |
| Target Milestone: | --- | | |
| Target Release: | ODF 4.10.10 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-04-27 08:21:38 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
It is not straightforward to fix this in the ODF 4.10 release. The ODF UI (for BlockPool) was still part of the OCP console repo, which means that to fix this we would first need to land the fix in OCP 4.14 and then backport it through OCP 4.13 > OCP 4.12 > OCP 4.11 > OCP 4.10 before it could ship in an ODF 4.10.z release. IMO that is not worth it for a "medium" severity issue, and we can document it as a known issue instead. @badhikar wdyt? It is already fixed from ODF 4.11 onwards!
Created attachment 1959837 [details]
blockpool-compression-status

Description of problem (please be detailed as possible and provide log snippets):

This issue is a duplicate of bug 2096414 and still exists in ODF 4.10. The BlockPool compression status is shown as Disabled in the list of BlockPools but as Enabled when the BlockPool page is opened. Storage efficiency is blank, shown as pending/rendering.

Version of all relevant components (if applicable):

OC version:
Client Version: 4.12.0-202208031327
Kustomize Version: v4.5.4
Server Version: 4.10.0-0.nightly-2023-04-24-171334
Kubernetes Version: v1.23.17+16bcd69

OCS version:
ocs-operator.v4.10.12   OpenShift Container Storage   4.10.12   ocs-operator.v4.10.11   Succeeded

Cluster version:
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.nightly-2023-04-24-171334   True        False         6h21m   Cluster version is 4.10.0-0.nightly-2023-04-24-171334

Rook version:
rook: v4.10.12-0.abc959dfc624825fe30ac1bc627c216f27d70203
go: go1.16.12

Ceph version:
ceph version 16.2.7-126.el8cp (fe0af61d104d48cb9d116cde6e593b5fc8c197e4) pacific (stable)

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
No

Is there any workaround available to the best of your knowledge?
-

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Open the console, navigate to Storage > Data Foundation > Storage Systems and select the Storage System.
2. Select the BlockPools tab and verify that the compression status is shown as Disabled (the pool should be configured that way).
3. Select a BlockPool from the list, verify its compression status, and check the storage efficiency.

Actual results:
Compression status for the cephblockpool is reported as Enabled and Disabled at the same time.

Expected results:
Compression status for the cephblockpool is reported as Disabled, matching the existing configuration.

Additional info:
Not reproducible on ODF 4.13.
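When the two console views disagree like this, the compression setting on the CephBlockPool custom resource itself is the authoritative value. Below is a minimal sketch of how to read it directly, using the pre-1.0 `@kubernetes/client-node` API; the namespace `openshift-storage` and the pool name `ocs-storagecluster-cephblockpool` are assumed defaults for illustration, not values taken from this report.

```typescript
// Sketch: read the compression setting straight from the CephBlockPool CR,
// bypassing both console views. Assumptions (not from the report): namespace
// "openshift-storage", pool "ocs-storagecluster-cephblockpool", and the
// positional getNamespacedCustomObject signature of @kubernetes/client-node <1.0.
import * as k8s from '@kubernetes/client-node';

async function main(): Promise<void> {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault(); // picks up the same kubeconfig that "oc" uses

  const api = kc.makeApiClient(k8s.CustomObjectsApi);
  const { body } = await api.getNamespacedCustomObject(
    'ceph.rook.io',                     // group
    'v1',                               // version
    'openshift-storage',                // namespace (assumption)
    'cephblockpools',                   // plural
    'ocs-storagecluster-cephblockpool', // pool name (assumption)
  );

  const pool = body as { spec?: { compressionMode?: string } };
  // Rook treats an absent compressionMode, or "none", as compression disabled;
  // "passive", "aggressive", and "force" all mean enabled.
  const mode = pool.spec?.compressionMode ?? 'none';
  console.log(`compressionMode=${mode} => ${mode === 'none' ? 'Disabled' : 'Enabled'}`);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Whatever this prints is what both the BlockPools list and the BlockPool detail page should display; for the configuration described in this report, that is Disabled in both places.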