Bug 2178033
| Summary: | node topology warnings tab doesn't show pod warnings | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Daniel Osypenko <dosypenk> |
| Component: | management-console | Assignee: | Bipul Adhikari <badhikar> |
| Status: | CLOSED ERRATA | QA Contact: | Daniel Osypenko <dosypenk> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.13 | CC: | badhikar, ebenahar, muagarwa, nberry, ocs-bugs, odf-bz-bot, skatiyar |
| Target Milestone: | --- | | |
| Target Release: | ODF 4.13.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | 4.13.0-172 | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-06-21 15:24:28 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Daniel Osypenko
2023-03-14 09:40:20 UTC
@badhikar I need to add to this BZ that the cluster itself should present alerts gathered from the nodes. I see that the node now presents alerts from the hosted pods, but the cluster does not present the alerts from the nodes.

ODF
Server Version: 4.13.0-0.nightly-2023-03-17-161027
Kubernetes Version: v1.26.2+06e8c46

Bipul Adhikari

(In reply to Daniel Osypenko from comment #2)
> @badhikar I need to add to this BZ that the cluster itself should present
> alerts gathered from the nodes. I see that the node now presents alerts
> from the hosted pods, but the cluster does not present the alerts from the
> nodes.
>
> ODF
> Server Version: 4.13.0-0.nightly-2023-03-17-161027
> Kubernetes Version: v1.26.2+06e8c46

After going through the code and trying multiple iterations, it is not possible to show node alerts at the OCS level in a way that still makes sense. Once a node degrades far enough to affect the storage cluster, an alert should be generated on the OCS component, which in turn would be shown at the Storage Cluster group level. Simply combining node alerts at the Cluster Group level can be confusing. How can it be confusing? ==> We would show the Cluster in a warning state because one of the nodes is in a warning state due to some alert, but when the user clicks the Storage Cluster group and opens the sidebar to see what is wrong, they will not see any alerts. This behavior can be more confusing than helpful. WDYT?

Daniel Osypenko

(In reply to Bipul Adhikari from comment #5)
> After going through the code and trying multiple iterations, it is not
> possible to show node alerts at the OCS level in a way that still makes
> sense. Once a node degrades far enough to affect the storage cluster, an
> alert should be generated on the OCS component, which in turn would be
> shown at the Storage Cluster group level. Simply combining node alerts at
> the Cluster Group level can be confusing. How can it be confusing? ==> We
> would show the Cluster in a warning state because one of the nodes is in a
> warning state due to some alert, but when the user clicks the Storage
> Cluster group and opens the sidebar to see what is wrong, they will not
> see any alerts. This behavior can be more confusing than helpful. WDYT?

If the question is whether, whenever the cluster is in a warning state, we want to show the warnings of the node or nodes, I think the logic should comply with the rest of the topology states and transitions: show every underlying warning. If alert generation for the Storage Cluster group is a separate process that may not happen, or may happen with a significant delay, we need to notify the user about that whenever they select the Storage Cluster group, and show no warnings until the alerts are generated.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenShift Data Foundation 4.13.0 enhancement and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3742
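For illustration only, a minimal TypeScript sketch of the roll-up behavior discussed above: a topology group's status is derived from every alert on its descendants, so a warning badge on the Storage Cluster group always has matching alerts to list in the sidebar. The types and helpers here are assumptions for the sketch, not the actual odf-console code.

```typescript
// Illustrative sketch only; these types and helpers are assumptions,
// not part of the odf-console codebase.
type Severity = 'healthy' | 'warning' | 'critical';

interface Alert {
  name: string;
  severity: Exclude<Severity, 'healthy'>;
  resource: string; // the node or pod the alert fired on
}

interface TopologyNode {
  id: string;
  alerts: Alert[];
  children: TopologyNode[];
}

const rank: Record<Severity, number> = { healthy: 0, warning: 1, critical: 2 };

// Gather alerts from a topology node and all of its descendants.
const collectAlerts = (node: TopologyNode): Alert[] => [
  ...node.alerts,
  ...node.children.flatMap(collectAlerts),
];

// Derive the group's status from the worst severity among the collected
// alerts, so a warning badge always has visible alerts behind it.
const groupStatus = (group: TopologyNode): Severity =>
  collectAlerts(group).reduce<Severity>(
    (worst, alert) => (rank[alert.severity] > rank[worst] ? alert.severity : worst),
    'healthy',
  );
```

Under this assumption, selecting the Storage Cluster group would surface the same node-level alerts that drove the warning state, avoiding the mismatch between the group's badge and an empty sidebar described in the discussion.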