Back to bug 1984735

Who When What Removed Added
umanga 2021-07-22 11:50:01 UTC CC uchapaga
Mudit Agarwal 2021-07-26 09:58:32 UTC CC muagarwa
Flags needinfo?(uchapaga)
umanga 2021-07-26 10:24:10 UTC Flags needinfo?(uchapaga) needinfo?(sagrawal)
Neha Berry 2021-07-26 10:56:16 UTC CC nberry
Flags needinfo?(muagarwa)
Sidhant Agrawal 2021-07-26 11:38:47 UTC Flags needinfo?(sagrawal)
Mudit Agarwal 2021-07-26 13:04:14 UTC Flags needinfo?(muagarwa)
arun kumar mohan 2021-07-27 17:52:32 UTC CC amohan
Assignee jrivera amohan
arun kumar mohan 2021-07-27 20:52:04 UTC Status NEW POST
Mudit Agarwal 2021-07-28 11:30:04 UTC Doc Type If docs needed, set a value Known Issue
Mudit Agarwal 2021-07-28 11:30:36 UTC Blocks 1966894
Mudit Agarwal 2021-07-28 11:47:17 UTC Doc Text Cause: The monitoring spec becomes empty whenever the ocs-operator is restarted.

Consequence: The monitoring spec suddenly becomes empty during an upgrade or an ocs-operator restart.

Result: This has no impact on functionality, but anyone looking at the monitoring endpoint will find it empty.
Olive Lakra 2021-07-28 14:31:48 UTC CC olakra
Doc Text Cause: The monitoring spec becomes empty whenever the ocs-operator is restarted.

Consequence: The monitoring spec suddenly becomes empty during an upgrade or an ocs-operator restart.

Result: This has no impact on functionality, but anyone looking at the monitoring endpoint will find it empty.
.Monitoring spec is reset in `CephCluster` resource
Monitoring spec becomes empty whenever `ocs-operator` is restarted or during an upgrade. This has no impact on functionality, but if you are looking for the monitoring endpoint details, you will find them empty.

To resolve this issue, update the secret after upgrading from 4.7 to 4.8 so that the details of all endpoints are updated in the secret; this also avoids future problems arising from differences in the number of endpoints in fresh versus upgraded clusters.
Flags needinfo?(muagarwa)
Mudit Agarwal 2021-07-28 14:33:46 UTC Flags needinfo?(muagarwa) needinfo?(amohan)
arun kumar mohan 2021-07-28 14:49:28 UTC Doc Text .Monitoring spec is reset in `CephCluster` resource
Monitoring spec becomes empty whenever `ocs-operator` is restarted or during an upgrade. This has no impact on functionality, but if you are looking for the monitoring endpoint details, you will find them empty.

To resolve this issue, update the secret after upgrading from 4.7 to 4.8 so that the details of all endpoints are updated in the secret; this also avoids future problems arising from differences in the number of endpoints in fresh versus upgraded clusters.
Monitoring spec is reset in `CephCluster` resource

Monitoring spec becomes empty whenever `ocs-operator` is restarted or during an upgrade. This has no impact on functionality, but if you are looking for the monitoring endpoint details, you will find them empty.

To resolve this issue, update the `rook-ceph-external-cluster-details` secret after upgrading from 4.7 to 4.8 so that the details of all endpoints (that is, comma-separated IP addresses of the active and standby MGRs) are updated in the `MonitoringEndpoint` data key; this also avoids future problems arising from differences in the number of endpoints in fresh versus upgraded clusters.
Flags needinfo?(amohan)
arun kumar mohan 2021-07-29 14:54:13 UTC Link ID Github openshift/ocs-operator/pull/1285
Olive Lakra 2021-07-29 15:10:49 UTC Doc Text Monitoring spec is reset in `CephCluster` resource

Monitoring spec becomes empty whenever `ocs-operator` is restarted or during an upgrade. This has no impact on functionality, but if you are looking for the monitoring endpoint details, you will find them empty.

To resolve this issue, update the `rook-ceph-external-cluster-details` secret after upgrading from 4.7 to 4.8 so that the details of all endpoints (that is, comma-separated IP addresses of the active and standby MGRs) are updated in the `MonitoringEndpoint` data key; this also avoids future problems arising from differences in the number of endpoints in fresh versus upgraded clusters.
.Monitoring spec is reset in `CephCluster` resource

Monitoring spec becomes empty whenever `ocs-operator` is restarted or during an upgrade. This has no impact on functionality, but if you are looking for the monitoring endpoint details, you will find them empty.

To resolve this issue, update the `rook-ceph-external-cluster-details` secret after upgrading from 4.7 to 4.8 so that the details of all endpoints (such as comma-separated IP addresses of the active and standby MGRs) are updated in the `MonitoringEndpoint` data key. This also helps avoid future problems arising from differences in the number of endpoints in fresh versus upgraded clusters.
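The workaround above boils down to re-encoding the comma-separated MGR IP list into the secret's data key. The following Python sketch shows how such a patch payload might be built; the `MonitoringEndpoint` key name follows the text above, but the overall procedure is illustrative, not a verified recipe:

```python
import base64

def build_monitoring_patch(mgr_ips):
    """Build a merge-patch body for the rook-ceph-external-cluster-details
    secret, setting MonitoringEndpoint to the comma-separated list of
    active and standby MGR IPs. Kubernetes secret 'data' values must be
    base64-encoded."""
    endpoint_value = ",".join(mgr_ips)
    encoded = base64.b64encode(endpoint_value.encode()).decode()
    return {"data": {"MonitoringEndpoint": encoded}}

# Hypothetical IPs for an active and a standby MGR:
patch = build_monitoring_patch(["10.0.0.1", "10.0.0.2"])
```

The resulting payload could then be applied against the secret in the storage namespace with a merge patch (for example via `oc patch secret rook-ceph-external-cluster-details --type merge -p '<payload>'`; the exact command shape is an assumption).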
RHEL Program Management 2021-08-16 07:21:14 UTC Target Release --- OCS 4.9.0
Elad 2021-08-25 09:24:51 UTC Keywords AutomationBackLog
Mudit Agarwal 2021-09-06 08:35:13 UTC Link ID Github red-hat-storage/ocs-operator/pull/1286
Mudit Agarwal 2021-09-08 06:29:38 UTC Status POST ON_QA
Doc Type Known Issue Bug Fix
Elad 2021-09-14 16:07:25 UTC CC ebenahar
QA Contact ratamir amagrawa
Aman Agrawal 2021-09-15 08:13:05 UTC QA Contact amagrawa sagrawal
Rejy M Cyriac 2021-09-26 22:01:08 UTC Target Release OCS 4.9.0 ---
Rejy M Cyriac 2021-09-26 22:02:53 UTC Component ocs-operator ocs-operator
Product Red Hat OpenShift Container Storage Red Hat OpenShift Data Foundation
RHEL Program Management 2021-09-26 22:05:57 UTC Target Release --- ODF 4.9.0
Olive Lakra 2021-10-05 05:10:02 UTC CC olakra
Sidhant Agrawal 2021-10-19 17:53:04 UTC Status ON_QA VERIFIED
Mudit Agarwal 2021-11-03 04:17:34 UTC Blocks 2011326
Mudit Agarwal 2021-12-07 10:17:54 UTC Flags needinfo?(amohan)
Doc Text .Monitoring spec is reset in `CephCluster` resource

Monitoring spec becomes empty whenever `ocs-operator` is restarted or during an upgrade. This has no impact on functionality, but if you are looking for the monitoring endpoint details, you will find them empty.

To resolve this issue, update the `rook-ceph-external-cluster-details` secret after upgrading from 4.7 to 4.8 so that the details of all endpoints (such as comma-separated IP addresses of the active and standby MGRs) are updated in the `MonitoringEndpoint` data key. This also helps avoid future problems arising from differences in the number of endpoints in fresh versus upgraded clusters.
arun kumar mohan 2021-12-07 14:12:13 UTC Doc Text Cause: During an OCS upgrade, the monitoring endpoints get reset in the external CephCluster's monitoring spec.

Consequence: Even though there were no serious implications and things kept working after the upgrade, this was not the expected behavior.

Fix: The way monitoring endpoints are passed to the CephCluster was changed. Previously, the endpoints were passed through the 'Reconciler' object, which gets reset, so they were not passed on to the CephCluster's monitoring spec.
Now, before the CephCluster is created, the endpoints are read directly from the JSON secret 'rook-ceph-external-cluster-details' and the CephCluster spec is updated.

Result: The monitoring endpoint spec in the CephCluster is updated with the appropriate values even after an OCS upgrade.
Flags needinfo?(amohan)
Kusuma 2021-12-08 18:17:02 UTC Flags needinfo?(amohan)
CC kbg
Doc Text Cause: During an OCS upgrade, the monitoring endpoints get reset in the external CephCluster's monitoring spec.

Consequence: Even though there were no serious implications and things kept working after the upgrade, this was not the expected behavior.

Fix: The way monitoring endpoints are passed to the CephCluster was changed. Previously, the endpoints were passed through the 'Reconciler' object, which gets reset, so they were not passed on to the CephCluster's monitoring spec.
Now, before the CephCluster is created, the endpoints are read directly from the JSON secret 'rook-ceph-external-cluster-details' and the CephCluster spec is updated.

Result: The monitoring endpoint spec in the CephCluster is updated with the appropriate values even after an OCS upgrade.
Monitoring spec is reset in the CephCluster resource in external mode

Previously, when OpenShift Container Storage was upgraded, the monitoring endpoints would get reset in the external CephCluster's monitoring spec. This was not the expected behavior and was caused by the way the monitoring endpoints were passed to the CephCluster.
With this update, the endpoints are read directly from the JSON secret, `rook-ceph-external-cluster-details`, before the CephCluster is created, and the CephCluster spec is updated with them. As a result, the monitoring endpoint spec in the CephCluster is updated with the appropriate values even after the OpenShift Container Storage upgrade.
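The fix described above can be sketched as follows: rather than carrying the endpoints on a reconciler object that may be reset on restart, they are read from the secret each time the monitoring section of the spec is built. This is an illustrative Python sketch, not the operator's actual Go code; the secret layout and the `externalMgrEndpoints` field name are assumptions here:

```python
import base64

def monitoring_endpoints_from_secret(secret):
    """Read the monitoring endpoints directly from the
    rook-ceph-external-cluster-details secret rather than from a cached
    reconciler field. Assumed layout: secret 'data' values are
    base64-encoded and MonitoringEndpoint holds comma-separated IPs."""
    raw = secret["data"]["MonitoringEndpoint"]
    return base64.b64decode(raw).decode().split(",")

def build_monitoring_spec(secret):
    """Build the monitoring section of a CephCluster spec from the
    secret, so an ocs-operator restart cannot leave it empty."""
    return {"monitoring": {"externalMgrEndpoints": [
        {"ip": ip} for ip in monitoring_endpoints_from_secret(secret)
    ]}}
```

Because the secret, not the operator process, is the source of truth, rebuilding the spec after a restart or upgrade yields the same endpoint list.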
errata-xmlrpc 2021-12-13 15:16:36 UTC Status VERIFIED RELEASE_PENDING
errata-xmlrpc 2021-12-13 17:44:54 UTC Resolution --- ERRATA
Status RELEASE_PENDING CLOSED
Last Closed 2021-12-13 17:44:54 UTC
errata-xmlrpc 2021-12-13 17:45:17 UTC Link ID Red Hat Product Errata RHSA-2021:5086
arun kumar mohan 2022-03-11 06:36:06 UTC Flags needinfo?(amohan)
Ramakrishnan Periyasamy 2022-08-17 10:04:11 UTC CC rperiyas
Elad 2023-08-09 17:00:43 UTC CC odf-bz-bot
