History of bug 1761474
| Who | When | What | Removed | Added |
|---|---|---|---|---|
| Raz Tamir | 2019-10-14 13:36:15 UTC | CC | | ratamir |
| | | Dependent Products | | Red Hat OpenShift Container Storage |
| Yaniv Kaul | 2019-10-14 13:44:56 UTC | Flags | | needinfo?(adeza) |
| Alfredo Deza | 2019-10-14 14:45:56 UTC | Flags | needinfo?(adeza) | |
| Alfredo Deza | 2019-10-14 14:50:37 UTC | Severity | unspecified | high |
| Boris Ranto | 2019-10-14 16:07:44 UTC | CC | | dzafman, kchai, nojha |
| | | Component | Ceph-Mgr Plugins | RADOS |
| | | Assignee | branto | nojha |
| | | QA Contact | mkasturi | mmurthy |
| Yaniv Kaul | 2020-01-08 14:42:25 UTC | Flags | | needinfo?(adeza) |
| Alfredo Deza | 2020-01-08 15:08:29 UTC | Flags | needinfo?(adeza) | |
| Josh Durgin | 2020-01-08 15:40:07 UTC | Target Release | 4.* | 4.1 |
| | | CC | | jdurgin |
| Neha Ojha | 2020-01-31 19:48:32 UTC | Status | NEW | ASSIGNED |
| Neha Ojha | 2020-02-06 22:39:03 UTC | Priority | unspecified | medium |
| Ken Dreyer (Red Hat) | 2020-02-24 23:03:28 UTC | CC | | kdreyer |
| Josh Durgin | 2020-03-23 20:45:40 UTC | Status | ASSIGNED | MODIFIED |
| Hemanth Kumar | 2020-03-24 16:03:02 UTC | Flags | needinfo?(ceph-qe-bugs) | |
| | | CC | | tserlin |
| | | CC | | hyelloji |
| | | Flags | needinfo?(ceph-qe-bugs) | needinfo- |
| errata-xmlrpc | 2020-03-24 18:08:40 UTC | Fixed In Version | | ceph-14.2.8-3.el8, ceph-14.2.8-3.el7 |
| | | Status | MODIFIED | ON_QA |
| Karen Norteman | 2020-03-27 15:33:07 UTC | CC | | knortema |
| | | Doc Type | If docs needed, set a value | Bug Fix |
| Karen Norteman | 2020-03-31 20:13:00 UTC | Blocks | | 1816167 |
| Manohar Murthy | 2020-04-14 11:45:08 UTC | Flags | | needinfo?(jdurgin) |
| Josh Durgin | 2020-04-14 16:32:03 UTC | Flags | needinfo?(jdurgin) | |
| Karen Norteman | 2020-04-14 17:55:58 UTC | Flags | | needinfo?(nojha) |
| Manohar Murthy | 2020-04-15 17:05:50 UTC | Status | ON_QA | VERIFIED |
| Neha Ojha | 2020-04-16 17:48:23 UTC | Doc Text | | This change will raise a health warning if a Ceph cluster is set up with no managers or if all the managers go down. This is because Ceph now heavily depends on the manager to deliver key features and it is not advisable to run a Ceph cluster without managers. |
| | | Flags | needinfo?(nojha) | |
| Aron Gunn | 2020-04-21 18:28:36 UTC | CC | | agunn |
| | | Docs Contact | | agunn |
| Aron Gunn | 2020-04-21 21:30:08 UTC | Doc Text | This change will raise a health warning if a Ceph cluster is set up with no managers or if all the managers go down. This is because Ceph now heavily depends on the manager to deliver key features and it is not advisable to run a Ceph cluster without managers. | .A health warning status is reported when no Ceph Managers or OSDs are in the storage cluster In previous {storage-product} releases, the storage cluster status was `HEALTH_OK` even though there were no Ceph Managers or OSDs in the storage cluster. With this release, this health status has changed, and will report a health warning if a storage cluster is not set up with Ceph Managers or if all the Ceph Managers go down. Because {storage-product} heavily relies on the Ceph Manager to deliver key features and it is not advisable to run a Ceph storage cluster without Ceph Managers or OSDs. |
| Aron Gunn | 2020-04-21 21:32:23 UTC | Doc Text | .A health warning status is reported when no Ceph Managers or OSDs are in the storage cluster In previous {storage-product} releases, the storage cluster status was `HEALTH_OK` even though there were no Ceph Managers or OSDs in the storage cluster. With this release, this health status has changed, and will report a health warning if a storage cluster is not set up with Ceph Managers or if all the Ceph Managers go down. Because {storage-product} heavily relies on the Ceph Manager to deliver key features and it is not advisable to run a Ceph storage cluster without Ceph Managers or OSDs. | .A health warning status is reported when no Ceph Managers or OSDs are in the storage cluster In previous {storage-product} releases, the storage cluster health status was `HEALTH_OK` even though there were no Ceph Managers or OSDs in the storage cluster. With this release, this health status has changed, and will report a health warning if a storage cluster is not set up with Ceph Managers or if all the Ceph Managers go down. Because {storage-product} heavily relies on the Ceph Manager to deliver key features and it is not advisable to run a Ceph storage cluster without Ceph Managers or OSDs. |
| Karen Norteman | 2020-04-27 14:57:30 UTC | Doc Text | .A health warning status is reported when no Ceph Managers or OSDs are in the storage cluster In previous {storage-product} releases, the storage cluster health status was `HEALTH_OK` even though there were no Ceph Managers or OSDs in the storage cluster. With this release, this health status has changed, and will report a health warning if a storage cluster is not set up with Ceph Managers or if all the Ceph Managers go down. Because {storage-product} heavily relies on the Ceph Manager to deliver key features and it is not advisable to run a Ceph storage cluster without Ceph Managers or OSDs. | .A health warning status is reported when no Ceph Managers or OSDs are in the storage cluster In previous {storage-product} releases, the storage cluster health status was `HEALTH_OK` even though there were no Ceph Managers or OSDs in the storage cluster. With this release, this health status has changed, and will report a health warning if a storage cluster is not set up with Ceph Managers, or if all the Ceph Managers go down. Because {storage-product} heavily relies on the Ceph Manager to deliver key features, it is not advisable to run a Ceph storage cluster without Ceph Managers or OSDs. |
| errata-xmlrpc | 2020-05-19 15:11:36 UTC | Status | VERIFIED | RELEASE_PENDING |
| errata-xmlrpc | 2020-05-19 17:31:11 UTC | Status | RELEASE_PENDING | CLOSED |
| | | Resolution | --- | ERRATA |
| | | Last Closed | | 2020-05-19 17:31:11 UTC |
| errata-xmlrpc | 2020-05-19 17:31:30 UTC | Link ID | | Red Hat Product Errata RHSA-2020:2231 |
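
The Doc Text above describes the user-visible effect of the fix delivered in ceph-14.2.8-3: a storage cluster whose Ceph Managers are absent or all down now reports a health warning instead of `HEALTH_OK`. The following is a minimal sketch of how the new status could be observed programmatically. It assumes the `python-rados` bindings, a reachable cluster, and a standard `/etc/ceph/ceph.conf` with admin credentials, none of which are part of the bug record; in upstream Ceph the health check raised when no manager is active is named `MGR_DOWN`.

```python
import json

import rados  # python-rados bindings (assumed installed)

# Connect with default settings; assumes a standard /etc/ceph/ceph.conf
# and an admin keyring on this host (illustrative, not from the bug).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

try:
    # Ask the monitors for the structured health report
    # (equivalent to `ceph health --format json`).
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "health", "format": "json"}), b"")
    health = json.loads(outbuf)

    # With the fix, a cluster whose managers are all down reports
    # HEALTH_WARN and carries an MGR_DOWN check instead of HEALTH_OK.
    print(health["status"])
    for name, check in health.get("checks", {}).items():
        print(f"{name}: {check['summary']['message']}")
finally:
    cluster.shutdown()
```

On a cluster running an unfixed build, the same query would return `HEALTH_OK` even with no active manager, which is the behavior this bug corrected.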