Activity log for bug 1810525
| Who | When | What | Removed | Added |
|---|---|---|---|---|
| Elad | 2020-03-05 15:07:32 UTC | CC | | ebenahar |
| | | Summary | [GSS][RFE] Deletion of data is not allowed after the Ceph cluster reaches osd-full-ratio threshold. | [RFE] Deletion of data is not allowed after the Ceph cluster reaches osd-full-ratio threshold. |
| Michael Adam | 2020-03-05 16:47:50 UTC | Summary | [RFE] Deletion of data is not allowed after the Ceph cluster reaches osd-full-ratio threshold. | [GSS][RFE] Deletion of data is not allowed after the Ceph cluster reaches osd-full-ratio threshold. |
| | | Flags | | needinfo?(ebenahar) |
| Elad | 2020-03-16 11:16:36 UTC | Flags | needinfo?(ebenahar) | |
| Michael Adam | 2020-05-04 07:42:42 UTC | CC | | madam |
| | | Component | unclassified | ceph |
| | | Assignee | madam | bniver |
| | | Flags | | needinfo?(bniver) |
| Josh Durgin | 2020-05-06 22:02:00 UTC | CC | | jdurgin |
| | | Component | ceph | ocs-operator |
| | | Assignee | bniver | jrivera |
| | | Flags | needinfo?(bniver) | |
| Josh Durgin | 2020-05-06 22:03:53 UTC | Flags | | needinfo?(assingh) |
| Ashish Singh | 2020-05-07 18:42:15 UTC | Flags | needinfo?(assingh) | |
| Bipin Kunal | 2020-05-29 06:57:14 UTC | CC | | bkunal |
| | | Blocks | | 1841426 |
| Mudit Agarwal | 2020-09-28 05:36:49 UTC | CC | | muagarwa |
| Orit Wasserman | 2020-12-23 14:34:08 UTC | CC | | owasserm |
| | | Component | ocs-operator | csi-driver |
| | | Assignee | jrivera | hchiramm |
| | | QA Contact | ratamir | ebenahar |
| Madhu Rajanna | 2020-12-23 14:39:17 UTC | CC | | mrajanna |
| Niels de Vos | 2021-01-15 09:21:37 UTC | Depends On | | 1897351 |
| Humble Chirammal | 2021-05-05 06:17:09 UTC | Summary | [GSS][RFE] Deletion of data is not allowed after the Ceph cluster reaches osd-full-ratio threshold. | [GSS][RFE] [Tracker for Ceph # BZ # 1910272] Deletion of data is not allowed after the Ceph cluster reaches osd-full-ratio threshold. |
| Humble Chirammal | 2021-08-02 06:47:24 UTC | Status | NEW | ASSIGNED |
| Niels de Vos | 2021-08-05 11:37:01 UTC | CC | | ndevos |
| | | Keywords | | Tracking |
| Mudit Agarwal | 2021-08-12 01:16:07 UTC | Assignee | hchiramm | khiremat |
| Humble Chirammal | 2021-08-12 11:09:28 UTC | CC | | hchiramm |
| | | Status | ASSIGNED | NEW |
| Mudit Agarwal | 2021-09-21 11:28:47 UTC | Component | csi-driver | ceph |
| | | QA Contact | ebenahar | ratamir |
| krishnaram Karthick | 2021-09-22 11:29:03 UTC | CC | | kramdoss |
| Rejy M Cyriac | 2021-09-26 17:30:30 UTC | Product | Red Hat OpenShift Container Storage | Red Hat OpenShift Data Foundation |
| | | Component | ceph | ceph |
| Rejy M Cyriac | 2021-09-26 22:53:03 UTC | CC | | rcyriac |
| Mudit Agarwal | 2021-09-28 13:01:43 UTC | Status | NEW | POST |
| Mudit Agarwal | 2021-09-30 10:50:56 UTC | Status | POST | MODIFIED |
| Eran Tamir | 2021-09-30 11:05:15 UTC | CC | | etamir |
| Elad | 2021-10-04 11:16:38 UTC | QA Contact | ratamir | asandler |
| RHEL Program Management | 2021-10-04 11:16:47 UTC | Target Release | --- | ODF 4.9.0 |
| Mudit Agarwal | 2021-10-08 13:25:16 UTC | Fixed In Version | | v4.9.0-182.ci |
| | | Status | MODIFIED | ON_QA |
| Mudit Agarwal | 2021-10-11 16:27:53 UTC | Group | | redhat |
| Mudit Agarwal | 2021-11-03 03:43:32 UTC | Doc Text | | .Deletion of data is allowed when the storage cluster is full Previously, when the storage cluster was full, the Ceph Manager hung on checking pool permissions while reading the configuration file. The Ceph Metadata Server (MDS) did not allow write operations to occur when the Ceph OSD was full, resulting in an `ENOSPACE` error. When the storage cluster hit full ratio, users could not delete data to free space using the Ceph Manager and `ceph-volume` plugin. With this release, the new FULL feature is introduced. This feature gives the Ceph Manager FULL capability, and bypasses the Ceph OSD full check. Additionally, the `client_check_pool_permission` option can be disabled. With the Ceph Manager having FULL capabilities, the MDS no longer blocks Ceph Manager calls. This results in allowing the Ceph Manager to free up space by deleting subvolumes and snapshots when a storage cluster is full. |
| Mudit Agarwal | 2021-11-03 04:17:34 UTC | Blocks | | 2011326 |
| Kusuma | 2021-11-25 14:37:09 UTC | Flags | | needinfo?(khiremat) needinfo?(muagarwa) |
| | | CC | | kbg, khiremat |
| | | Doc Text | .Deletion of data is allowed when the storage cluster is full Previously, when the storage cluster was full, the Ceph Manager hung on checking pool permissions while reading the configuration file. The Ceph Metadata Server (MDS) did not allow write operations to occur when the Ceph OSD was full, resulting in an `ENOSPACE` error. When the storage cluster hit full ratio, users could not delete data to free space using the Ceph Manager and `ceph-volume` plugin. With this release, the new FULL feature is introduced. This feature gives the Ceph Manager FULL capability, and bypasses the Ceph OSD full check. Additionally, the `client_check_pool_permission` option can be disabled. With the Ceph Manager having FULL capabilities, the MDS no longer blocks Ceph Manager calls. This results in allowing the Ceph Manager to free up space by deleting subvolumes and snapshots when a storage cluster is full. | .Deletion of data is allowed when the storage cluster is full Previously, when the storage cluster was full, the Ceph manager would hang when checking pool permissions. The Ceph Metadata Server (MDS) did not allow `write` operations when the Ceph OSD was full. Also, it was not possible to delete data to free up space using the Ceph Manager and `ceph-volume` plugin. With this release, a new FULL feature is introduced, which gives the Ceph Manager the capability to bypass the Ceph OSD full check. With the Ceph Manager having FULL capabilities, MDS no longer blocks Ceph Manager calls. This enables the Ceph Manager to free up space by deleting subvolumes and snapshots when a storage cluster is full. Also, you have the option to disable the `client_check_pool_permission` option. |
| Mudit Agarwal | 2021-11-30 02:50:03 UTC | Flags | needinfo?(muagarwa) | |
| Kotresh HR | 2021-12-01 10:46:19 UTC | Flags | needinfo?(khiremat) | |
| Kusuma | 2021-12-01 12:49:54 UTC | Doc Text | .Deletion of data is allowed when the storage cluster is full Previously, when the storage cluster was full, the Ceph manager would hang when checking pool permissions. The Ceph Metadata Server (MDS) did not allow `write` operations when the Ceph OSD was full. Also, it was not possible to delete data to free up space using the Ceph Manager and `ceph-volume` plugin. With this release, a new FULL feature is introduced, which gives the Ceph Manager the capability to bypass the Ceph OSD full check. With the Ceph Manager having FULL capabilities, MDS no longer blocks Ceph Manager calls. This enables the Ceph Manager to free up space by deleting subvolumes and snapshots when a storage cluster is full. Also, you have the option to disable the `client_check_pool_permission` option. | .Deletion of data is allowed when the storage cluster is full Previously, when the storage cluster was full, the Ceph Manager hung on checking pool permissions while reading the configuration file. The Ceph Metadata Server (MDS) did not allow write operations to occur when the Ceph OSD was full, resulting in an `ENOSPACE` error. When the storage cluster hit full ratio, users could not delete data to free space using the Ceph Manager volume plugin. With this release, the new FULL capability is introduced. With the FULL capability, the Ceph Manager bypasses the Ceph OSD full check. The `client_check_pool_permission` option is disabled by default whereas, in previous releases, it was enabled. With the Ceph Manager having FULL capabilities, the MDS no longer blocks Ceph Manager calls. This results in allowing the Ceph Manager to free up space by deleting subvolumes and snapshots when a storage cluster is full. |
| Anna Sandler | 2021-12-02 18:29:17 UTC | Status | ON_QA | VERIFIED |
| errata-xmlrpc | 2021-12-13 15:17:25 UTC | Status | VERIFIED | RELEASE_PENDING |
| errata-xmlrpc | 2021-12-13 17:44:23 UTC | Status | RELEASE_PENDING | CLOSED |
| | | Resolution | --- | ERRATA |
| | | Last Closed | | 2021-12-13 17:44:23 UTC |
| errata-xmlrpc | 2021-12-13 17:44:44 UTC | Link ID | | Red Hat Product Errata RHSA-2021:5086 |
| Elad | 2023-08-09 16:37:41 UTC | CC | | odf-bz-bot |
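
The Doc Text entries above describe the fix at a high level: the Ceph Manager gains a FULL capability that bypasses the Ceph OSD full check, so its volumes plugin can delete subvolumes and snapshots even when the cluster has hit the full ratio. The sketch below shows what that recovery path might look like from the `ceph` CLI. It is illustrative only, not taken from the bug: the volume name `ocs-storagecluster-cephfilesystem`, the subvolume `csi-vol-example`, and the snapshot `snap-example` are hypothetical placeholders, and the upstream Ceph configuration option corresponding to the `client_check_pool_permission` option named in the Doc Text is spelled `client_check_pool_perm`.

```sh
# Hypothetical names for illustration; substitute your own volume,
# subvolume, and snapshot.
VOL=ocs-storagecluster-cephfilesystem
SUBVOL=csi-vol-example

# A cluster at or above the full ratio reports HEALTH_ERR with an
# OSD_FULL condition; these commands show the current state.
ceph health detail
ceph df

# With the fix, the Ceph Manager volumes plugin can still delete
# snapshots and subvolumes on a full cluster, freeing space.
ceph fs subvolume snapshot rm "$VOL" "$SUBVOL" snap-example
ceph fs subvolume rm "$VOL" "$SUBVOL"

# The pool-permission check mentioned in the Doc Text is the upstream
# option client_check_pool_perm; it can be disabled for clients.
ceph config set client client_check_pool_perm false
```

Whether the option needs to be toggled at all in a given release, and which capability string the Manager's CephX key carries, is determined by the Ceph-side change tracked in BZ 1910272 rather than by anything recorded in this log.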