Back to bug 1959508
| Who | When | What | Removed | Added |
|---|---|---|---|---|
| Sunil Kumar Nagaraju | 2021-05-11 17:12:19 UTC | Keywords | Automation | |
| Sunil Kumar Nagaraju | 2021-05-12 06:12:44 UTC | Target Release | 5.1 | 5.0 |
| Juan Miguel Olmo | 2021-05-12 10:02:45 UTC | Status | NEW | ASSIGNED |
| Veera Raghava Reddy | 2021-05-13 13:44:34 UTC | CC | vereddy | |
| Manasa | 2021-05-19 05:46:31 UTC | CC | mgowri | |
| Juan Miguel Olmo | 2021-05-21 12:19:23 UTC | Link ID | Ceph Project Bug Tracker 50928 | |
| Juan Miguel Olmo | 2021-05-21 12:31:30 UTC | Status | ASSIGNED | MODIFIED |
| | | Link ID | Github ceph/ceph/pull/41477 | |
| | | Flags | needinfo?(mgowri) needinfo?(mgowri) | |
| Sebastian Wagner | 2021-05-25 19:05:10 UTC | Target Release | 5.0 | 5.1 |
| | | CC | sewagner | |
| Sebastian Wagner | 2021-05-26 11:34:05 UTC | Severity | high | low |
| Manasa | 2021-05-26 12:15:52 UTC | Flags | needinfo?(mgowri) needinfo?(mgowri) | |
| Red Hat One Jira (issues.redhat.com) | 2021-05-27 16:14:10 UTC | Link ID | Red Hat Issue Tracker RHCEPH-79 | |
| Sebastian Wagner | 2021-05-31 13:15:55 UTC | Priority | unspecified | medium |
| Preethi | 2021-06-04 05:44:32 UTC | CC | pnataraj | |
| | | Doc Type | If docs needed, set a value | Known Issue |
| Preethi | 2021-06-04 05:45:25 UTC | Blocks | 1959686 | |
| Ranjini M N | 2021-06-10 06:44:00 UTC | CC | jolmomar, rmandyam | |
| | | Flags | needinfo?(jolmomar) | |
| Sunil Kumar Nagaraju | 2021-06-14 05:22:06 UTC | Flags | needinfo?(sewagner) needinfo?(jolmomar) | |
| Ranjini M N | 2021-06-15 09:23:29 UTC | Flags | needinfo?(jolmomar) | |
| Juan Miguel Olmo | 2021-06-15 10:03:40 UTC | Doc Text | Cause: When listing OSD services (for example with "ceph orch ls"), the total number of OSDs that can be created for each OSD service is bigger than the real number. Consequence: It seems that OSD services are not starting the right number of OSDs, which can be understood as problems starting OSDs. Ex: # ceph orch ls ... osd.all-available-devices 12/16 4m ago 4h * ... This can lead us to think that there are 4 OSDs (16-12) not started. Workaround (if any): In order to see if there is a real problem, just check the ceph health ("ceph -s") to see if ALL the OSDs are up and running. Result: The list of services will report the wrong total number of OSDs. The "ceph health" command will provide the correct information. | |
| | | Flags | needinfo?(jolmomar) needinfo?(sewagner) needinfo?(jolmomar) needinfo?(jolmomar) | |
| Ranjini M N | 2021-06-15 16:07:35 UTC | Docs Contact | knortema | rmandyam |
| | | Doc Text | Cause: When listing OSD services (for example with "ceph orch ls"), the total number of OSDs that can be created for each OSD service is bigger than the real number. Consequence: It seems that OSD services are not starting the right number of OSDs, which can be understood as problems starting OSDs. Ex: # ceph orch ls ... osd.all-available-devices 12/16 4m ago 4h * ... This can lead us to think that there are 4 OSDs (16-12) not started. Workaround (if any): In order to see if there is a real problem, just check the ceph health ("ceph -s") to see if ALL the OSDs are up and running. Result: The list of services will report the wrong total number of OSDs. The "ceph health" command will provide the correct information. | .The `ceph orch ls` command does not list the correct number of OSDs that can be created in the {storage-product} cluster The command `ceph orch ls` gives the following output: .Example ----- # ceph orch ls osd.all-available-devices 12/16 4m ago 4h * ----- As per the above output, four OSDs have not started, which is not correct. To work around this issue, run the `ceph -s` command to see if all the OSDs are up and running in a {storage-product} cluster. |
| | | Flags | needinfo?(jolmomar) | |
| Juan Miguel Olmo | 2021-06-21 07:22:16 UTC | Flags | needinfo?(jolmomar) | |
| Sebastian Wagner | 2021-09-28 14:01:04 UTC | Status | MODIFIED | ASSIGNED |
| | | Assignee | jolmomar | gabrioux |
| Guillaume Abrioux | 2021-10-13 05:46:20 UTC | Link ID | Github ceph/ceph/pull/43253 | |
| | | Status | ASSIGNED | POST |
| Guillaume Abrioux | 2021-10-13 05:46:36 UTC | Link ID | Github ceph/ceph/pull/41477 | |
| Ken Dreyer (Red Hat) | 2021-10-19 19:21:53 UTC | CC | kdreyer | |
| errata-xmlrpc | 2021-11-27 05:06:03 UTC | CC | tserlin | |
| | | Status | POST | MODIFIED |
| | | Fixed In Version | ceph-16.2.6-42.el8cp | |
| | | Status | MODIFIED | ON_QA |
| Rahul Lepakshi | 2021-12-13 06:14:15 UTC | CC | rlepaksh | |
| Rahul Lepakshi | 2021-12-17 05:01:22 UTC | QA Contact | vashastr | rlepaksh |
| Sunil Kumar Nagaraju | 2021-12-20 11:01:29 UTC | Status | ON_QA | ASSIGNED |
| Sebastian Wagner | 2021-12-20 15:09:57 UTC | Assignee | gabrioux | sewagner |
| | | Link ID | Github ceph/ceph/pull/44367 | |
| Sunil Kumar Nagaraju | 2021-12-22 03:09:13 UTC | QA Contact | rlepaksh | sunnagar |
| Aron Gunn | 2022-01-18 21:32:14 UTC | Blocks | 2031073 | |
| | | CC | agunn | |
| Red Hat Bugzilla | 2022-01-31 23:32:10 UTC | CC | sewagner | |
| | | Assignee | sewagner | adking |
| Ranjini M N | 2022-02-16 07:45:35 UTC | CC | adking | |
| | | Flags | needinfo?(adking) | |
| Adam King | 2022-02-23 21:57:04 UTC | Doc Text | .The `ceph orch ls` command does not list the correct number of OSDs that can be created in the {storage-product} cluster The command `ceph orch ls` gives the following output: .Example ----- # ceph orch ls osd.all-available-devices 12/16 4m ago 4h * ----- As per the above output, four OSDs have not started, which is not correct. To work around this issue, run the `ceph -s` command to see if all the OSDs are up and running in a {storage-product} cluster. | In RHCS 5.0, "ceph orch ls" could show an incorrect size for an osd service. For example, an osd service with 5 osds might show something like "5/8" even though there was never intended to be 8 osds attached to this service. In RHCS 5.1, the size for osds now simply displays the number of osds found for the service and no longer guesses how many there should be in total. If cephadm sees, for example, 6 osd daemons for the service, it will simply say there are 6 osd daemons. |
| | | Flags | needinfo?(adking) | |
| | | Doc Type | Known Issue | Bug Fix |
| Adam King | 2022-02-23 21:57:37 UTC | Status | ASSIGNED | POST |
| Ranjini M N | 2022-02-24 05:50:39 UTC | Fixed In Version | ceph-16.2.6-42.el8cp | ceph-16.2.7-71.el8cp |
| | | Status | POST | MODIFIED |
| | | Status | MODIFIED | ON_QA |
| | | Doc Text | In RHCS 5.0, "ceph orch ls" could show an incorrect size for an osd service. For example, an osd service with 5 osds might show something like "5/8" even though there was never intended to be 8 osds attached to this service. In RHCS 5.1, the size for osds now simply displays the number of osds found for the service and no longer guesses how many there should be in total. If cephadm sees, for example, 6 osd daemons for the service, it will simply say there are 6 osd daemons. | .The `ceph orch ls` command now displays the correct size of the Ceph OSDs service Previously, the `ceph orch ls` command would show an incorrect size for the Ceph OSD service. For example, an OSD service with five OSDs might show something like `5/8` even though eight OSDs were never intended for this service. With this release, the size displays the number of OSDs found for this service. For example, if there are six OSDs for a service, then `cephadm` displays six daemons. |
| | | Flags | needinfo?(adking) | |
| Adam King | 2022-02-24 06:27:22 UTC | Flags | needinfo?(adking) | |
| Sunil Kumar Nagaraju | 2022-03-01 06:20:35 UTC | Status | ON_QA | ASSIGNED |
| | | Flags | needinfo?(adking) | |
| Adam King | 2022-03-01 13:49:17 UTC | Flags | needinfo?(adking) | |
| Sunil Kumar Nagaraju | 2022-03-02 07:20:38 UTC | Status | ASSIGNED | ON_QA |
| Sunil Kumar Nagaraju | 2022-03-02 07:21:16 UTC | Status | ON_QA | VERIFIED |
| errata-xmlrpc | 2022-04-04 08:01:05 UTC | CC | nravinas | |
| | | Status | VERIFIED | RELEASE_PENDING |
| errata-xmlrpc | 2022-04-04 10:20:39 UTC | Resolution | --- | ERRATA |
| | | Status | RELEASE_PENDING | CLOSED |
| | | Last Closed | 2022-04-04 10:20:39 UTC | |
| errata-xmlrpc | 2022-04-04 10:21:04 UTC | Link ID | Red Hat Product Errata RHSA-2022:1174 | |
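The before/after behavior described in the final Doc Text can be summarized with a small sketch. This is illustrative only, not cephadm's actual implementation; the function names are hypothetical:

```python
# Illustrative sketch (hypothetical names, not cephadm source) of the change
# to the SIZE reporting for an osd service described in the Doc Text.

def size_column_pre_fix(running: int, estimated_capacity: int) -> str:
    """RHCS 5.0 behavior: the total was an estimate of how many OSDs *could*
    be created, so '12/16' could appear even though 16 OSDs were never
    intended, wrongly suggesting 4 OSDs had failed to start."""
    return f"{running}/{estimated_capacity}"

def size_column_post_fix(daemons_found: int) -> str:
    """RHCS 5.1 behavior per the fix: the size simply reflects the number
    of OSD daemons actually found for the service, with no guessing."""
    return f"{daemons_found}/{daemons_found}"

if __name__ == "__main__":
    # The misleading pre-fix display from the bug report's example:
    print(size_column_pre_fix(12, 16))  # 12/16
    # Post-fix, the same 12 daemons report a matching size:
    print(size_column_post_fix(12))     # 12/12
```

As the Doc Text notes, the reliable check either way is `ceph -s`, which reports the true up/in OSD counts for the cluster.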