History of bug 1896810

Who When What Removed Added
Yaniv Kaul 2020-11-12 09:27:27 UTC CC ratamir
Flags needinfo?(ratamir)
Neha Ojha 2020-11-13 23:30:09 UTC CC nojha
Summary After running the test 'test_recovery_from_volume_deletion' we get warning "1 daemons have recently crashed" → os/bluestore/KernelDevice.cc: ceph_abort_msg(Unexpected IO error)
Elad 2020-11-15 13:25:57 UTC CC ebenahar
Flags needinfo?(ratamir)
Yaniv Kaul 2020-11-15 15:47:14 UTC Flags needinfo?(nojha)
Scott Ostapovicz 2020-11-16 13:05:13 UTC Assignee sostapov nojha
Neha Ojha 2020-11-16 19:44:43 UTC Flags needinfo?(nojha)
Yaniv Kaul 2020-11-22 12:56:37 UTC Flags needinfo?(ikave)
Elad 2020-11-23 16:32:34 UTC CC sdudhgao
Component ceph rook
Assignee nojha tnielsen
QA Contact ratamir ebenahar
Flags needinfo?(sdudhgao)
Sébastien Han 2020-11-26 09:46:31 UTC CC shan
Assignee tnielsen rohgupta
Servesha 2020-12-03 10:46:14 UTC Flags needinfo?(sdudhgao)
Neha Berry 2020-12-04 11:22:05 UTC CC nberry
Flags needinfo?(sdudhgao)
Servesha 2020-12-04 11:39:09 UTC Flags needinfo?(sdudhgao)
Mudit Agarwal 2020-12-04 11:46:29 UTC Blocks 1882359
Mudit Agarwal 2020-12-04 11:47:22 UTC CC muagarwa
Flags needinfo?(sdudhgao)
Servesha 2020-12-04 13:09:36 UTC Doc Text Cause: This issue can be seen after performing the device replacement procedure.

Consequence: After disk replacement, the warning "1 daemons have recently crashed" can be seen even though all OSD pods are up and running. The warning changes the Ceph status to HEALTH_WARN when it should be HEALTH_OK.

Workaround (if any): rsh to the ceph-tools pod and silence the warning; the Ceph health will then return to HEALTH_OK.

Result: The Ceph health changes from HEALTH_WARN to HEALTH_OK.
Doc Type If docs needed, set a value → Known Issue
Servesha 2020-12-04 13:10:26 UTC Flags needinfo?(sdudhgao)
Erin Donnelly 2020-12-06 21:00:25 UTC CC edonnell
Doc Text Cause: This issue can be seen after performing the device replacement procedure.

Consequence: After disk replacement, the warning "1 daemons have recently crashed" can be seen even though all OSD pods are up and running. The warning changes the Ceph status to HEALTH_WARN when it should be HEALTH_OK.

Workaround (if any): rsh to the ceph-tools pod and silence the warning; the Ceph health will then return to HEALTH_OK.

Result: The Ceph health changes from HEALTH_WARN to HEALTH_OK.
.Ceph status is `HEALTH_WARN` after disk replacement

After disk replacement, the warning `1 daemons have recently crashed` is seen even though all OSD pods are up and running. The warning changes the Ceph status from `HEALTH_OK` to `HEALTH_WARN`. To work around this issue, `rsh` to the `ceph-tools` pod and silence the warning; the Ceph health will then return to `HEALTH_OK`.
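
For reference, "silencing the warning" from the toolbox pod typically means archiving the recorded crash reports with Ceph's crash module. A minimal sketch of the workaround, assuming the toolbox pod runs in the openshift-storage namespace and carries the app=rook-ceph-tools label (both are assumptions, not taken from this bug):

    # Locate the Ceph toolbox pod (namespace and label selector are assumptions).
    TOOLS_POD=$(oc -n openshift-storage get pod -l app=rook-ceph-tools -o name)

    # List the recorded crashes, then archive them so the
    # "1 daemons have recently crashed" warning is cleared.
    oc -n openshift-storage rsh "$TOOLS_POD" ceph crash ls
    oc -n openshift-storage rsh "$TOOLS_POD" ceph crash archive-all

    # Health should report HEALTH_OK again once the crashes are archived.
    oc -n openshift-storage rsh "$TOOLS_POD" ceph health

Archived crashes remain visible in `ceph crash ls` (though not in `ceph crash ls-new`); only the RECENT_CRASH health warning is cleared.
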
Yaniv Kaul 2020-12-07 07:22:46 UTC Flags needinfo?(sdudhgao)
Servesha 2020-12-07 08:41:07 UTC Flags needinfo?(sdudhgao)
Pulkit Kundra 2021-01-07 08:18:15 UTC CC pkundra
Assignee rohgupta pkundra
Mudit Agarwal 2021-01-13 10:12:30 UTC Status NEW ASSIGNED
Flags needinfo?(pkundra)
Pulkit Kundra 2021-01-18 04:29:55 UTC Summary os/bluestore/KernelDevice.cc: ceph_abort_msg(Unexpected IO error) → Silence crash warning in osd removal job.
Pulkit Kundra 2021-01-18 05:16:48 UTC Flags needinfo?(pkundra)
Pulkit Kundra 2021-01-18 12:43:40 UTC Status ASSIGNED POST
Link ID Github rook/rook/pull/7001
Neha Berry 2021-02-02 05:59:08 UTC QA Contact ebenahar ikave
RHEL Program Management 2021-02-02 05:59:16 UTC Target Release --- OCS 4.7.0
Sébastien Han 2021-02-18 16:38:03 UTC Flags needinfo?(pkundra)
Pulkit Kundra 2021-02-19 05:34:04 UTC Link ID Github openshift/rook/pull/175
Flags needinfo?(pkundra)
OpenShift BugZilla Robot 2021-02-19 08:17:10 UTC Status POST MODIFIED
Mudit Agarwal 2021-02-24 17:06:52 UTC Status MODIFIED ON_QA
Fixed In Version 4.7.0-272.ci
Itzhak 2021-03-03 17:03:18 UTC Status ON_QA ASSIGNED
Itzhak 2021-03-04 10:28:47 UTC Flags needinfo?(ikave)
Sébastien Han 2021-03-08 08:54:42 UTC Flags needinfo?(muagarwa)
Mudit Agarwal 2021-03-08 09:00:08 UTC Flags needinfo?(muagarwa) needinfo?(ratamir)
Pulkit Kundra 2021-03-12 12:54:23 UTC Flags needinfo?(ikave)
Travis Nielsen 2021-03-15 17:12:02 UTC CC tnielsen
Blaine Gardner 2021-03-15 17:23:09 UTC CC brgardne
Itzhak 2021-03-16 15:30:19 UTC Flags needinfo?(ikave)
Orit Wasserman 2021-04-04 09:14:26 UTC CC owasserm
Travis Nielsen 2021-05-11 14:51:05 UTC Flags needinfo?(pkundra)
Travis Nielsen 2021-05-17 15:45:17 UTC Flags needinfo?(pkundra)
Pulkit Kundra 2021-05-18 14:06:28 UTC Flags needinfo?(pkundra) needinfo?(pkundra)
Sébastien Han 2021-05-24 15:45:29 UTC Component rook ceph
Assignee pkundra sostapov
QA Contact ikave ratamir
Elad 2021-06-01 08:49:47 UTC Keywords AutomationBackLog
Pulkit Kundra 2021-06-02 14:19:30 UTC Blocks 1967164
Mudit Agarwal 2021-06-02 14:40:21 UTC Summary Silence crash warning in osd removal job. → [Tracker for BZ #1967164] Silence crash warning in osd removal job.
Red Hat Bugzilla 2021-07-09 19:05:04 UTC CC pkundra
Mudit Agarwal 2021-08-20 02:14:00 UTC Target Release OCS 4.7.0 ---
Rejy M Cyriac 2021-09-26 22:42:39 UTC Product Red Hat OpenShift Container Storage → Red Hat OpenShift Data Foundation
Component ceph ceph
Oded 2021-11-29 13:36:12 UTC CC oviner
Rejy M Cyriac 2022-01-07 15:09:57 UTC QA Contact ratamir ebenahar
Red Hat Bugzilla 2022-01-10 10:25:19 UTC CC ratamir
Mudit Agarwal 2022-01-26 11:32:05 UTC Flags needinfo?(ratamir)
Brendan Conoboy 2022-07-13 15:11:43 UTC CC pnataraj
Sub Component RBD
Vasishta 2022-07-14 12:50:04 UTC Sub Component RBD Ceph-MGR
CC pdhiran, vashastr
Assignee sostapov nojha
Red Hat Bugzilla 2022-12-31 19:24:04 UTC CC pnataraj
Red Hat Bugzilla 2022-12-31 19:32:36 UTC CC pdhiran
Red Hat Bugzilla 2022-12-31 19:54:38 UTC CC nberry
Red Hat Bugzilla 2022-12-31 22:33:08 UTC CC oviner
Red Hat Bugzilla 2022-12-31 22:33:47 UTC CC owasserm
Red Hat Bugzilla 2022-12-31 22:37:15 UTC CC ebenahar
QA Contact ebenahar
Red Hat Bugzilla 2023-01-01 05:30:54 UTC CC edonnell
Red Hat Bugzilla 2023-01-01 06:02:21 UTC CC bniver
Red Hat Bugzilla 2023-01-01 07:22:30 UTC CC brgardne
Red Hat Bugzilla 2023-01-01 07:23:01 UTC CC tnielsen
Red Hat Bugzilla 2023-01-01 08:38:30 UTC CC nojha
Assignee nojha nobody
Red Hat Bugzilla 2023-01-01 08:45:22 UTC CC vashastr
Alasdair Kergon 2023-01-04 04:47:59 UTC Assignee nobody nojha
Alasdair Kergon 2023-01-04 04:49:43 UTC CC brgardne
Alasdair Kergon 2023-01-04 04:53:17 UTC CC edonnell
Alasdair Kergon 2023-01-04 05:12:15 UTC QA Contact ebenahar
Alasdair Kergon 2023-01-04 05:18:56 UTC CC nberry
Alasdair Kergon 2023-01-04 05:21:38 UTC CC nojha
Alasdair Kergon 2023-01-04 05:26:40 UTC CC oviner
Alasdair Kergon 2023-01-04 05:26:53 UTC CC owasserm
Alasdair Kergon 2023-01-04 05:30:13 UTC CC pdhiran
Alasdair Kergon 2023-01-04 05:32:18 UTC CC pnataraj
Alasdair Kergon 2023-01-04 05:49:38 UTC CC tnielsen
Alasdair Kergon 2023-01-04 05:53:49 UTC CC vashastr
Alasdair Kergon 2023-01-04 06:11:25 UTC CC bniver
Alasdair Kergon 2023-01-04 06:41:59 UTC CC ebenahar
Red Hat Bugzilla 2023-01-31 23:38:30 UTC CC madam
Aman Agrawal 2023-02-20 12:18:43 UTC CC amagrawa
Elad 2023-02-22 08:14:55 UTC Flags needinfo?(nojha)
Sunil Kumar Acharya 2023-03-19 17:58:34 UTC Flags needinfo?(nojha)
Radoslaw Zarzynski 2023-04-05 19:03:06 UTC CC rzarzyns
Flags needinfo?(nojha) needinfo?(nojha)
Red Hat Bugzilla 2023-07-31 21:50:27 UTC CC vashastr
Red Hat Bugzilla 2023-08-03 08:30:41 UTC CC ocs-bugs
Elad 2023-08-09 16:37:41 UTC CC odf-bz-bot
