
Who When What Removed Added
Red Hat One Jira (issues.redhat.com) 2022-07-19 08:18:54 UTC Group redhat
Link ID Red Hat Issue Tracker RHCEPH-4812
Patrick Donnelly 2022-07-20 00:31:45 UTC Assignee vshankar nojha
QA Contact hyelloji pdhiran
CC akupczyk, amathuri, bhubbard, choffman, ksirivad, lflores, nojha, pdhange, pdonnell, rfriedma, rzarzyns, sseshasa, vumrao
Component CephFS RADOS
Geo Jose 2022-07-21 06:17:04 UTC CC gjose
Brad Hubbard 2022-07-25 23:35:09 UTC Flags needinfo?(nojha)
Flags needinfo?(nojha)
Brad Hubbard 2022-07-25 23:52:18 UTC CC adking
Flags needinfo?(adking)
Adam King 2022-07-26 20:34:31 UTC Flags needinfo?(adking)
Brad Hubbard 2022-08-02 22:02:20 UTC Flags needinfo?(bhubbard)
Component RADOS Cephadm
Docs Contact asriram
QA Contact pdhiran mgowri
Assignee nojha adking
Flags needinfo?(bhubbard)
Adam King 2022-08-08 12:24:11 UTC Flags needinfo?(adking)
Flags needinfo?(adking) needinfo?(nravinas)
Adam King 2022-09-26 18:41:03 UTC Flags needinfo?(nravinas)
Flags needinfo?(adking)
Flags needinfo?(adking)
Flags needinfo?(mhackett)
CC mhackett
Status NEW ASSIGNED
Flags needinfo?(adking) needinfo?(adking)
Adam King 2022-09-26 18:42:26 UTC Flags needinfo?(mhackett)
Veera Raghava Reddy 2022-12-18 05:37:00 UTC QA Contact mgowri vdas
Red Hat Bugzilla 2022-12-31 19:13:17 UTC CC amathuri
Red Hat Bugzilla 2022-12-31 19:59:52 UTC CC sseshasa
Red Hat Bugzilla 2022-12-31 22:43:25 UTC CC rfriedma
Red Hat Bugzilla 2022-12-31 23:43:25 UTC CC rzarzyns
Red Hat Bugzilla 2022-12-31 23:45:44 UTC CC akupczyk
Red Hat Bugzilla 2023-01-01 05:35:10 UTC CC ksirivad
Red Hat Bugzilla 2023-01-01 05:37:21 UTC Assignee adking nobody
CC adking
Red Hat Bugzilla 2023-01-01 05:40:03 UTC CC pdonnell
Red Hat Bugzilla 2023-01-01 05:52:39 UTC CC mhackett
Red Hat Bugzilla 2023-01-01 06:26:59 UTC CC lflores
Red Hat Bugzilla 2023-01-01 06:28:51 UTC CC choffman
Red Hat Bugzilla 2023-01-01 08:29:28 UTC Docs Contact asriram
Red Hat Bugzilla 2023-01-01 08:38:46 UTC CC nojha
Red Hat Bugzilla 2023-01-01 08:39:20 UTC CC pdhange
Red Hat Bugzilla 2023-01-01 08:46:12 UTC QA Contact vdas
Red Hat Bugzilla 2023-01-01 08:50:03 UTC CC vumrao
Alasdair Kergon 2023-01-04 04:33:20 UTC Assignee nobody adking
Alasdair Kergon 2023-01-04 04:38:28 UTC CC adking
Alasdair Kergon 2023-01-04 04:40:45 UTC CC akupczyk
Alasdair Kergon 2023-01-04 04:43:34 UTC CC amathuri
Alasdair Kergon 2023-01-04 04:52:03 UTC QA Contact vdas
Alasdair Kergon 2023-01-04 05:08:58 UTC CC ksirivad
Alasdair Kergon 2023-01-04 05:10:58 UTC CC lflores
Alasdair Kergon 2023-01-04 05:21:38 UTC CC nojha
Alasdair Kergon 2023-01-04 05:28:18 UTC CC pdhange
Alasdair Kergon 2023-01-04 05:31:22 UTC CC pdonnell
Alasdair Kergon 2023-01-04 05:34:52 UTC CC rfriedma
Alasdair Kergon 2023-01-04 05:37:37 UTC CC rzarzyns
Alasdair Kergon 2023-01-04 05:59:30 UTC CC vumrao
Alasdair Kergon 2023-01-04 06:13:47 UTC CC choffman
Alasdair Kergon 2023-01-04 06:56:31 UTC CC sseshasa
Alasdair Kergon 2023-01-04 11:29:24 UTC CC mhackett
Red Hat Bugzilla 2023-01-09 08:31:04 UTC CC ceph-eng-bugs
Alasdair Kergon 2023-01-09 19:43:36 UTC CC ceph-eng-bugs
Adam King 2023-03-22 12:42:42 UTC Flags needinfo?(adking)
Flags needinfo?(adking)
Adam King 2023-03-31 20:00:44 UTC Depends On 2180567
Ken Dreyer (Red Hat) 2023-04-12 13:55:13 UTC Fixed In Version ceph-17.2.6-5.el9cp
Status ASSIGNED MODIFIED
errata-xmlrpc 2023-04-12 13:58:54 UTC Status MODIFIED ON_QA
Manisha Saini 2023-04-12 19:32:56 UTC CC msaini
QA Contact vdas vpapnoi
Vinayak Papnoi 2023-04-25 08:33:12 UTC Status ON_QA VERIFIED
Akash Raj 2023-05-03 06:19:09 UTC Flags needinfo?(adking)
Docs Contact akraj
CC akraj
Adam King 2023-05-03 14:49:29 UTC Flags needinfo?(adking)
Doc Type If docs needed, set a value Bug Fix
Doc Text Cause: A check for whether the current active mgr was running the upgraded version of cephadm was not working, and mgrs that had already been upgraded sometimes reported that they were on an older version.

Consequence: The upgrade could get into a state where all the mgr daemons were upgraded, but each one still believed it needed to be redeployed by an mgr running the upgraded version of cephadm, causing the upgrade to repeatedly fail over between the mgr daemons.

Fix: The check has been corrected, so mgr daemons now know they are on the correct version and can redeploy other daemons using the upgraded version of cephadm.

Result: Upgrades should no longer get stuck failing over between mgr daemons.
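The sketch below is a minimal, hypothetical model of the check described in the Doc Text above; it is not the actual cephadm code. The Daemon class and the active_mgr_is_upgraded and next_upgrade_step functions are invented for illustration: they show how a version comparison on the active mgr gates whether the orchestrator fails over to a standby or proceeds to redeploy the remaining daemons, which is the decision the broken check got wrong.

[source,python]
----
# Illustrative sketch only (not the actual cephadm source). It models the
# version check described above: the active mgr must recognize that it already
# runs the upgrade target before it will redeploy the other daemons.
# All names here (Daemon, active_mgr_is_upgraded, next_upgrade_step) are
# hypothetical.
from dataclasses import dataclass


@dataclass
class Daemon:
    name: str      # e.g. "mgr.host1.abcdef"
    version: str   # version the running daemon reports


def active_mgr_is_upgraded(active_mgr: Daemon, target_version: str) -> bool:
    # The bug described above behaved as if this returned False even for mgrs
    # that had already been upgraded, so the orchestrator kept failing over
    # instead of making progress.
    return active_mgr.version == target_version


def next_upgrade_step(daemons: list[Daemon], active_mgr: Daemon, target: str) -> str:
    if not active_mgr_is_upgraded(active_mgr, target):
        # An upgraded standby has to take over first; with the broken check
        # this branch was taken repeatedly, producing the failover loop.
        return "fail over to an upgraded standby mgr"
    pending = [d for d in daemons if d.version != target]
    return f"redeploy {len(pending)} remaining daemon(s) with the new version"


if __name__ == "__main__":
    target = "17.2.6"
    mgrs = [Daemon("mgr.a", target), Daemon("mgr.b", target)]
    print(next_upgrade_step(mgrs, active_mgr=mgrs[0], target=target))
----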
Akash Raj 2023-05-08 02:34:57 UTC Blocks 2192813
Akash Raj 2023-05-09 02:59:47 UTC Flags needinfo?(adking)
Doc Text
.The manager daemons correctly identify their version and no longer fail over
Previously, a check for whether the current active manager was running the upgraded version of `cephadm` did not work, and managers that had already been upgraded sometimes reported that they were on an older version. Due to this, the upgrade could get into a state where all the mgr daemons were upgraded but were still in line to be redeployed by a manager running the upgraded version of `cephadm`, causing the upgrade to repeatedly fail over between the manager daemons.

With this fix, the check is corrected; the manager daemons are now aware of their correct version and can redeploy other daemons using the upgraded version of `cephadm`.
Adam King 2023-05-11 13:46:53 UTC Doc Text
.The manager daemons correctly identify they have been upgraded and no longer fail over
Previously, a check for whether the current active manager was running the upgraded version of `cephadm` did not work, and managers that had already been upgraded sometimes reported that they were on an older version. Due to this, the upgrade could get into a state where all the mgr daemons were upgraded but were still in line to be redeployed by a manager running the upgraded version of `cephadm`, causing the upgrade to repeatedly fail over between the manager daemons.

With this fix, the check is corrected; the manager daemons are now aware of their correct version and can redeploy other daemons using the upgraded version of `cephadm`.
Flags needinfo?(adking)
Akash Raj 2023-05-19 12:06:38 UTC Doc Text
.The manager daemons correctly identify they have been upgraded and no longer fail over

Previously, a check for whether the current active manager was running the upgraded version of `cephadm` did not work, and managers that had already been upgraded sometimes reported that they were on an older version. Due to this, the upgrade could get into a state where all the mgr daemons were upgraded but were still in line to be redeployed by a manager running the upgraded version of `cephadm`, causing the upgrade to repeatedly fail over between the manager daemons.

With this fix, the check is corrected; the manager daemons are now aware of their correct version and can redeploy other daemons using the upgraded version of `cephadm`.
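As a follow-up to the documented result, the sketch below shows one way an operator might verify that every mgr daemon reports the upgrade target version once the fix is in place. It shells out to the standard `ceph versions` command and parses its JSON output; the TARGET value and the mgr_version_counts helper are placeholders for illustration, and `ceph orch upgrade status` remains the usual way to watch overall upgrade progress.

[source,python]
----
# Hedged operator-side sketch: confirm that every mgr daemon reports the
# upgrade target version by parsing the JSON that `ceph versions` prints.
# Assumes the `ceph` CLI and an admin keyring are available on this host; the
# TARGET string is a placeholder for the release actually being deployed.
import json
import subprocess

TARGET = "17.2.6"  # placeholder target release


def mgr_version_counts() -> dict:
    out = subprocess.run(
        ["ceph", "versions"],
        check=True, capture_output=True, text=True,
    ).stdout
    # `ceph versions` prints a JSON object keyed by daemon type; each value
    # maps a version banner string to the number of daemons running it.
    return json.loads(out).get("mgr", {})


if __name__ == "__main__":
    for banner, count in mgr_version_counts().items():
        state = "ok" if TARGET in banner else "NOT on target"
        print(f"{count} mgr daemon(s): {banner} [{state}]")
----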
Ranjini M N 2023-06-13 09:36:05 UTC CC rmandyam
errata-xmlrpc 2023-06-15 09:07:49 UTC Group redhat
CC tserlin
Status VERIFIED RELEASE_PENDING
errata-xmlrpc 2023-06-15 09:15:33 UTC Resolution --- ERRATA
Status RELEASE_PENDING CLOSED
Last Closed 2023-06-15 09:15:33 UTC
errata-xmlrpc 2023-06-15 09:16:57 UTC Link ID Red Hat Product Errata RHSA-2023:3623
