Bug 2222589

Who When What Removed Added
Red Hat One Jira (issues.redhat.com) 2023-07-13 08:10:38 UTC Link ID Red Hat Issue Tracker OSP-26578
Juan Badia Payno 2023-07-17 07:54:11 UTC Link ID OpenStack gerrit 887565
Manoj Katari 2023-07-17 08:01:15 UTC CC mkatari
Assignee rhos-maint mkatari
John Fulton 2023-07-17 11:50:31 UTC CC johfulto
Manoj Katari 2023-07-17 11:56:50 UTC Blocks 2223332
Manoj Katari 2023-07-17 12:33:48 UTC Priority unspecified high
Keywords Triaged
Target Release --- 17.1
Severity unspecified high
Component openstack-tripleo-heat-templates tripleo-ansible
RHEL Program Management 2023-07-17 12:34:00 UTC Target Release 17.1 ---
Jesse Pretorius 2023-07-18 09:05:51 UTC CC jpretori
Jesse Pretorius 2023-07-18 09:09:41 UTC Doc Type If docs needed, set a value Known Issue
Giulio Fidente 2023-07-18 09:33:09 UTC Blocks 2223332
Depends On 2223332
Manoj Katari 2023-07-19 09:20:21 UTC Target Release --- 17.1
Target Milestone --- z1
Status NEW ON_DEV
RHEL Program Management 2023-07-19 09:20:28 UTC Target Release 17.1 --- --- 17.1
Francesco Pantano 2023-07-20 08:50:11 UTC CC fpantano
Manoj Katari 2023-07-20 10:26:26 UTC Status ON_DEV POST
Giulio Fidente 2023-07-20 11:06:00 UTC CC gfidente
Khomesh Thakre 2023-07-20 13:56:11 UTC CC kthakre
Jesse Pretorius 2023-07-25 12:15:20 UTC QA Contact jhakimra kthakre
Ollie Walsh 2023-07-26 09:43:11 UTC CC owalsh
Erin Peterson 2023-07-31 16:52:42 UTC CC erpeters
Doc Text There is currently a known issue where the director upgrade script stops executing when upgrading Red Hat Ceph Storage 4 to 5 during the upgrade of Red Hat OpenStack Platform 16.2 to 17.1 in a director-deployed Ceph Storage environment that uses IPv6.

Workaround:

Perform the following steps to work around this issue.

. Log in to the undercloud as the `stack` user with SSH.

. Determine the Ceph Storage orchestrator status by viewing the log at `/home/stack/overcloud-deploy/<stack_name>/config-download/<stack_name>/cephadm/cephadm_command.log`.
+
Replace `<stack_name>` with the name of the overcloud stack.

. Review the log for a task called `Get the ceph orchestrator status`, which appears similar to the following example:
+
`2023-07-12 23:19:26,936 p=463425 u=stack n=ansible | 2023-07-12 23:19:26.935348 | 525400d7-420c-9c3f-b529-0000000001ab | TASK | Get the ceph orchestrator status`

. Monitor the log. If this task does not proceed for several minutes, and no additional tasks appear in the log after it, continue with the remainder of this procedure.

. Log in to a controller node with SSH.

. Restart the Ceph Storage orchestrator.
+
`sudo cephadm shell -- ceph mgr fail <controller_node_name>`
+
Replace `<controller_node_name>` with the name of the controller node.

. Return to monitoring the command logs on the undercloud. You should observe the upgrade process continuing. A combined shell sketch of the monitoring and failover steps follows this procedure.
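
For convenience, the monitoring and failover steps above can be combined into a small shell sketch. This is a minimal, hypothetical example rather than part of the documented workaround: the stack name `overcloud`, the controller name `controller-0`, the SSH user `tripleo-admin`, and the 10-minute stall threshold are assumptions to adapt to your environment.

[source,bash]
----
#!/bin/bash
# Hypothetical helper, not part of the documented workaround: watch the
# cephadm command log for the "Get the ceph orchestrator status" task and,
# if the log stops growing afterwards, fail over the active Ceph Manager
# so the orchestrator restarts. Run on the undercloud as the stack user.

STACK_NAME="overcloud"        # assumption: adjust to your overcloud stack name
CONTROLLER="controller-0"     # assumption: adjust to your controller node name
SSH_USER="tripleo-admin"      # assumption: adjust to your overcloud SSH user
STALL_SECONDS=600             # assumption: 10 minutes with no log activity counts as stalled
LOG="/home/stack/overcloud-deploy/${STACK_NAME}/config-download/${STACK_NAME}/cephadm/cephadm_command.log"

# Wait until the orchestrator status task appears in the log.
until grep -q "Get the ceph orchestrator status" "${LOG}"; do
    sleep 30
done

# If the log's modification time does not change for STALL_SECONDS,
# assume the task is stuck and fail over the active Manager.
while true; do
    before=$(stat -c %Y "${LOG}")
    sleep "${STALL_SECONDS}"
    after=$(stat -c %Y "${LOG}")
    if [ "${before}" -eq "${after}" ]; then
        ssh "${SSH_USER}@${CONTROLLER}" \
            "sudo cephadm shell -- ceph mgr fail ${CONTROLLER}"
        break
    fi
done

# Keep watching the log; the upgrade tasks should resume.
tail -f "${LOG}"
----

The `ceph mgr fail` command marks the named Manager daemon as failed so that a standby takes over; because the cephadm orchestrator runs as a Manager module, the failover effectively restarts it and lets the stalled status query return.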
Denise Hughes 2023-08-03 14:08:08 UTC CC dhughes
Ian Frangs 2023-08-03 15:46:23 UTC Flags needinfo?(mkatari)
Manoj Katari 2023-08-07 05:35:09 UTC Flags needinfo?(mkatari)
Jenny-Anne Lynch 2023-08-08 09:06:51 UTC CC jelynch
Doc Text In the known-issue description, changed "the upgrade of Red Hat OpenStack Platform 16.2 to 17.1" to "the upgrade of RHOSP 16.2 to 17.1"; the rest of the Doc Text, including the workaround procedure, is unchanged.
Jenny-Anne Lynch 2023-08-08 10:01:50 UTC Doc Text Replaced the full workaround procedure with the following summary: There is currently a known issue with the upgrade from RHOSP 16.2 to 17.1, where the director upgrade script stops executing when upgrading Red Hat Ceph Storage 4 to 5 in a director-deployed Ceph Storage environment that uses IPv6. Workaround: Apply the workaround from Red Hat Knowledge-Centered Service (KCS) solution 7027594 - link:https://access.redhat.com/solutions/7027594[Director upgrade script stops during RHOSP upgrade when upgrading RHCS in director-deployed environment that uses IPv6]
Jenny-Anne Lynch 2023-08-08 10:13:39 UTC Summary Upgrade [OSP16.2 -> OSP17.1] cephadm got stuck at "ceph orch status" after ceph adoption in the "openstack overcloud upgrade" Upgrade [OSP16.2 -> OSP17.1] After ceph adoption, cephadm stops at 'ceph orch status'
Mike Burns 2023-08-11 13:59:33 UTC Target Milestone z1 z2
Mike Burns 2023-08-11 14:56:05 UTC Target Milestone z2 z1
Jenny-Anne Lynch 2023-08-16 12:09:35 UTC Doc Text Changed "Red Hat Knowledge-Centered Service (KCS) solution 7027594 -" to "KCS solution 7027594:"; the text is otherwise unchanged.
Jenny-Anne Lynch 2023-08-16 12:12:57 UTC Doc Text Changed "KCS solution 7027594:" to "Red Hat KCS solution 7027594:"; the text is otherwise unchanged.
