Back to bug 2226366

Who When What Removed Added
Eric Harney 2023-07-25 19:57:33 UTC Priority: unspecified → high
Eric Harney 2023-07-25 19:57:42 UTC Assignee: cinder-bugs → eharney
Red Hat One Jira (issues.redhat.com) 2023-07-25 19:57:49 UTC Link ID: Red Hat Issue Tracker OSP-26895
Alex Stupnikov 2023-07-26 07:07:27 UTC CC: astupnik
Gregory Charot 2023-07-26 15:57:44 UTC CC: gcharot
Giulio Fidente 2023-08-03 15:30:57 UTC CC: gfidente
Eric Harney 2023-08-03 18:45:49 UTC Keywords: Regression
melanie witt 2023-08-04 07:01:07 UTC CC: mwitt
Giulio Fidente 2023-08-04 07:34:50 UTC Keywords: Triaged
Giulio Fidente 2023-08-04 07:35:10 UTC Target Release: --- → 17.1
Giulio Fidente 2023-08-04 07:35:42 UTC Status: NEW → ASSIGNED
RHEL Program Management 2023-08-04 07:35:52 UTC Target Release: 17.1 → ---
Giulio Fidente 2023-08-04 07:37:33 UTC Status: ASSIGNED → ON_DEV
Giulio Fidente 2023-08-04 07:39:07 UTC Target Milestone: --- → ga
Brian Rosmaita 2023-08-04 14:27:58 UTC Target Milestone: ga → z1
Target Release: --- → 17.1
CC: brian.rosmaita
RHEL Program Management 2023-08-04 14:28:08 UTC Target Release: 17.1 → ---
Brian Rosmaita 2023-08-04 14:32:14 UTC Blocks: 2229174
Brian Rosmaita 2023-08-04 14:36:56 UTC Blocks: 2229174
Andy Stillman 2023-08-09 13:24:37 UTC Doc Text (added): There is currently a known issue when using a Red Hat Ceph Storage (RHCS) back end for volumes that can prevent instances from being rebooted, and may lead to data corruption. This occurs when all of the following are met conditions:

* RHCS is the back end for instance volumes
* RHCS has multiple storage pools for volumes
* A volume is being retyped where the new type requires the volume to be stored in a different pool than its current location
* The retype call uses the migration_policy 'on-demand'
* The volume is attached to an instance

Workaround: Ensure that the listed conditions are not met.
Doc Type: If docs needed, set a value → Known Issue
aruffin 2023-08-10 13:45:28 UTC CC: aruffin
Ian Frangs 2023-08-11 08:15:17 UTC CC: ifrangs
Doc Text (removed): There is currently a known issue when using a Red Hat Ceph Storage (RHCS) back end for volumes that can prevent instances from being rebooted, and may lead to data corruption. This occurs when all of the following are met conditions:

* RHCS is the back end for instance volumes
* RHCS has multiple storage pools for volumes
* A volume is being retyped where the new type requires the volume to be stored in a different pool than its current location
* The retype call uses the migration_policy 'on-demand'
* The volume is attached to an instance

Workaround: Ensure that the listed conditions are not met.
Doc Text (added): There is currently a known issue when using a Red Hat Ceph Storage (RHCS) back end for volumes that can prevent instances from being rebooted, and may lead to data corruption. This occurs when all of the following conditions are met:

* RHCS is the back end for instance volumes.
* RHCS has multiple storage pools for volumes.
* A volume is being retyped where the new type requires the volume to be stored in a different pool than its current location.
* The retype call uses the migration_policy `on-demand`.
* The volume is attached to an instance.

Workaround: Ensure that the listed conditions are not met.
Ian Frangs 2023-08-11 10:36:20 UTC Doc Text (removed): There is currently a known issue when using a Red Hat Ceph Storage (RHCS) back end for volumes that can prevent instances from being rebooted, and may lead to data corruption. This occurs when all of the following conditions are met:

* RHCS is the back end for instance volumes.
* RHCS has multiple storage pools for volumes.
* A volume is being retyped where the new type requires the volume to be stored in a different pool than its current location.
* The retype call uses the migration_policy `on-demand`.
* The volume is attached to an instance.

Workaround: Ensure that the listed conditions are not met.
Doc Text (added): There is currently a known issue when using a Red Hat Ceph Storage (RHCS) back end for volumes that can prevent instances from being rebooted, and may lead to data corruption. This occurs when all of the following conditions are met:

* RHCS is the back end for instance volumes.
* RHCS has multiple storage pools for volumes.
* A volume is being retyped where the new type requires the volume to be stored in a different pool than its current location.
* The retype call uses the migration_policy `on-demand`.
* The volume is attached to an instance.

Workaround: Do not retype `in-use` volumes that meet all of these listed conditions.
Ian Frangs 2023-08-11 10:41:30 UTC Doc Text (removed): There is currently a known issue when using a Red Hat Ceph Storage (RHCS) back end for volumes that can prevent instances from being rebooted, and may lead to data corruption. This occurs when all of the following conditions are met:

* RHCS is the back end for instance volumes.
* RHCS has multiple storage pools for volumes.
* A volume is being retyped where the new type requires the volume to be stored in a different pool than its current location.
* The retype call uses the migration_policy `on-demand`.
* The volume is attached to an instance.

Workaround: Do not retype `in-use` volumes that meet all of these listed conditions.
Doc Text (added): There is currently a known issue when using a Red Hat Ceph Storage (RHCS) back end for volumes that can prevent instances from being rebooted, and may lead to data corruption. This occurs when all of the following conditions are met:

* RHCS is the back end for instance volumes.
* RHCS has multiple storage pools for volumes.
* A volume is being retyped where the new type requires the volume to be stored in a different pool than its current location.
* The retype call uses the `on-demand` migration_policy.
* The volume is attached to an instance.

Workaround: Do not retype `in-use` volumes that meet all of these listed conditions.
Lukas Svaty 2023-08-11 11:29:21 UTC CC: lsvaty
Mike Burns 2023-08-11 13:59:33 UTC Target Milestone: z1 → z2
Eric Harney 2023-08-14 18:15:57 UTC Target Milestone: z2 → z1
Eric Harney 2023-08-14 18:17:24 UTC Status: ON_DEV → POST
Ian Frangs 2023-08-15 09:31:13 UTC Doc Text (removed): There is currently a known issue when using a Red Hat Ceph Storage (RHCS) back end for volumes that can prevent instances from being rebooted, and may lead to data corruption. This occurs when all of the following conditions are met:

* RHCS is the back end for instance volumes.
* RHCS has multiple storage pools for volumes.
* A volume is being retyped where the new type requires the volume to be stored in a different pool than its current location.
* The retype call uses the `on-demand` migration_policy.
* The volume is attached to an instance.

Workaround: Do not retype `in-use` volumes that meet all of these listed conditions.
Doc Text (added): There is currently a known issue when using a Red Hat Ceph Storage (RHCS) back end for volumes that can prevent instances from being rebooted, and may lead to data corruption. This occurs when all of the following conditions are met:
+
* RHCS is the back end for instance volumes.
* RHCS has multiple storage pools for volumes.
* A volume is being retyped where the new type requires the volume to be stored in a different pool than its current location.
* The retype call uses the `on-demand` migration_policy.
* The volume is attached to an instance.
+
Workaround: Do not retype `in-use` volumes that meet all of these listed conditions.
Eric Harney 2023-08-15 13:25:46 UTC Target Release: --- → 17.1
RHEL Program Management 2023-08-15 13:25:56 UTC Target Release: 17.1 → ---
Ian Frangs 2023-08-16 13:05:42 UTC Doc Text (removed): There is currently a known issue when using a Red Hat Ceph Storage (RHCS) back end for volumes that can prevent instances from being rebooted, and may lead to data corruption. This occurs when all of the following conditions are met:
+
* RHCS is the back end for instance volumes.
* RHCS has multiple storage pools for volumes.
* A volume is being retyped where the new type requires the volume to be stored in a different pool than its current location.
* The retype call uses the `on-demand` migration_policy.
* The volume is attached to an instance.
+
Workaround: Do not retype `in-use` volumes that meet all of these listed conditions.
Doc Text (added): There is currently a known issue when using a Red Hat Ceph Storage (RHCS) back end for volumes that can prevent instances from being rebooted, and may lead to data corruption. This occurs when all of the following conditions are met:
+
* RHCS is the back end for instance volumes.
* RHCS has multiple storage pools for volumes.
* A volume is being retyped where the new type requires the volume to be stored in a different pool than its current location.
* The retype call uses the `on-demand` migration_policy.
* The volume is attached to an instance.

+
Workaround: Do not retype `in-use` volumes that meet all of these listed conditions.
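For context, the retype condition described in the Doc Text corresponds to a call like the one below. This is an illustrative sketch, not a command from the bug report: the volume name `myvol` and volume type `fast-pool-type` are placeholders, and it assumes the python-openstackclient `volume set` interface.

```shell
# Call pattern that matches the known-issue conditions: retyping an
# attached (in-use) volume to a type backed by a different RBD pool,
# with on-demand migration. "myvol" and "fast-pool-type" are
# placeholder names for illustration only.
openstack volume set --type fast-pool-type --retype-policy on-demand myvol
```

Per the workaround, an affected deployment would detach the volume from its instance before issuing a retype like this, so that the volume is no longer in-use when the cross-pool migration runs.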
Paul Grist 2023-08-16 19:09:15 UTC CC: pgrist
Eric Harney 2023-08-16 21:40:08 UTC Status: POST → MODIFIED
Fixed In Version: openstack-cinder-18.2.2-17.1.20230816200905.f6b44fc.el9osttrunk
