Back to bug 1656468

Who When What Removed Added
Drew Harris 2019-02-26 16:20:59 UTC Priority unspecified high
CC anharris
Severity unspecified high
Sébastien Han 2019-02-26 16:41:17 UTC CC shan
Guillaume Abrioux 2019-03-26 15:36:14 UTC CC gabrioux
Flags needinfo?(aschoen)
Andrew Schoen 2019-03-26 15:41:28 UTC Flags needinfo?(aschoen)
Guillaume Abrioux 2019-03-26 15:50:35 UTC Status NEW ASSIGNED
Assignee shan aschoen
Andrew Schoen 2019-04-01 20:03:54 UTC Link ID Github ceph/ceph-ansible/pull/3727
Andrew Schoen 2019-04-03 18:54:24 UTC Doc Text Feature:
During the upgrade to 4.x, all running OSDs previously created by ceph-disk will be migrated to ceph-volume.

A few things to note about this process:
- the user will need to change the value of osd_scenario to 'lvm'
- no new OSDs can be created during the upgrade to 4.x
- after the upgrade, any devices previously used by ceph-disk need to be removed from the 'devices' config option of ceph-ansible. These devices cannot be present in the config for subsequent deployments.

Reason:

ceph-disk is no longer available in 4.x, and currently running OSDs that were created by ceph-disk need to give ceph-volume control of their systemd units.

Result:

After an upgrade to 4.x, OSDs that were previously created by ceph-disk will still start up and operate like any ceph-volume-created OSD.
Doc Type If docs needed, set a value Enhancement
Christina Meno 2019-05-30 13:17:19 UTC Status ASSIGNED POST
Giridhar Ramaraju 2019-08-05 13:06:48 UTC Status POST MODIFIED
CC tserlin
Fixed In Version ceph-ansible-4.0.0-0.1.rc7.el8cp
CC ceph-qe-bugs
Flags needinfo?(ceph-qe-bugs)
QA Contact ceph-qe-bugs hgurav
Hemant G 2019-08-06 08:36:45 UTC QA Contact hgurav vashastr
Vasishta 2019-08-23 12:27:10 UTC Flags needinfo?(ceph-qe-bugs)
errata-xmlrpc 2019-08-23 14:25:42 UTC Status MODIFIED ON_QA
PnT Account Manager 2019-08-31 17:45:32 UTC CC sankarshan
Anjana Suparna Sriram 2019-11-08 15:00:31 UTC Blocks 1770263
Bara Ancincova 2019-12-10 09:27:48 UTC Blocks 1730176
Docs Contact bancinco
Bara Ancincova 2019-12-10 13:29:46 UTC Doc Text Feature:
During the upgrade to 4.x, all running OSDs previously created by ceph-disk will be migrated to ceph-volume.

A few things to note about this process:
- the user will need to change the value of osd_scenario to 'lvm'
- no new OSDs can be created during the upgrade to 4.x
- after the upgrade, any devices previously used by ceph-disk need to be removed from the 'devices' config option of ceph-ansible. These devices cannot be present in the config for subsequent deployments.

Reason:

ceph-disk is no longer available in 4.x, and currently running OSDs that were created by ceph-disk need to give ceph-volume control of their systemd units.

Result:

After an upgrade to 4.x, OSDs that were previously created by ceph-disk will still start up and operate like any ceph-volume-created OSD.
.OSDs created with `ceph-disk` are migrated to `ceph-volume` during upgrade

When upgrading to {product} 4, all running OSDs previously created by the `ceph-disk` utility will be migrated to the `ceph-volume` utility because `ceph-disk` has been deprecated in this release.

Before starting this process, change the value of the `osd_scenario` option to `lvm` and after the upgrade, remove any devices previously used by `ceph-disk` from the list of devices specified by the `devices` option. Do not also use these devices in configuration for subsequent deployments. Note also, that you cannot create any new OSDs during the upgrade process.

After the upgrade, all OSDs created by `ceph-disk` will start and operate like any OSDs created by `ceph-volume`.
Flags needinfo?(aschoen)
Ameena Suhani S H 2020-01-12 18:12:57 UTC Depends On 1790212
Ameena Suhani S H 2020-01-12 18:13:35 UTC CC amsyedha
QA Contact vashastr amsyedha
Ameena Suhani S H 2020-01-17 03:46:00 UTC Status ON_QA VERIFIED
Assignee aschoen amsyedha
Ameena Suhani S H 2020-01-17 03:48:41 UTC CC vashastr
Bara Ancincova 2020-01-29 08:57:51 UTC Doc Text .OSDs created with `ceph-disk` are migrated to `ceph-volume` during upgrade

When upgrading to {product} 4, all running OSDs previously created by the `ceph-disk` utility will be migrated to the `ceph-volume` utility because `ceph-disk` has been deprecated in this release.

Before starting this process, change the value of the `osd_scenario` option to `lvm` and after the upgrade, remove any devices previously used by `ceph-disk` from the list of devices specified by the `devices` option. Do not also use these devices in configuration for subsequent deployments. Note also, that you cannot create any new OSDs during the upgrade process.

After the upgrade, all OSDs created by `ceph-disk` will start and operate like any OSDs created by `ceph-volume`.
.OSDs created with `ceph-disk` are migrated to `ceph-volume` during upgrade

When upgrading to {product} 4, all running OSDs previously created by the `ceph-disk` utility will be migrated to the `ceph-volume` utility because `ceph-disk` has been deprecated in this release.

Before starting this process, change the value of the `osd_scenario` option to `lvm` and after the upgrade, remove any devices previously used by `ceph-disk` from the list of devices specified by the `devices` option. Also, do not use these devices in configuration for subsequent deployments. Note that you cannot create any new OSDs during the upgrade process.

After the upgrade, all OSDs created by `ceph-disk` will start and operate like any OSDs created by `ceph-volume`.
Andrew Schoen 2020-01-29 15:15:27 UTC Flags needinfo?(aschoen)
Erin Donnelly 2020-01-30 14:20:15 UTC CC edonnell
Doc Text .OSDs created with `ceph-disk` are migrated to `ceph-volume` during upgrade

When upgrading to {product} 4, all running OSDs previously created by the `ceph-disk` utility will be migrated to the `ceph-volume` utility because `ceph-disk` has been deprecated in this release.

Before starting this process, change the value of the `osd_scenario` option to `lvm` and after the upgrade, remove any devices previously used by `ceph-disk` from the list of devices specified by the `devices` option. Also, do not use these devices in configuration for subsequent deployments. Note that you cannot create any new OSDs during the upgrade process.

After the upgrade, all OSDs created by `ceph-disk` will start and operate like any OSDs created by `ceph-volume`.
.OSDs created with `ceph-disk` are migrated to `ceph-volume` during upgrade

When upgrading to {product} 4, all running OSDs previously created by the `ceph-disk` utility will be migrated to the `ceph-volume` utility because `ceph-disk` has been deprecated in this release.

Before starting this process, change the value of the `osd_scenario` option to `lvm`, and after the upgrade, remove any devices previously used by `ceph-disk` from the list of devices specified by the `devices` option. Also, do not use these devices in configuration for subsequent deployments. Note that you cannot create any new OSDs during the upgrade process.

After the upgrade, all OSDs created by `ceph-disk` will start and operate like any OSDs created by `ceph-volume`.
errata-xmlrpc 2020-01-31 11:27:09 UTC Status VERIFIED RELEASE_PENDING
errata-xmlrpc 2020-01-31 12:45:18 UTC Status RELEASE_PENDING CLOSED
Resolution --- ERRATA
Last Closed 2020-01-31 12:45:18 UTC
errata-xmlrpc 2020-01-31 12:45:45 UTC Link ID Red Hat Product Errata RHBA-2020:0312
Aron Gunn 2020-04-08 16:53:36 UTC CC agunn
Doc Text .OSDs created with `ceph-disk` are migrated to `ceph-volume` during upgrade

When upgrading to {product} 4, all running OSDs previously created by the `ceph-disk` utility will be migrated to the `ceph-volume` utility because `ceph-disk` has been deprecated in this release.

Before starting this process, change the value of the `osd_scenario` option to `lvm`, and after the upgrade, remove any devices previously used by `ceph-disk` from the list of devices specified by the `devices` option. Also, do not use these devices in configuration for subsequent deployments. Note that you cannot create any new OSDs during the upgrade process.

After the upgrade, all OSDs created by `ceph-disk` will start and operate like any OSDs created by `ceph-volume`.
.OSDs created with `ceph-disk` are migrated to `ceph-volume` during upgrade

When upgrading to {product} 4, all running Ceph OSDs previously created by the `ceph-disk` utility will be migrated to the `ceph-volume` utility because `ceph-disk` has been deprecated in this release.

For bare-metal and container deployments of {product} 4, the `ceph-volume` utility does a simple scan and takes over the existing Ceph OSDs deployed by the `ceph-disk` utility. Also, do not use these migrated devices in configurations for subsequent deployments. Note that you cannot create any new Ceph OSDs during the upgrade process.

After the upgrade, all Ceph OSDs created by `ceph-disk` will start and operate like any Ceph OSDs created by `ceph-volume`.
Aron Gunn 2020-04-08 17:00:00 UTC Doc Text .OSDs created with `ceph-disk` are migrated to `ceph-volume` during upgrade

When upgrading to {product} 4, all running Ceph OSDs previously created by the `ceph-disk` utility will be migrated to the `ceph-volume` utility because `ceph-disk` has been deprecated in this release.

For bare-metal and container deployments of {product} 4, the `ceph-volume` utility does a simple scan and takes over the existing Ceph OSDs deployed by the `ceph-disk` utility. Also, do not use these migrated devices in configurations for subsequent deployments. Note that you cannot create any new Ceph OSDs during the upgrade process.

After the upgrade, all Ceph OSDs created by `ceph-disk` will start and operate like any Ceph OSDs created by `ceph-volume`.
.Ceph OSDs created with `ceph-disk` are migrated to `ceph-volume` during upgrade

When upgrading to {product} 4, all running Ceph OSDs previously created by the `ceph-disk` utility will be migrated to the `ceph-volume` utility because `ceph-disk` has been deprecated in this release.

For bare-metal and container deployments of {product} 4, the `ceph-volume` utility does a simple scan and takes over the existing Ceph OSDs deployed by the `ceph-disk` utility. Also, do not use these migrated devices in configurations for subsequent deployments. Note that you cannot create any new Ceph OSDs during the upgrade process.

After the upgrade, all Ceph OSDs created by `ceph-disk` will start and operate like any Ceph OSDs created by `ceph-volume`.
Aron Gunn 2020-04-08 17:00:40 UTC Doc Text .Ceph OSDs created with `ceph-disk` are migrated to `ceph-volume` during upgrade

When upgrading to {product} 4, all running Ceph OSDs previously created by the `ceph-disk` utility will be migrated to the `ceph-volume` utility because `ceph-disk` has been deprecated in this release.

For bare-metal and container deployments of {product} 4, the `ceph-volume` utility does a simple scan and takes over the existing Ceph OSDs deployed by the `ceph-disk` utility. Also, do not use these migrated devices in configurations for subsequent deployments. Note that you cannot create any new Ceph OSDs during the upgrade process.

After the upgrade, all Ceph OSDs created by `ceph-disk` will start and operate like any Ceph OSDs created by `ceph-volume`.
.Ceph OSDs created with `ceph-disk` are migrated to `ceph-volume` during upgrade

When upgrading to {product} 4, all running Ceph OSDs previously created by the `ceph-disk` utility will be migrated to the `ceph-volume` utility because `ceph-disk` has been deprecated in this release.

For bare-metal and container deployments of {product}, the `ceph-volume` utility does a simple scan and takes over the existing Ceph OSDs deployed by the `ceph-disk` utility. Also, do not use these migrated devices in configurations for subsequent deployments. Note that you cannot create any new Ceph OSDs during the upgrade process.

After the upgrade, all Ceph OSDs created by `ceph-disk` will start and operate like any Ceph OSDs created by `ceph-volume`.
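
The final doc text above describes the takeover at a high level. As a rough sketch only (not taken from this bug, and the upgrade playbook automates these steps, so the exact sequence and paths are illustrative), the underlying `ceph-volume` migration of a `ceph-disk` OSD looks approximately like this:

```shell
# Sketch of the manual equivalent of the ceph-ansible upgrade behavior.
# Paths and the OSD id below are illustrative, not from this bug.

# 1. Before the upgrade, in the ceph-ansible group_vars for the OSD hosts,
#    switch the deployment scenario:
#        osd_scenario: lvm

# 2. Scan a running ceph-disk OSD's data directory; this records the OSD's
#    metadata as JSON under /etc/ceph/osd/ for ceph-volume to use:
ceph-volume simple scan /var/lib/ceph/osd/ceph-0

# 3. Hand the systemd units over to ceph-volume so the OSDs start under its
#    control on the next boot, and restart them now:
ceph-volume simple activate --all

# 4. After the upgrade, remove the devices these OSDs used from the
#    'devices' list in the ceph-ansible configuration, so subsequent
#    playbook runs do not try to redeploy them.
```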
