In the FFWD (fast-forward upgrade) procedure we do:
1) docker to podman playbook
2) leapp to EL8
3) upgrade OpenStack
4) upgrade Ceph from RHCSv3 to RHCSv4.1
This requires that environments deployed with ceph-disk keep working after leapp, but the podman bits do not seem to support non-LVM-backed OSD volumes.
Bottom line:
- If you deployed RHCSv3 with OSP13 and used ceph-disk [1], then this bug will block your upgrade.
- If you deployed RHCSv3 with OSP13 and used ceph-volume [2], then you can ignore this bug.
[1]
parameter_defaults:
  CephAnsibleDisksConfig:
    osd_scenario: collocated
or
parameter_defaults:
  CephAnsibleDisksConfig:
    osd_scenario: non-collocated
[2]
parameter_defaults:
  CephAnsibleDisksConfig:
    osd_scenario: lvm
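The [1]/[2] distinction above comes down to the value of osd_scenario in CephAnsibleDisksConfig. A minimal sketch of that check (the helper name is hypothetical; the scenario-to-backend mapping is the one stated in this comment):

```python
# Hypothetical helper: decide from osd_scenario whether this bug blocks the upgrade.
# Per this comment: "collocated" / "non-collocated" mean ceph-disk backed OSDs
# (blocked), while "lvm" means ceph-volume backed OSDs (not affected).
def upgrade_blocked(osd_scenario: str) -> bool:
    ceph_disk_scenarios = {"collocated", "non-collocated"}
    if osd_scenario in ceph_disk_scenarios:
        return True   # ceph-disk deployment: hit by this bug
    if osd_scenario == "lvm":
        return False  # ceph-volume deployment: ignore this bug
    raise ValueError(f"unknown osd_scenario: {osd_scenario!r}")

print(upgrade_blocked("collocated"))  # True
print(upgrade_blocked("lvm"))         # False
```

Look for the osd_scenario key in whichever custom environment file your OSP13 deployment passed to the overcloud deploy, and apply the mapping above.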
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (updated rhceph-3.3 container image) and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2020:3506