Description
Francesco Pantano 2020-07-30 09:40:16 UTC
Created attachment 1702920 [details]
ceph-0 docker2podman execution
Description of problem:
During an FFU (13 -> 16.1) of a director-deployed Ceph with bluestore, configured as:
```
CephAnsibleDisksConfig:
  devices:
    - '/dev/vdb'
    - '/dev/vdc'
    - '/dev/vdd'
    - '/dev/vde'
    - '/dev/vdf'
  osd_scenario: lvm
  osd_objectstore: bluestore
```
and using ceph-ansible-3.2.46-1.el7cp.noarch, when the Ceph nodes upgrade step is reached and the following command is executed:
```
openstack overcloud external-upgrade run --stack qe-Cloud-0 --tags ceph_systemd -e ceph_ansible_limit=ceph-0
```
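For context, the ceph_systemd tag is expected to hand off to ceph-ansible's docker-to-podman playbook, which regenerates the systemd units for the Ceph containers (the attached log shows this execution). Assuming the default ceph-ansible install location, a roughly equivalent direct invocation would be something like the sketch below; the inventory path is illustrative, not the one director generates:
```
# Illustrative only: run the docker-to-podman migration playbook
# directly against the same node; replace the inventory path with
# the one generated for your deployment.
ansible-playbook \
  -i /path/to/inventory \
  /usr/share/ceph-ansible/infrastructure-playbooks/docker-to-podman.yml \
  --limit ceph-0
```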
The run fails because the disk_list variable is undefined:
```
fatal: [ceph-0]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'disk_list' is undefined"}
```
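The error suggests that the templated unit files (or a task feeding them) reference disk_list unconditionally, while that fact is presumably only gathered for the non-lvm (ceph-disk) scenarios, so a pure osd_scenario: lvm deployment hits the undefined variable. A minimal defensive sketch, assuming that reading of the failure (the task below is hypothetical, not the actual ceph-ansible fix):
```
# Hypothetical guard: give disk_list an empty default before the
# systemd units are templated, so the lvm scenario (which never
# gathers it) does not trip AnsibleUndefinedVariable.
- name: default disk_list for the lvm scenario
  set_fact:
    disk_list:
      stdout: ''
  when:
    - osd_scenario == 'lvm'
    - disk_list is not defined
```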
Here are the Ceph versions:
```
[root@controller-0 ~]# podman exec -it ceph-mon-controller-0 ceph versions
{
    "mon": {
        "ceph version 12.2.12-84.el7cp (1ce826ed564c8063ac6c876df66bd8ab31b6cc66) luminous (stable)": 3
    },
    "mgr": {
        "ceph version 12.2.12-84.el7cp (1ce826ed564c8063ac6c876df66bd8ab31b6cc66) luminous (stable)": 3
    },
    "osd": {
        "ceph version 12.2.12-101.el7cp (20a4945f2321019ed50c1844b413059c07304074) luminous (stable)": 15
    },
    "mds": {},
    "overall": {
        "ceph version 12.2.12-101.el7cp (20a4945f2321019ed50c1844b413059c07304074) luminous (stable)": 15,
        "ceph version 12.2.12-84.el7cp (1ce826ed564c8063ac6c876df66bd8ab31b6cc66) luminous (stable)": 6
    }
}
```
and the container image used is:
```
undercloud-0.ctlplane.redhat.local:8787/rhceph/rhceph-3-rhel7:3-40
```
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 3.3 security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3504