Bug 1862046 - FFU fails running docker2podman playbook when bluestore/lvm is used
Summary: FFU fails running docker2podman playbook when bluestore/lvm is used
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 3.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z6
Target Release: 3.3
Assignee: Dimitri Savineau
QA Contact: Ameena Suhani S H
URL:
Whiteboard:
Depends On:
Blocks: 1578730
 
Reported: 2020-07-30 09:40 UTC by Francesco Pantano
Modified: 2020-08-18 18:06 UTC (History)
13 users

Fixed In Version: RHEL: ceph-ansible-3.2.47-1.el7cp Ubuntu: ceph-ansible_3.2.47-2redhat1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-08-18 18:05:58 UTC
Embargoed:


Attachments (Terms of Use)
ceph-0 docker2podman execution (58.40 KB, text/plain)
2020-07-30 09:40 UTC, Francesco Pantano


Links
System ID Private Priority Status Summary Last Updated
Github ceph ceph-ansible pull 5608 0 None closed docker2podman: set disk_list for non lvm scenario 2021-01-20 09:47:22 UTC
Red Hat Product Errata RHSA-2020:3504 0 None None None 2020-08-18 18:06:10 UTC

Description Francesco Pantano 2020-07-30 09:40:16 UTC
Created attachment 1702920 [details]
ceph-0 docker2podman execution

Description of problem:

During FFU 13 -> 16.1 with a director-deployed Ceph using bluestore:

  CephAnsibleDisksConfig:
    devices:
      - '/dev/vdb'
      - '/dev/vdc'
      - '/dev/vdd'
      - '/dev/vde'
      - '/dev/vdf'
    osd_scenario: lvm
    osd_objectstore: bluestore

using ceph-ansible "ceph-ansible-3.2.46-1.el7cp.noarch", when the Ceph nodes upgrade step is reached and the command:

```
openstack overcloud external-upgrade run --stack qe-Cloud-0 --tags ceph_systemd -e ceph_ansible_limit=ceph-0
```

is executed, it fails because the disk_list variable is undefined:

```
 fatal: [ceph-0]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'disk_list' is undefined"}

```
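
The error points at a disk_list variable that the docker2podman playbook expects but never defines when osd_scenario: lvm is used; the linked ceph-ansible pull request ("docker2podman: set disk_list for non lvm scenario") addresses this by only building disk_list for the non-lvm (ceph-disk) scenarios. As a rough illustration of that guarding pattern (a minimal sketch, not the actual ceph-ansible change; the task name and the empty fall-back value are assumptions), the playbook can be made safe for lvm-based OSDs like this:

```
# Sketch only -- not the code merged upstream. Assumes the OSD systemd unit
# template only needs a real disk_list for ceph-disk based scenarios; the
# 'stdout' key mirrors the shape of a registered command result and is assumed.
- name: provide an empty disk_list so templates render on lvm deployments
  set_fact:
    disk_list:
      stdout: ''
  when: osd_scenario == 'lvm'
```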

Here are the Ceph versions:

```
[root@controller-0 ~]# podman exec -it ceph-mon-controller-0 ceph versions
{
    "mon": {
        "ceph version 12.2.12-84.el7cp (1ce826ed564c8063ac6c876df66bd8ab31b6cc66) luminous (stable)": 3
    },
    "mgr": {
        "ceph version 12.2.12-84.el7cp (1ce826ed564c8063ac6c876df66bd8ab31b6cc66) luminous (stable)": 3 
    },
    "osd": { 
        "ceph version 12.2.12-101.el7cp (20a4945f2321019ed50c1844b413059c07304074) luminous (stable)": 15
    },
    "mds": {},
    "overall": {
        "ceph version 12.2.12-101.el7cp (20a4945f2321019ed50c1844b413059c07304074) luminous (stable)": 15,
        "ceph version 12.2.12-84.el7cp (1ce826ed564c8063ac6c876df66bd8ab31b6cc66) luminous (stable)": 6 
    }
}
```

and the container version used is:

```
undercloud-0.ctlplane.redhat.local:8787/rhceph/rhceph-3-rhel7:3-40
```


Version-Release number of selected component (if applicable):
ceph-ansible-3.2.46-1.el7cp.noarch

How reproducible:


Steps to Reproduce:
1. Deploy Ceph with director using osd_scenario: lvm and osd_objectstore: bluestore.
2. Run the FFU from 13 to 16.1 until the Ceph nodes upgrade step is reached.
3. Run: openstack overcloud external-upgrade run --stack qe-Cloud-0 --tags ceph_systemd -e ceph_ansible_limit=ceph-0

Actual results:
The docker2podman playbook fails on the OSD node with:
fatal: [ceph-0]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'disk_list' is undefined"}

Expected results:
The docker2podman playbook completes and the OSD systemd units are switched from docker to podman.

Additional info:

Comment 12 errata-xmlrpc 2020-08-18 18:05:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 3.3 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3504

