Bug 1862046

Summary: FFU fails running docker2podman playbook when bluestore/lvm is used
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: Ceph-Ansible
Version: 3.3
Status: CLOSED ERRATA
Severity: high
Priority: high
Reporter: Francesco Pantano <fpantano>
Assignee: Dimitri Savineau <dsavinea>
QA Contact: Ameena Suhani S H <amsyedha>
CC: anharris, aschoen, ceph-eng-bugs, ceph-qe-bugs, dsavinea, gfidente, gmeno, jfrancoa, johfulto, nthomas, tserlin, vashastr, ykaul
Target Milestone: z6
Target Release: 3.3
Hardware: Unspecified
OS: Unspecified
Fixed In Version: RHEL: ceph-ansible-3.2.47-1.el7cp; Ubuntu: ceph-ansible_3.2.47-2redhat1
Last Closed: 2020-08-18 18:05:58 UTC
Type: Bug
Bug Blocks: 1578730    
Attachments:
  ceph-0 docker2podman execution (flags: none)

Description Francesco Pantano 2020-07-30 09:40:16 UTC
Created attachment 1702920 [details]
ceph-0 docker2podman execution

Description of problem:

During a fast forward upgrade (FFU) from OSP 13 to 16.1 with a director-deployed Ceph configured for bluestore:

```
CephAnsibleDisksConfig:
  devices:
    - '/dev/vdb'
    - '/dev/vdc'
    - '/dev/vdd'
    - '/dev/vde'
    - '/dev/vdf'
  osd_scenario: lvm
  osd_objectstore: bluestore
```
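
TripleO passes the contents of CephAnsibleDisksConfig through to ceph-ansible as variables, so on the Ceph nodes the playbooks effectively run with settings along these lines (an illustrative rendering of those variables, not a capture of the generated vars file):

```
# Roughly the ceph-ansible variables resulting from the THT snippet above
# (illustrative; the exact generated vars file layout is an assumption).
devices:
  - /dev/vdb
  - /dev/vdc
  - /dev/vdd
  - /dev/vde
  - /dev/vdf
osd_scenario: lvm
osd_objectstore: bluestore
```

The osd_scenario: lvm setting is what the docker2podman playbook trips over below.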

The deployment uses ceph-ansible-3.2.46-1.el7cp.noarch. When the Ceph nodes upgrade step is reached and the command:

```
openstack overcloud external-upgrade run --stack qe-Cloud-0 --tags ceph_systemd -e ceph_ansible_limit=ceph-0
```

is executed, it fails because the disk_list variable is undefined:

```
 fatal: [ceph-0]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'disk_list' is undefined"}

```
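
For reference, a guard of the following shape in the OSD portion of the docker2podman tasks would avoid dereferencing disk_list when the lvm scenario is in use. This is only an illustrative sketch, written under the assumption that disk_list is registered solely for the older disk-based scenarios; the task name, template name, and paths are hypothetical, and it is not the actual change shipped in ceph-ansible-3.2.47:

```
# Illustrative sketch only -- not the actual ceph-ansible 3.2.47 patch.
# Assumption: with osd_scenario: lvm there are no per-disk container
# units, so disk_list is never registered and any task expanding it
# must either be skipped or loop over a defaulted (empty) value.
- hosts: osds
  gather_facts: false
  tasks:
    - name: Generate per-disk podman unit scripts (non-lvm scenarios only)  # hypothetical task
      template:
        src: ceph-osd-run.sh.j2                                             # hypothetical template
        dest: "/usr/share/ceph-osd-run-{{ item }}.sh"
        mode: "0755"
      loop: "{{ disk_list.stdout_lines | default([]) }}"  # default() keeps lvm hosts from failing
      when: osd_scenario != 'lvm'
```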

Here are the Ceph versions:

```
[root@controller-0 ~]# podman exec -it ceph-mon-controller-0 ceph versions
{
    "mon": {
        "ceph version 12.2.12-84.el7cp (1ce826ed564c8063ac6c876df66bd8ab31b6cc66) luminous (stable)": 3
    },
    "mgr": {
        "ceph version 12.2.12-84.el7cp (1ce826ed564c8063ac6c876df66bd8ab31b6cc66) luminous (stable)": 3 
    },
    "osd": { 
        "ceph version 12.2.12-101.el7cp (20a4945f2321019ed50c1844b413059c07304074) luminous (stable)": 15
    },
    "mds": {},
    "overall": {
        "ceph version 12.2.12-101.el7cp (20a4945f2321019ed50c1844b413059c07304074) luminous (stable)": 15,
        "ceph version 12.2.12-84.el7cp (1ce826ed564c8063ac6c876df66bd8ab31b6cc66) luminous (stable)": 6 
    }
}
```

and the container image used is:

```
undercloud-0.ctlplane.redhat.local:8787/rhceph/rhceph-3-rhel7:3-40
```


Version-Release number of selected component (if applicable):
ceph-ansible-3.2.46-1.el7cp.noarch

How reproducible:


Steps to Reproduce:
1. Deploy OSP 13 with a director-deployed containerized Ceph cluster using osd_scenario: lvm and osd_objectstore: bluestore.
2. Run the FFU to 16.1 until the Ceph nodes upgrade step is reached.
3. Run: openstack overcloud external-upgrade run --stack qe-Cloud-0 --tags ceph_systemd -e ceph_ansible_limit=ceph-0

Actual results:
The docker2podman playbook fails on the OSD node with: AnsibleUndefinedVariable: 'disk_list' is undefined

Expected results:
The docker2podman playbook completes and the Ceph container systemd units are switched from docker to podman.

Additional info:

Comment 12 errata-xmlrpc 2020-08-18 18:05:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 3.3 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3504