Bug 1584179 - openstack pool creation fails when collocating an OSD with a MON in a containerized deployment
Summary: openstack pool creation fails when collocating an OSD with a MON in a containerized deployment
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Ceph-Ansible
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Target Release: 3.1
Assignee: leseb
QA Contact: Rachana Patel
Depends On:
Blocks: 1548353 1590939 1592846
Reported: 2018-05-30 12:39 UTC by Guillaume Abrioux
Modified: 2019-10-24 05:38 UTC (History)
12 users

Fixed In Version: RHEL: ceph-ansible-3.1.0-0.1.rc5.el7cp Ubuntu: ceph-ansible_3.1.0~rc5-2redhat1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1590939 (view as bug list)
Last Closed: 2019-01-08 17:24:01 UTC
Target Upstream Version:


System ID Priority Status Summary Last Updated
Github ceph ceph-ansible pull 2661 None None None 2018-05-30 12:42:20 UTC

Description Guillaume Abrioux 2018-05-30 12:39:28 UTC
Description of problem:
Playbook fails when attempting to create openstack pools on a collocated scenario (OSD with a MON) in a containerized deployment.

Version-Release number of selected component (if applicable):
ceph-ansible v3.1.0rc4

How reproducible:
1/ deploy a containerized cluster with MON and OSD collocated on a node
2/ set openstack_config: true in group_vars/all.yml

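The reproduction settings above can be sketched as a minimal group_vars/all.yml fragment. This is an illustrative sketch, not the full ceph-ansible defaults file; the two pool entries shown are examples of the kind of pools the playbook creates.

```yaml
# group_vars/all.yml -- minimal sketch of the reproduction settings.
# containerized_deployment and openstack_config are ceph-ansible
# variables; the pool list below is illustrative, not the full default.
containerized_deployment: true
openstack_config: true
openstack_pools:
  - "{{ openstack_glance_pool }}"
  - "{{ openstack_cinder_pool }}"
```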
Actual results:
The playbook fails when trying to create the openstack pools because the container name used in the following tasks is incorrect:

- https://github.com/ceph/ceph-ansible/commit/564a662baf10b9085a6da8c9152400914e310d15#diff-5a429d6364fa796579c46ab1ba5b99c8R4
- https://github.com/ceph/ceph-ansible/commit/564a662baf10b9085a6da8c9152400914e310d15#diff-5a429d6364fa796579c46ab1ba5b99c8R13

The fact {{ docker_exec_cmd }} gets reset because of https://github.com/ceph/ceph-ansible/commit/564a662baf10b9085a6da8c9152400914e310d15#diff-f348734057e459711e32b60331eb3004R2
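For context, the pool-creation tasks run `ceph` through the MON container via a `docker_exec_cmd` fact, roughly as in the sketch below. Task names and structure are paraphrased for illustration, not the exact ceph-ansible code:

```yaml
# Hedged sketch of the pattern involved, not the exact ceph-ansible tasks.
# docker_exec_cmd must point at the MON container on the target node; if a
# later role re-sets the fact (e.g. to an OSD container name when MON and
# OSD are collocated), the pool-creation task runs "docker exec" against
# the wrong container and the play fails.
- name: set docker exec command fact
  set_fact:
    docker_exec_cmd: "docker exec ceph-mon-{{ ansible_hostname }}"
  when: containerized_deployment | bool

- name: create openstack pool(s)
  command: >
    {{ docker_exec_cmd }} ceph --cluster {{ cluster }}
    osd pool create {{ item.name }} {{ item.pg_num }}
  with_items: "{{ openstack_pools | unique }}"
```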

Expected results:
openstack pools are created properly

Comment 3 Guillaume Abrioux 2018-05-30 12:42:21 UTC
fix will be in v3.1.0rc5

Comment 7 Rachana Patel 2018-06-01 16:58:03 UTC
Verified with ceph-ansible-3.1.0-0.1.rc5.el7cp on RHEL

Cluster is up and running and the pools are created, hence moving to VERIFIED.

[root@compute-hci-0 ~]# ceph -s
  cluster:
    id:     a32fabc2-64fc-11e8-b23c-0cc47af3eb3a
    health: HEALTH_WARN
            clock skew detected on mon.compute-hci-1, mon.compute-hci-0, mon.compute-hci-3, mon.compute-hci-4

  services:
    mon: 5 daemons, quorum compute-hci-2,compute-hci-1,compute-hci-0,compute-hci-3,compute-hci-4
    mgr: compute-hci-2(active), standbys: compute-hci-1, compute-hci-4, compute-hci-0, compute-hci-3
    mds: cephfs-3/3/3 up  {0=compute-hci-1=up:active,1=compute-hci-2=up:active,2=compute-hci-3=up:active}, 2 up:standby
    osd: 5 osds: 5 up, 5 in
    rgw: 5 daemons active

  data:
    pools:   11 pools, 544 pgs
    objects: 280 objects, 8735 bytes
    usage:   551 MB used, 9309 GB / 9310 GB avail
    pgs:     544 active+clean

[root@compute-hci-0 ~]# ceph osd lspools
1 images,2 metrics,3 backups,4 vms,5 volumes,6 manila_data,7 manila_metadata,8 .rgw.root,9 default.rgw.control,10 default.rgw.meta,11 default.rgw.log,

Comment 8 Ken Dreyer (Red Hat) 2019-01-08 17:24:01 UTC
ceph-ansible 3.1.5 shipped in https://access.redhat.com/errata/RHBA-2018:2819
