Description of problem:

Not sure if this is related to bz 1383438.

Using the latest ceph-ansible from the ktdreyer repo, I don't see any failures in the playbook run, but the OSDs are not activated afterwards.

hosts:

[mons]
ceph-vakulkar-run338-node1-mon monitor_interface=eth0

[osds]
ceph-vakulkar-run338-node3-osd monitor_interface=eth0 devices='["/dev/vdb", "/dev/vdc", "/dev/vdd"]'
ceph-vakulkar-run338-node2-osd monitor_interface=eth0 devices='["/dev/vdb", "/dev/vdc", "/dev/vdd"]'

group_vars/all:

ceph_conf_overrides:
  global:
    osd_default_pool_size: 2
    osd_pool_default_pg_num: 128
    osd_pool_default_pgp_num: 128
ceph_origin: distro
ceph_stable: true
ceph_stable_rh_storage: true
ceph_test: true
journal_collocation: true
journal_size: 1024
osd_auto_discovery: false
public_network: 172.16.0.0/12

Full logs at: https://paste.fedoraproject.org/447783/21048147/raw/
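For anyone reproducing this, a quick way to confirm the symptom after the playbook finishes (rough sketch; assumes the ceph CLI and ceph-disk are installed on the nodes and an admin keyring is present on the monitor):

  # on the monitor node: activated OSDs should show up as up/in
  ceph -s
  ceph osd tree

  # on an OSD node: check whether the devices were prepared but never activated
  ceph-disk list
  systemctl list-units 'ceph-osd@*'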
Tried changing ceph_stable_rh_storage to ceph_rhcs in group_vars/all as suggested by Ken/Andrew; that didn't help, so this one looks different from bz 1383438.
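For clarity, the attempted change amounts to swapping the repo-selection variable in group_vars/all, roughly like this (sketch only; the rest of the file stays as in the description above):

  # old variable name, as originally used
  #ceph_stable_rh_storage: true
  # newer ceph-ansible variable name suggested instead
  ceph_rhcs: true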
Upstream GitHub issue: https://github.com/ceph/ceph-ansible/issues/1025
Ansible version?
This was from http://file.rdu.redhat.com/~kdreyer/scratch/rhscon-builds-for-rhceph-2.1/, which had ceph-ansible 1.0.8-1 (latest master, I think), but it has since moved back to 1.0.5.
I meant the Ansible version, not the ceph-ansible version :)
We think this is fixed in the latest builds currently undergoing testing (ceph-ansible-2.1.9-1.el7scon as of this writing). Would you please retest with these?
No more issues with 2.1.9-1.el7scon
Fix shipped in https://access.redhat.com/errata/RHBA-2017:1496