Bug 1383468

Summary: [ansible 1.0.8-1] unactivated OSDs
Product: Red Hat Storage Console
Reporter: Vasu Kulkarni <vakulkar>
Component: ceph-ansible
Assignee: Sébastien Han <shan>
Status: CLOSED CURRENTRELEASE
QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: unspecified
Priority: unspecified
Version: 2
CC: adeza, aschoen, ceph-eng-bugs, gmeno, kdreyer, nthomas, sankarshan, seb, vakulkar
Target Milestone: ---
Target Release: 3
Hardware: Unspecified
OS: Unspecified
Fixed In Version: ceph-ansible-2.1.9-1.el7scon
Last Closed: 2017-06-28 19:47:33 UTC
Type: Bug

Description Vasu Kulkarni 2016-10-10 17:44:50 UTC
Description of problem:

Not sure if this is related to bz 1383438.

Using the latest ceph-ansible (1.0.8-1) from the ktdreyer repo, I don't see any failures in the playbook run, but the OSDs are not activated afterward.
 
hosts:
 
[mons]
ceph-vakulkar-run338-node1-mon monitor_interface=eth0
[osds]
ceph-vakulkar-run338-node3-osd monitor_interface=eth0  devices='["/dev/vdb", "/dev/vdc", "/dev/vdd"]'
ceph-vakulkar-run338-node2-osd monitor_interface=eth0  devices='["/dev/vdb", "/dev/vdc", "/dev/vdd"]'


group_vars/all:


ceph_conf_overrides:
  global:
    osd_pool_default_size: 2
    osd_pool_default_pg_num: 128
    osd_pool_default_pgp_num: 128
ceph_origin: distro
ceph_stable: true
ceph_stable_rh_storage: true
ceph_test: true
journal_collocation: true
journal_size: 1024
osd_auto_discovery: false
public_network: 172.16.0.0/12
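
To confirm the symptom after the playbook run, a quick check (a sketch, assuming ceph-disk is available, as it is on RHCS 2.x nodes):

# On an OSD node: unactivated data partitions show as "prepared" instead of "active"
ceph-disk list

# From a monitor: compare the number of up/in OSDs against the expected six (2 nodes x 3 devices)
ceph osd stat
ceph osd tree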



Full logs at: https://paste.fedoraproject.org/447783/21048147/raw/

Comment 2 Vasu Kulkarni 2016-10-10 18:58:44 UTC
Tried changing ceph_stable_rh_storage to ceph_rhcs in group_vars/all as per ken/andrew; that didn't help, so this one looks different from bz 1383438.
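
For reference, the attempted change in group_vars/all (a sketch; assuming the old key is simply replaced rather than kept alongside):

# ceph_stable_rh_storage: true   <- old variable name
ceph_rhcs: true                  # newer name for the same RHCS flag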

Comment 3 Alfredo Deza 2016-10-10 21:01:09 UTC
Upstream Github issue: https://github.com/ceph/ceph-ansible/issues/1025

Comment 5 seb 2016-10-13 14:10:34 UTC
ansible version?

Comment 6 Vasu Kulkarni 2016-10-13 17:10:41 UTC
This was from http://file.rdu.redhat.com/~kdreyer/scratch/rhscon-builds-for-rhceph-2.1/, which was 1.0.8-1 (I think the latest master at the time), but it has now moved back to 1.0.5.

Comment 7 seb 2016-10-14 09:24:46 UTC
I meant the ansible version, not ceph-ansible :)
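
The two can be checked independently on the admin node (standard commands):

ansible --version     # the core Ansible version being asked about here
rpm -q ceph-ansible   # the ceph-ansible playbook package from comment 6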

Comment 8 Ken Dreyer (Red Hat) 2017-03-03 17:15:39 UTC
We think this is fixed in the latest builds currently undergoing testing (ceph-ansible-2.1.9-1.el7scon as of this writing). Would you please retest with these?
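
To confirm the build under test before rerunning the playbook (a standard rpm query; output is name-version-release.arch):

rpm -q ceph-ansible
# expect something like: ceph-ansible-2.1.9-1.el7scon.noarch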

Comment 9 Vasu Kulkarni 2017-03-03 17:28:11 UTC
No more issues with ceph-ansible-2.1.9-1.el7scon.

Comment 10 Ken Dreyer (Red Hat) 2017-06-28 19:47:33 UTC
Fix shipped in https://access.redhat.com/errata/RHBA-2017:1496