Bug 1654785

Summary: unable to provision ceph-disk OSDs in non-containerized luminous deployments
Product: Red Hat Ceph Storage
Reporter: Ram Raja <rraja>
Component: Ceph-Ansible
Assignee: Sébastien Han <shan>
Status: CLOSED ERRATA
QA Contact: Vasishta <vashastr>
Severity: high
Priority: high
Version: 3.2
CC: aschoen, ceph-eng-bugs, ceph-qe-bugs, gmeno, hnallurv, nthomas, pasik, sankarshan, seb, tnielsen, tserlin
Target Milestone: rc
Target Release: 3.2
Hardware: Unspecified
OS: Unspecified
Fixed In Version: RHEL: ceph-ansible-3.2.0-0.1.rc6.el7cp; Ubuntu: ceph-ansible_3.2.0~rc6-2redhat1
Doc Type: If docs needed, set a value
Last Closed: 2019-01-03 19:02:25 UTC
Type: Bug

Description Ram Raja 2018-11-29 16:46:35 UTC
Description of problem:
Unable to provision ceph-disk OSDs in non-containerized luminous deployments. ceph-ansible fails when it tries to start the OSDs.

Version-Release number of selected component (if applicable):

How reproducible: always

Steps to Reproduce:
Try to provision a luminous Ceph cluster with ceph-disk OSDs (i.e., osd_scenario != lvm) in a non-containerized deployment (i.e., containerized_deployment: false).
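
For reference, the reproducing configuration boils down to two group_vars settings; a minimal sketch (the repository settings, scenario choice, and device path below are illustrative assumptions, not taken from this report):

```yaml
# group_vars/all.yml -- minimal non-containerized ceph-disk reproducer (sketch)
ceph_origin: repository
ceph_repository: rhcs
containerized_deployment: false   # non-containerized deployment
osd_scenario: collocated          # any non-lvm scenario provisions via ceph-disk
devices:
  - /dev/sdb                      # assumed device name, matching the error below
```

With this configuration, the site.yml playbook run fails at the task that starts the OSD services.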

Actual results: The OSDs fail to start with an error message like:
Unable to start service ceph-osd@sdb: Job for ceph-osd@sdb.service failed because the control process exited with error code. See "systemctl status ceph-osd@sdb.service" and "journalctl -xe" for details.

Expected results: The OSDs start and the cluster is provisioned.

Comment 3 Ram Raja 2018-11-29 16:48:10 UTC
Upstream fix is here,

Comment 9 Vasishta 2018-12-03 16:55:26 UTC
Working fine with ceph-ansible-3.2.0-0.1.rc6.el7cp.noarch.

Vasishta Shastry
QE, Ceph

Comment 11 errata-xmlrpc 2019-01-03 19:02:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.