Bug 1244287

Summary: ceph-deploy prepare results in activated OSDs
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Travis Rhoden <trhoden>
Component: Ceph-Installer
Assignee: Vasu Kulkarni <vakulkar>
Status: CLOSED WONTFIX
QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: medium
Priority: unspecified
Version: 1.3.0
CC: adeza, aschoen, ceph-eng-bugs, fcami, flucifre, gmeno, hnallurv, kdreyer, nthomas, sankarshan, vakulkar
Target Milestone: rc   
Target Release: 1.3.4   
Hardware: Unspecified   
OS: Unspecified   
Doc Type: Bug Fix
Last Closed: 2018-01-18 08:28:07 UTC
Type: Bug

Description Travis Rhoden 2015-07-17 16:33:37 UTC
When running "ceph-deploy osd prepare" on block devices to serve as OSDs, the OSDs are actually activated as well, resulting in them being UP and IN the Ceph cluster.

This presents a few problems:

1) This is not what our documentation says it does
2) We instruct users to run prepare and then activate, but activate will return errors if the OSD is already active
3) The preferred method is to use "ceph-deploy osd create" for block devices (see the sketch after this list)
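
As a hedged sketch only (reusing the ceph-osd0/vdb names that appear in comment 4 below; substitute your own host and device names), the two workflows look like:

# documented two-step path: prepare, then activate data:journal
$ ceph-deploy osd prepare ceph-osd0:vdb
$ ceph-deploy osd activate ceph-osd0:vdb1:vdb2

# preferred one-step path for block devices
$ ceph-deploy osd create ceph-osd0:vdb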

I'm not 100% certain why the prepare step results in activated OSDs, but I suspect udev is involved.
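
If udev is the culprit, one way to check (a sketch assuming the stock packaging; the exact path can vary by release): the ceph package ships a udev rule file, 95-ceph-osd.rules, that kicks off ceph-disk activation as soon as a partition appears whose GPT type GUID marks it as Ceph OSD data, which would explain prepare alone bringing the OSD up.

# inspect the shipped udev rules (path is an assumption)
$ cat /usr/lib/udev/rules.d/95-ceph-osd.rules

# check the GPT type GUID of the prepared data partition; the OSD data
# type GUID the rule matches on is 4fbd7e29-9d25-41b8-afd0-062c0ceff05d
$ sgdisk -i 1 /dev/vdb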

Let's sort all this out!

Comment 2 Vasu Kulkarni 2015-12-07 23:53:50 UTC
Loic, you pointed out that ceph-disk prepare activates the OSDs; this is the case for 7.1 but not for 7.2. Perhaps you can sort this out here.

Comment 3 Loic Dachary 2015-12-08 08:52:06 UTC
@Vasu the original issue filed by Travis seems to be about an inconsistency between the documentation and the actual behavior. What you're seeing with 7.2 is different: the OSD does not activate automatically as it should on your setup. Did you manage to reproduce this behavior from a freshly installed 7.2? It would be great to have a way to reproduce the problem.

Comment 4 François Cami 2015-12-09 10:41:55 UTC
These tests are two weeks old, but still, on a freshly installed 7.2:

$ ceph-deploy osd prepare ceph-osd0:vdb
vdb gets split into two partitions. Calling
$ ceph-deploy osd activate ceph-osd0:vdb1:vdb2
=> OK

However, calling activate after:
$ ceph-deploy osd prepare ceph-osd0:vdb:vdc1
leads to errors (and duplicated OSDs).

The only safe way I've found to activate OSDs with explicitly declared journals on the CLI is to reboot the node; I'm assuming starting the service would be enough.
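
Two reboot-free alternatives that may be worth trying (untested sketches for this setup): re-fire the udev add event for the prepared data partition, or run ceph-disk directly on the OSD node instead of going through ceph-deploy:

# replay the udev "add" event for the data partition
$ udevadm trigger --action=add --sysname-match=vdb1

# or activate directly on the node
$ ceph-disk activate /dev/vdb1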

HTH