When running "ceph-deploy osd prepare" on block devices to serve as OSDs, the OSDs are actually activated as well, resulting in them being UP and IN the Ceph cluster. This presents a few problems:

1) This is not what our documentation says it does.
2) We instruct users to run prepare and then activate, but activate will return errors if the OSD is already active.
3) The preferred method for block devices is to use "ceph-deploy osd create".

I'm not 100% certain why the prepare step results in activated OSDs, but I suspect something with udev. Let's sort all this out!
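For reference, a minimal sketch of the two workflows being compared; the host and device names below are placeholders, not taken from a real cluster:

$ ceph-deploy osd prepare osd-host:sdb      # documented step 1: partition and format the disk
$ ceph-deploy osd activate osd-host:sdb1    # documented step 2: errors if prepare already brought the OSD up

$ ceph-deploy osd create osd-host:sdb       # preferred single command for block devices (prepare + activate)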
loic, you pointed out that ceph-disk prepare activates the OSDs; this is the case for 7.1 but not for 7.2. Perhaps you can sort this out here.
@Vasu the original issue filed by travis seems to be about an inconsistency between the documentation and the actual behavior. What you're seeing with 7.2 is different: the OSD does not activate automatically as it should on your setup. Did you manage to reproduce this behavior on a freshly installed 7.2? It would be great to have a way to reproduce the problem.
These tests are two weeks old, but still, on a freshly installed 7.2:

$ ceph-deploy osd prepare ceph-osd0:vdb

vdb gets split into two partitions. Calling

$ ceph-deploy osd activate ceph-osd0:vdb1:vdb2

=> OK. However, calling activate after

$ ceph-deploy osd prepare ceph-osd0:vdb:vdc1

leads to errors (and duplicated OSDs). The only safe way I've found to activate OSDs with explicitly declared journals on the CLI is to reboot the node - I'm assuming starting the service would be enough. HTH
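Untested thought, assuming the layout from the prepare call above (data on vdb1): rather than a full reboot, running ceph-disk on the OSD node itself should trigger the same activation the udev rule would perform at boot:

$ ssh ceph-osd0 sudo ceph-disk activate /dev/vdb1    # activate the already-prepared data partition instead of rebooting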