Bug 1244287 - ceph-deploy prepare results in activated OSDs
Summary: ceph-deploy prepare results in activated OSDs
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Installer
Version: 1.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 1.3.4
Assignee: Vasu Kulkarni
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-07-17 16:33 UTC by Travis Rhoden
Modified: 2022-02-21 18:19 UTC
CC List: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-01-18 08:28:07 UTC
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1327628 0 high CLOSED With dmcrypt flag enable ceph-disk fails to create second journal partition in same journal device 2021-02-22 00:41:40 UTC
Red Hat Issue Tracker RHCEPH-3353 0 None None None 2022-02-21 18:19:44 UTC

Internal Links: 1327628

Description Travis Rhoden 2015-07-17 16:33:37 UTC
When running "ceph-deploy osd prepare" on block devices to serve as OSDs, the OSDs are actually activated as well, resulting in them being UP and IN the Ceph cluster.

This presents a few problems:

1) This is not what our documentation says prepare does
2) We instruct users to run prepare and then activate, but activate returns errors if the OSD is already active
3) The preferred method for block devices is "ceph-deploy osd create" (see the sketch after this list)
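
For reference, a minimal sketch of the documented two-step workflow versus the single-step alternative; the hostname and device names below are placeholders, not taken from a real run:

# Documented two-step workflow: prepare, then activate
$ ceph-deploy osd prepare ceph-osd0:vdb
$ ceph-deploy osd activate ceph-osd0:vdb1:vdb2

# Single-step alternative, preferred for block devices
$ ceph-deploy osd create ceph-osd0:vdb

With the behavior described above, the prepare call alone already brings the OSD UP and IN, so the follow-up activate call is what produces the errors in point 2.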

I'm not 100% certain why the prepare step results in activated OSDs, but I suspect udev is involved.
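
One way to check the udev hypothesis; the rules path below is the typical install location for the ceph packages and is an assumption, not something verified in this report:

# Inspect the ceph udev rules that auto-activate prepared OSD partitions
$ cat /usr/lib/udev/rules.d/95-ceph-osd.rules

If a rule there runs ceph-disk activation against partitions carrying the Ceph OSD GPT type GUID, that would explain why prepare alone ends up with an activated OSD.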

Let's sort all this out!

Comment 2 Vasu Kulkarni 2015-12-07 23:53:50 UTC
Loic, you pointed out that ceph-disk prepare activates the OSDs. That is the case on 7.1 but not on 7.2; perhaps you can sort this out here.

Comment 3 Loic Dachary 2015-12-08 08:52:06 UTC
@Vasu the original issue filed by Travis seems to be about an inconsistency between the documentation and the actual behavior. What you're seeing with 7.2 is different: the OSD does not activate automatically as it should on your setup. Did you manage to reproduce this behavior on a freshly installed 7.2? It would be great to have a way to reproduce the problem.

Comment 4 François Cami 2015-12-09 10:41:55 UTC
These tests are two weeks old, but they were run on a freshly installed 7.2:

$ ceph-deploy osd prepare ceph-osd0:vdb
vdb gets split into two partitions. Calling
$ ceph-deploy osd activate ceph-osd0:vdb1:vdb2
=> OK

However, calling activate after:
$ ceph-deploy osd prepare ceph-osd0:vdb:vdc1
leads to errors (and duplicated OSDs).

The only safe way I've found to activate OSDs with explicitly declared journals on the CLI is to reboot the node; I assume starting the service would also be enough.
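
Possible alternatives to a full reboot, untested here; the device path is an example:

# Activate the prepared data partition directly on the OSD node
$ ceph-disk activate /dev/vdb1

# Or re-trigger the udev block-device add events that normally do the activation
$ udevadm trigger --action=add --subsystem-match=block

Since the udev rules essentially invoke ceph-disk activation themselves, running it by hand should be equivalent to the reboot path.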

HTH

