Bug 1378415 - OSD installation fails on OSD nodes with just one blank Disk, but RHSC2 claims "success".
Summary: OSD installation fails on OSD nodes with just one blank Disk, but RHSC2 claims "success".
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat
Component: Ceph Integration
Version: 2
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Nishanth Thomas
QA Contact: sds-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-09-22 11:13 UTC by Andreas Stolzenberger
Modified: 2018-11-19 05:41 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-19 05:41:55 UTC
Target Upstream Version:



Description Andreas Stolzenberger 2016-09-22 11:13:42 UTC
Description of problem:

When trying to roll out Ceph OSD nodes that only have one blank disk, RHSC2 reports "Success" when setting up the node, but the node will not show up as part of the cluster.

When using OSD nodes with two blank disks, at the beginning of the rollout RHSC2 will report 2 OSDs and the capacity of both disks.
In the end, the smaller disk will be used as the journal disk, and the OSD node will only run with one OSD.

Fix: RHSC2 *must* report to the user that:
- It cannot deploy an OSD node with only one disk, since it requires a separate disk for the journal

- When rolling out an OSD node with two disks, RHSC2 needs to report to the user that one of the disks will be used as the journal

Optional:

Make the journal setup configurable prior to OSD deployment.
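
A minimal sketch of the kind of pre-flight check requested above (a hypothetical helper in Python, not part of RHSC2 or ceph-installer; the (device, size) disk list is an assumed representation of what the console already discovers about each node):

# Hypothetical pre-flight check for the behaviour requested above.
# Not part of RHSC2/ceph-installer; "blank_disks" (device path, size in GB)
# is an assumed representation of what the console already knows per node.

def validate_osd_node(hostname, blank_disks):
    """Return (ok, message) describing how the node would be deployed.

    blank_disks: list of (device, size_gb) tuples for unused/blank disks.
    """
    if len(blank_disks) < 2:
        # One blank disk is not enough: a separate journal disk is required,
        # so deployment should fail loudly instead of reporting "success".
        return False, (
            "%s: cannot deploy OSDs with %d blank disk(s); a separate disk "
            "is required for the journal" % (hostname, len(blank_disks))
        )

    # With two or more disks, warn the user that the smallest disk will be
    # consumed as the journal and will not contribute OSD capacity.
    disks = sorted(blank_disks, key=lambda d: d[1])
    journal, data = disks[0], disks[1:]
    return True, (
        "%s: %s (%d GB) will be used as the journal; OSDs will be created "
        "on: %s" % (hostname, journal[0], journal[1],
                    ", ".join(dev for dev, _ in data))
    )

if __name__ == "__main__":
    print(validate_osd_node("osd1", [("/dev/vdb", 100)]))
    print(validate_osd_node("osd2", [("/dev/vdb", 100), ("/dev/vdc", 10)]))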

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Set up RHSC2 according to the documentation
2. Create three MON nodes (VMs) and a number of OSD nodes (VMs) with one OS disk and one unformatted second disk to be used as the OSD
2a. Alternatively: create OSD nodes with two blank disks (e.g. one 100 GB and one 10 GB disk)
3. Install the RHSC2 agent on the MONs and OSDs
4. Deploy the Ceph cluster

5. When using OSD nodes with only one disk, the rollout will report "success", but you will end up with a cluster that contains only monitors and no OSD nodes
5a. You can try to add the OSD nodes to the MON-only cluster: again you will get no error but a success message, and still no OSDs in the cluster

6. When doing the same with the OSD nodes with two disks: prior to the rollout, RHSC2 will report both disks as OSDs and the capacity of both. Afterwards, the cluster will run with only one OSD per OSD node (a small check to verify this is sketched below).
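
To confirm what steps 5 and 6 actually produced, the OSD count can be checked directly on a monitor node. A small sketch (assumes the standard "ceph" CLI and admin keyring are available there; the output is only printed, not parsed, since the exact format differs between Ceph releases):

# Sketch: show how many OSDs actually joined the cluster after deployment.
# Assumes the standard "ceph" CLI is usable on a monitor node; output is
# printed as-is rather than parsed, because formats vary between releases.

import subprocess

def show_osd_status():
    for cmd in (["ceph", "osd", "stat"], ["ceph", "osd", "tree"]):
        print("$ " + " ".join(cmd))
        result = subprocess.run(cmd, capture_output=True, text=True)
        print(result.stdout or result.stderr)

if __name__ == "__main__":
    show_osd_status()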


Additional info:
The setup was run in a fully virtualized (KVM/libvirt) environment.

Comment 3 Ken Dreyer (Red Hat) 2017-03-02 17:37:26 UTC
I don't see any work that needs to happen in ceph-installer or ceph-ansible for this bug.

Comment 4 Shubhendu Tripathi 2018-11-19 05:41:55 UTC
This product is EOL now

