Bug 1378415

Summary: OSD installation fails on OSD nodes with just one blank Disk, but RHSC2 claims "success".
Product: Red Hat Storage Console Reporter: Andreas Stolzenberger <astolzen>
Component: Ceph Integration Assignee: Nishanth Thomas <nthomas>
Status: CLOSED WONTFIX QA Contact: sds-qe-bugs
Severity: medium Docs Contact:
Priority: unspecified    
Version: 2CC: adeza, aschoen, ceph-eng-bugs, julim, kdreyer, mkarnik, nthomas, sankarshan
Target Milestone: ---   
Target Release: 3   
Hardware: Unspecified   
OS: Linux   
Whiteboard:
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2018-11-19 05:41:55 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:

Description Andreas Stolzenberger 2016-09-22 11:13:42 UTC
Description of problem:

When rolling out Ceph OSD nodes that have only one blank disk, RHSC2 reports "Success" when setting up the node, but the node will not show up as part of the cluster.

When using OSD nodes with two blank disks, RHSC2 will report 2 OSDs and the capacity of both disks at the beginning of the rollout.
In the end, the smaller disk will be used as the journal disk, and the OSD node will run with only one OSD.

Fix: RHSC2 *must* report to the user that:
- It cannot deploy an OSD node with only one disk, since a separate disk is required for the journal

- When rolling out an OSD node with two disks, one of the disks will be used as the journal
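The requested pre-flight check is simple to state in code. The sketch below is hypothetical, not actual RHSC2 code; the function name, input format, and messages are assumptions made for illustration:

```python
# Hypothetical sketch of the pre-flight check RHSC2 should perform
# before deploying an OSD node; not actual RHSC2 code.

def check_osd_node_disks(blank_disks):
    """blank_disks: dict mapping device path -> size in GB."""
    if len(blank_disks) < 2:
        # A dedicated journal disk is required, so a single blank disk
        # is not enough -- refuse the deployment with a clear error.
        raise ValueError(
            "Cannot deploy OSD node: a separate journal disk is required, "
            "but only %d blank disk(s) found" % len(blank_disks)
        )
    # The smallest disk is consumed as the journal, not as an OSD;
    # warn the user so the reported OSD count matches reality.
    journal = min(blank_disks, key=blank_disks.get)
    osds = sorted(d for d in blank_disks if d != journal)
    return {
        "journal": journal,
        "osds": osds,
        "warning": "Disk %s will be used as the journal, not as an OSD" % journal,
    }

# Two-disk node from the reproduction steps: 100 GB + 10 GB
plan = check_osd_node_disks({"/dev/vdb": 100, "/dev/vdc": 10})
print(plan["journal"])  # /dev/vdc
print(plan["osds"])     # ['/dev/vdb']
```

With this in place, a one-disk node fails loudly instead of producing a "success" message and an empty cluster.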

optional:

Make the journal setup configurable prior to OSD deployment.
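For comparison, the jewel-era ceph-ansible (which ceph-installer drives underneath RHSC2) already exposes the journal layout. The variable names below are taken from the ceph-ansible 2.x `osds.yml.sample` and may differ between releases, so treat this as a hedged sketch rather than a supported RHSC2 interface:

```yaml
# group_vars/osds.yml -- sketch based on ceph-ansible 2.x sample defaults;
# verify variable names against the osds.yml.sample shipped with your release.
devices:
  - /dev/vdb          # 100 GB disk: becomes the OSD data device
raw_multi_journal: true
raw_journal_devices:
  - /dev/vdc          # 10 GB disk: explicitly pinned as the journal
```

Surfacing an equivalent choice in the RHSC2 UI would make the "smaller disk becomes the journal" behavior explicit instead of implicit.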

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Set up RHSC2 according to the Documentation
2. Create three mon nodes (VMs) + create a bunch of OSD nodes (VMs), each with one OS disk and one unformatted second disk to be used as an OSD
2a. Alternative: create a bunch of OSD nodes with two blank disks (e.g. one 100 GB and one 10 GB disk)
3. Install the RHSC2 agent on the mons and OSDs
4. Deploy Ceph-Cluster 

5. When using OSD nodes with only one disk, the rollout will report "success", but you will end up with a cluster that contains only monitors and no OSD nodes
5a. You can try to add the OSD nodes to the mon-only cluster: again you will get no error but a success message, and still no OSDs in the cluster

6. When doing the same with the OSD nodes with two disks: prior to the rollout, RHSC2 will report both disks as OSDs and the capacity of both. Afterwards, the cluster will run with only one OSD per OSD node.
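The missing OSDs are easy to confirm from the monitor side. A minimal parsing sketch, assuming the jewel-era `ceph -s` output format; the sample text below is illustrative, not captured from the reported setup:

```python
import re

# Illustrative `ceph -s` output for the mon-only cluster described above;
# the osdmap line format matches jewel-era Ceph.
sample = """\
    cluster 2c5b1d0a-example
     health HEALTH_ERR
     monmap e1: 3 mons
     osdmap e5: 0 osds: 0 up, 0 in
"""

# Pull the total / up / in counts out of the osdmap line.
match = re.search(r"osdmap\s+\S+:\s+(\d+) osds: (\d+) up, (\d+) in", sample)
total, up, up_in = (int(g) for g in match.groups())
print("osds=%d up=%d in=%d" % (total, up, up_in))  # osds=0 up=0 in=0
```

A count of 0 here, despite the "success" message in the console, is the contradiction this bug asks RHSC2 to surface.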


Additional info:
The setup was run in a fully virtualized (KVM/libvirt) environment.

Comment 3 Ken Dreyer (Red Hat) 2017-03-02 17:37:26 UTC
I don't see any work that needs to happen in ceph-installer or ceph-ansible for this bug.

Comment 4 Shubhendu Tripathi 2018-11-19 05:41:55 UTC
This product is EOL now