Description of problem:
When rolling out Ceph OSD nodes that have only one blank disk, RHSC2 reports "Success" when setting up the node, but the node does not show up as part of the cluster.
When using OSD nodes with two blank disks, at the beginning of the rollout RHSC2 reports 2 OSDs and the capacity of both disks. In the end, the smaller disk is used as the journal disk, and the OSD node runs with only one OSD.

Fix: RHSC *must* report to the user that:
- It cannot deploy an OSD node with only one disk, since it requires a separate disk for the journal
- When rolling out an OSD node with two disks, one of the disks will be used as the journal

Optional: make the journal setup configurable prior to the OSD deployment.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Set up RHSC2 according to the documentation
2. Create three mon nodes (VMs) and a number of OSD nodes (VMs), each with one OS disk and one unformatted second disk to be used as an OSD
2a. Alternatively: create a number of OSD nodes with two blank disks (e.g. one 100 GB and one 10 GB disk)
3. Install the RHSC2 agent on the mons and OSDs
4. Deploy the Ceph cluster
5. When using OSD nodes with only one disk, the rollout reports "Success", but you end up with a cluster that contains only monitors and no OSD nodes
5a. You can try to add the OSD nodes to the mon-only cluster: again you get no error but a success message, and still no OSDs in the cluster
6. When doing the same with the OSD nodes with two disks: prior to the rollout, RHSC reports both disks as OSDs and the capacity of both. Afterwards, the cluster runs with only one OSD per OSD node.

Additional info:
Setup was run on a fully virtualized (KVM/libvirt) environment.
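For context on the requested "configurable journal setup": in ceph-ansible based deployments of that generation, journal placement was driven by OSD group variables rather than being chosen in the console UI. A minimal sketch of what an explicit pre-deployment journal configuration could look like, assuming ceph-ansible 2.x style variable names and hypothetical device paths (/dev/vdb, /dev/vdc); verify the exact variable names against the sample files shipped with the installed ceph-ansible version:

```yaml
# group_vars/osds.yml -- hedged sketch, not taken from this bug's environment.
# Variable names follow the ceph-ansible 2.x convention; device paths are
# placeholders for the 100 GB data disk and the 10 GB journal disk.

# Disks that should become OSD data devices
devices:
  - /dev/vdb          # 100 GB data disk

# Use a dedicated journal device instead of collocating the journal
# on the data disk
journal_collocation: false
raw_multi_journal: true
raw_journal_devices:
  - /dev/vdc          # 10 GB disk used only as the journal
```

Surfacing a choice like this (collocated vs. dedicated journal) in RHSC2 before deployment would address both reported problems: a one-disk node could be deployed with a collocated journal, and a two-disk node's journal assignment would be visible to the user instead of silent.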
I don't see any work that needs to happen in ceph-installer or ceph-ansible for this bug.
This product is EOL now