Description of problem:
Configuring OSDs with ceph-volume is documented in the Block Device Guide for 3.0. It should instead be located in the Red Hat Ceph Storage Administration Guide, since we support ceph-volume with the lvm plugin in RHCS 3.0.

Block Device Guide:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/block_device_guide/#using-the-ceph-volume-utility-to-deploy-osds

Should instead be in the Administration Guide:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/administration_guide/managing_cluster_size

Version-Release number of selected component (if applicable):
3.0
Also, this documentation needs to be expanded to guide users on how to use ceph-volume through ceph-ansible.
Worth noting that the current RHCS 3 Release Notes also reference the Block Device Guide:

    Support for deploying logical volumes as OSDs
    A new utility, ceph-volume, is now supported. The utility enables deployment of logical volumes as OSDs on Red Hat Enterprise Linux. For details, see the Using the ceph-volume Utility to Deploy OSDs chapter in the Block Device Guide for Red Hat Ceph Storage.
Hi Aron,

The document looks good for manual installation, but the Ansible installation steps are missing; I would request that those steps be added.

Some Ansible steps:

1. Update the [osds] section in the /etc/ansible/hosts file:

    [osds]
    OSD_Host_1 osd_scenario="lvm" lvm_volumes="[{'data': 'data_lv', 'data_vg': 'example_vg', 'journal': 'journal_lv', 'journal_vg': 'example_vg'}]"
    OSD_Host_2 osd_scenario="lvm" lvm_volumes="[{'data': 'data_lv', 'data_vg': 'example_vg', 'journal': '/dev/sdb1'}]"

    Note: Before running the playbook, make sure the logical volumes and partitions already exist on the respective OSD nodes.

2. Run the Ansible playbook:

    ansible-playbook site.yml --limit osds
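Since the note above says the logical volumes must exist before the playbook runs, a minimal sketch of preparing the example_vg/data_lv/journal_lv layout referenced in lvm_volumes with standard LVM commands. This assumes a spare, unused disk at /dev/sdc (a hypothetical device name) and must run as root on the OSD node; the journal size is only an example:

```shell
# Assumption: /dev/sdc is an unused disk on this OSD node; run as root.
pvcreate /dev/sdc                            # initialize the disk as an LVM physical volume
vgcreate example_vg /dev/sdc                 # volume group named in lvm_volumes
lvcreate -L 5G -n journal_lv example_vg      # journal LV (example size)
lvcreate -l 100%FREE -n data_lv example_vg   # data LV uses the remaining space
```

After this, OSD_Host_1's lvm_volumes entry resolves to real devices and the site.yml run can proceed.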
Moving the bug to verified state.