Created attachment 1417857 [details]
File contains contents of ansible-playbook log

Description of problem:
When the playbook was initiated to add nodes, the task 'create filestore osds with dedicated journal' failed, trying to create OSDs on logical volumes and disk partitions that are already being used by existing OSDs.

Version-Release number of selected component (if applicable):
ceph-ansible-3.0.28-1.el7cp.noarch

How reproducible:
Always (1/1)

Steps to Reproduce:
1. Configure ceph-ansible to bring up a Ceph cluster with at least one OSD that has a logical volume as its data part and a disk partition as its journal part
2. Once the cluster is up, rerun the playbook

Actual results:
TASK [ceph-osd : use ceph-volume to create filestore osds with dedicated journals] tries to create an OSD on a logical volume and disk partition which are already being used by another OSD

Expected results:
The task must be skipped
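For reference, an osds.yml along these lines should reproduce the layout in step 1; it is a minimal sketch, and the volume group, logical volume, and partition names below are placeholders, not taken from this report:

osd_scenario: lvm
osd_objectstore: filestore
lvm_volumes:
  - data: data1          # placeholder: logical volume holding the OSD data
    data_vg: d_vg        # placeholder: volume group containing that LV
    journal: /dev/sdb1   # placeholder: raw disk partition used as the dedicated journal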
The PRs that fix this have not been backported to the stable-3.0 branch. However, even if they were, you could not use a partition or raw device for 'data' and expect the playbook to be idempotent until https://github.com/ceph/ceph/pull/20620 makes it into a release.
That PR 20620 will be in Ceph v12.2.5 upstream.
(In reply to Vasishta from comment #0)
> Description of problem:
> When playbook was initiated to add nodes, the task 'create filestore osds
> with dedicated journal' failed trying to create OSD on lvs and disk
> partitions which are being used by existing OSDs.

With this issue, users won't be able to successfully add new nodes to the cluster with OSDs that have their data part on logical volumes and their journal on disk partitions.
(In reply to Ken Dreyer (Red Hat) from comment #4)
> That PR 20620 will be in Ceph v12.2.5 upstream.

@Ken, does this mean we will not have the fix for this in z2? As per comment 5, this bug limits the ability to expand the cluster. Is there a way we can get the fix into z2?
Created attachment 1419115 [details]
File contains contents of ansible-playbook log

Not able to expand the cluster even when data and journal were on logical volumes. It fails while running the same task, which should have been skipped as per my understanding.

$ cat /usr/share/ceph-ansible/group_vars/osds.yml | egrep -v ^# | grep -v ^$
---
dummy:
copy_admin_key: true
osd_scenario: lvm
lvm_volumes:
  - data: data1
    data_vg: d_vg
    journal: journal1
    journal_vg: j_vg
  - data: data2
    data_vg: d_vg
    journal: journal2
    journal_vg: j_vg
  - data: data3
    data_vg: d_vg
    journal: journal3
    journal_vg: j_vg
I'm not sure I fully understood what happened here; Andrew knows that ceph-ansible code and this BZ better than I do. Andrew, could you please fill out the Doc Text field for me? Thanks
We have to fix the idempotency of rerunning the playbook here; we rely on that for other add/remove operations.
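As a rough illustration of the kind of guard this implies (a sketch only, not the actual ceph-ansible change; the first task and the registered variable name are hypothetical), the create task could be skipped for volumes that ceph-volume already reports as in use:

- name: list OSDs already created by ceph-volume   # hypothetical helper task, for illustration
  command: ceph-volume lvm list --format=json
  register: ceph_volume_lvm_list
  changed_when: false

- name: use ceph-volume to create filestore osds with dedicated journals
  command: "ceph-volume lvm create --filestore --data {{ item.data_vg }}/{{ item.data }} --journal {{ item.journal_vg }}/{{ item.journal }}"
  with_items: "{{ lvm_volumes }}"
  # skip any data LV that already appears in the ceph-volume inventory
  when: (item.data_vg ~ '/' ~ item.data) not in ceph_volume_lvm_list.stdout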
Fixed in https://github.com/ceph/ceph-ansible/releases/tag/v3.2.0rc1
lgtm, thanks
Working fine with the lvm-batch scenario; moving to VERIFIED state.

Regards,
Vasishta Shastry
QE, Ceph
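For context, an lvm-batch style configuration for this verification would look roughly like the following; this is an assumed example, with placeholder device paths not taken from the report. With ceph-ansible 3.2, listing whole devices under the lvm scenario makes the playbook call ceph-volume lvm batch, which skips devices that already hold OSDs on reruns:

osd_scenario: lvm
osd_objectstore: filestore
devices:
  - /dev/sdb   # placeholder devices; ceph-volume lvm batch creates OSDs here
  - /dev/sdc   # and leaves them untouched on subsequent playbook runs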
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0020