Bug 1564214 - [ceph-ansible] : osd scenario -lvm : playbook failing when initiated second time
Summary: [ceph-ansible] : osd scenario -lvm : playbook failing when initiated second time
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 3.2
Assignee: Sébastien Han
QA Contact: Vasishta
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Depends On:
Blocks: 1557269 1629656
 
Reported: 2018-04-05 16:52 UTC by Vasishta
Modified: 2019-01-03 19:01 UTC
CC List: 13 users

Fixed In Version: RHEL: ceph-ansible-3.2.0-0.1.rc1.el7cp Ubuntu: ceph-ansible_3.2.0~rc1-2redhat1
Doc Type: Bug Fix
Doc Text:
.Expanding clusters deployed with `osd_scenario: lvm` works
Previously, the `ceph-ansible` utility could not expand a cluster that was deployed by using the `osd_scenario: lvm` option. The underlying source code has been modified, and clusters deployed with `osd_scenario: lvm` can be expanded as expected.
Clone Of:
Environment:
Last Closed: 2019-01-03 19:01:22 UTC
Embargoed:


Attachments
File contains contents of ansible-playbook log (873.44 KB, text/plain)
2018-04-05 16:52 UTC, Vasishta
File contains contents of ansible-playbook log (476.98 KB, text/plain)
2018-04-09 04:53 UTC, Vasishta


Links
Ceph Project Bug Tracker 23140 (last updated 2018-04-05 20:29:07 UTC)
Red Hat Product Errata RHBA-2019:0020 (last updated 2019-01-03 19:01:45 UTC)

Description Vasishta 2018-04-05 16:52:13 UTC
Created attachment 1417857 [details]
File contains contents of ansible-playbook log

Description of problem:
When the playbook was rerun to add nodes, the task 'create filestore osds with dedicated journal' failed because it tried to create OSDs on LVs and disk partitions that are already being used by existing OSDs.

Version-Release number of selected component (if applicable):
ceph-ansible-3.0.28-1.el7cp.noarch

How reproducible:
Always (1/1)

Steps to Reproduce:
1. Configure ceph-ansible to deploy a Ceph cluster with at least one OSD that uses an LV as the data device and a disk partition as the journal (see the sketch below).
2. Once the cluster is up, rerun the playbook.
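
For reference, a minimal osds.yml of the kind described in step 1 might look like the following; the VG/LV names, the journal partition, and the explicit osd_objectstore setting are illustrative placeholders, not values taken from this report:

osd_scenario: lvm
osd_objectstore: filestore        # filestore OSD with a dedicated journal
lvm_volumes:
   - data: data-lv1               # logical volume used as the OSD data device
     data_vg: data-vg             # volume group containing data-lv1
     journal: /dev/sdc1           # pre-created disk partition used as the journal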

Actual results:
TASK [ceph-osd : use ceph-volume to create filestore osds with dedicated journals] tries to create an OSD on an LV and a disk partition that are already being used by another OSD.

Expected results:
The task should be skipped for devices that are already in use.

Comment 3 Andrew Schoen 2018-04-05 19:26:21 UTC
The PRs that fix this have not been backported to the stable-3.0 branch. However, even if they were, you could not use a partition or raw device for 'data' and expect the playbook to be idempotent until https://github.com/ceph/ceph/pull/20620 makes it into a release.
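
For context, the idempotency those PRs introduce amounts to skipping lvm_volumes entries that ceph-volume already knows about. A rough sketch of such a guard, not the actual ceph-ansible tasks and with a deliberately simplified membership check, could look like this:

# List OSDs already prepared on this host so already-used LVs can be skipped.
- name: list logical volumes already prepared as OSDs
  command: ceph-volume lvm list --format=json
  register: ceph_volume_lvm_list
  changed_when: false
  failed_when: false

# Only create an OSD for entries whose data LV does not appear in that output.
- name: use ceph-volume to create filestore osds with dedicated journals
  command: >
    ceph-volume lvm create --filestore
    --data {{ item.data_vg }}/{{ item.data }}
    --journal {{ item.journal_vg }}/{{ item.journal }}
  with_items: "{{ lvm_volumes }}"
  when: item.data not in ceph_volume_lvm_list.stdout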

Comment 4 Ken Dreyer (Red Hat) 2018-04-05 20:29:08 UTC
That PR 20620 will be in Ceph v12.2.5 upstream.

Comment 5 Vasishta 2018-04-06 06:29:57 UTC
(In reply to Vasishta from comment #0)
 
> Description of problem:
> When playbook was initiated to add nodes, the task 'create filestore osds
> with dedicated journal' failed trying to create OSD on lvs and disk
> partitions which are being used by existing OSDs.
> 

With this issue, users won't be able to successfully add new nodes to the cluster when the OSDs have their data on logical volumes and their journals on disk partitions.

Comment 6 Harish NV Rao 2018-04-06 09:09:54 UTC
(In reply to Ken Dreyer (Red Hat) from comment #4)
> That PR 20620 will be in Ceph v12.2.5 upstream.

@Ken, does that mean we will not have the fix for this in z2?

As per comment 5, this bug limits the ability to expand the cluster. Is there a way we can get the fix in z2?

Comment 9 Vasishta 2018-04-09 04:53:11 UTC
Created attachment 1419115 [details]
File contains contents of ansible-playbook log

Not able to expand the cluster even when both data and journal were on logical volumes.

Failing while running the same task, which should have been skipped as per my understanding.

$ cat /usr/share/ceph-ansible/group_vars/osds.yml | egrep -v ^# | grep -v ^$
---
dummy:
copy_admin_key: true
osd_scenario: lvm
lvm_volumes:
   - data: data1
     data_vg: d_vg
     journal: journal1
     journal_vg: j_vg
   - data: data2
     data_vg: d_vg
     journal: journal2
     journal_vg: j_vg
   - data: data3
     data_vg: d_vg
     journal: journal3
     journal_vg: j_vg

Comment 11 Sébastien Han 2018-04-19 08:55:37 UTC
Not sure I fully got what happened here; Andrew has more knowledge than I do of that ceph-ansible code and of the BZ itself.

Andrew, could you please fill out the Doc Text field for me?
Thanks

Comment 12 Vasu Kulkarni 2018-10-30 18:36:25 UTC
We have to fix the idempotency of rerunning the playbook here; we rely on that for other add/remove operations.

Comment 13 Sébastien Han 2018-10-31 10:44:16 UTC
Fixed in https://github.com/ceph/ceph-ansible/releases/tag/v3.2.0rc1

Comment 16 Sébastien Han 2018-11-16 15:45:34 UTC
lgtm, thanks

Comment 17 Vasishta 2018-11-28 02:18:11 UTC
Working fine with the lvm-batch scenario, moving to VERIFIED state.
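
For reference, the lvm-batch form used for this verification is the one where whole devices are listed and ceph-volume lvm batch carves out the LVs itself; the device names below are only an illustration, not the actual test configuration:

osd_scenario: lvm
devices:
  - /dev/vdb
  - /dev/vdc
  - /dev/vdd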

Regards,
Vasishta Shastry
QE, Ceph

Comment 19 errata-xmlrpc 2019-01-03 19:01:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0020

