Bug 1644623

Summary: [RFE] ‘osd_auto_discovery’ feature with batch command
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Sébastien Han <shan>
Component: Ceph-Ansible
Assignee: Guillaume Abrioux <gabrioux>
Status: CLOSED ERRATA
QA Contact: Ameena Suhani S H <amsyedha>
Severity: medium
Docs Contact:
Priority: medium
Version: 3.2
CC: amsyedha, aschoen, ceph-eng-bugs, dsavinea, edonnell, gabrioux, gmeno, hnallurv, nthomas, rperiyas, shan, tchandra, tserlin, vashastr
Target Milestone: z1
Keywords: FutureFeature
Target Release: 3.3
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: RHEL: ceph-ansible-3.2.26-1.el7cp Ubuntu: ceph-ansible_3.2.26-2redhat1
Doc Type: Enhancement
Doc Text:
.`osd_auto_discovery` now works with the `batch` subcommand
Previously, when `osd_auto_discovery` was activated, the `batch` subcommand did not create OSDs as expected. With this update, when `batch` is used with `osd_auto_discovery`, all the devices found by the `ceph-ansible` utility become OSDs and are passed to `batch` as expected.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-10-22 13:29:00 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1726135    

Description Sébastien Han 2018-10-31 09:26:30 UTC
Description of problem:

This is a follow up on https://bugzilla.redhat.com/show_bug.cgi?id=1631729.


Actual results:

If 'osd_auto_discovery' is activated, no OSDs are created with 'batch'.


Expected results:

All the devices found by Ansible should become OSDs and be passed to 'batch', as sketched below.
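
A minimal sketch of the intended configuration, assuming the standard ceph-ansible 3.x variable names (values are illustrative, not taken from this report):

    # group_vars/osds.yml -- hypothetical example
    osd_scenario: lvm        # ceph-volume based deployment
    osd_auto_discovery: true # let ceph-ansible enumerate unused disks itself
    # With this set, the discovered devices should be handed to
    # "ceph-volume lvm batch" rather than listed explicitly under "devices:".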


Additional info:

Comment 3 Sébastien Han 2018-10-31 09:27:39 UTC
Assigning this to Andrew as a follow-up item since he implemented batch in ceph-ansible.

Comment 5 Harish NV Rao 2018-10-31 09:35:59 UTC
(In reply to leseb from comment #3)
> Assigning this to Andrew as a follow-up item since he implemented batch in
> ceph-ansible.

What is the target release? It currently shows 3.* and the milestone as z1.

Comment 6 Sébastien Han 2018-10-31 09:40:17 UTC
Sorry Harish, the target release is 3.2; updated accordingly.

Comment 7 Christina Meno 2019-01-09 23:07:53 UTC
Andrew, would you please tell us the status of this work?

Comment 8 Andrew Schoen 2019-01-10 16:03:38 UTC
(In reply to Gregory Meno from comment #7)
> Andrew, would you please tell us the status of this work?

This work was never started; I'd estimate it would take around 2-3 days to complete.

Comment 9 Christina Meno 2019-01-14 21:38:45 UTC
Seems to me like this should wait until RHCS 3.3. Seb, is this in preparation for ceph-volume only, or is there another need here that I'm missing?
What do you think?

Comment 10 Sébastien Han 2019-01-16 09:18:30 UTC
Yes, this brings back feature parity, and yes, it is in preparation for the ceph-volume-only work.
We can wait for 3.3; I think that will be better so we don't have to do any backports in 3.2.

I just changed the target accordingly. Thanks, Greg, for the suggestion.

Comment 15 Vasishta 2019-07-19 08:46:17 UTC
Verified using ceph-ansible-3.2.20-1.el7cp.noarch by adding new OSDs with add-osd.yml.
Moving to VERIFIED state.
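
For reference, a hypothetical shape of such a verification setup (host names are placeholders; the add-osd.yml invocation is an assumption based on how ceph-ansible infrastructure playbooks are typically run):

    # inventory.yml -- hypothetical YAML-format inventory
    osds:
      hosts:
        magna-osd-1:   # existing OSD node
        magna-osd-2:   # newly added node for add-osd.yml to deploy
    # The new node would then be deployed with something like:
    #   ansible-playbook infrastructure-playbooks/add-osd.yml --limit magna-osd-2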

Comment 16 Vasishta 2019-08-12 05:45:53 UTC
Hi Guillaume,

Sorry, re-opening the BZ. When I tried to add a new OSD, the ceph-validate task failed with:

"msg": "[magna033] Validation failed for variable: item[0]\n[magna033] Reason: -> item[0] key did not match 'lvm_volumes' (required item in schema is missing: lvm_volumes)\n",


Regards,
Vasishta Shastry
QE, Ceph
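
For context, a hypothetical sketch of the configuration that, going by the error above, appears to trip ceph-validate (whether this exact combination reproduces the failure is an assumption):

    # group_vars/osds.yml -- hypothetical reproduction
    osd_scenario: lvm
    osd_auto_discovery: true
    # No "lvm_volumes:" entry is defined because auto-discovery is meant
    # to supply the devices, yet the validation schema still requires it.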

Comment 26 Ameena Suhani S H 2019-09-26 04:47:34 UTC
Hi,

Verified using ceph-ansible-3.2.27-1.el7cp.noarch by creating new OSDs and adding OSDs.

Moving to VERIFIED state.

Comment 28 errata-xmlrpc 2019-10-22 13:29:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3173