Bug 1644623
| Summary: | [RFE] `osd_auto_discovery` feature with batch command | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Sébastien Han <shan> |
| Component: | Ceph-Ansible | Assignee: | Guillaume Abrioux <gabrioux> |
| Status: | CLOSED ERRATA | QA Contact: | Ameena Suhani S H <amsyedha> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.2 | CC: | amsyedha, aschoen, ceph-eng-bugs, dsavinea, edonnell, gabrioux, gmeno, hnallurv, nthomas, rperiyas, shan, tchandra, tserlin, vashastr |
| Target Milestone: | z1 | Keywords: | FutureFeature |
| Target Release: | 3.3 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | RHEL: ceph-ansible-3.2.26-1.el7cp Ubuntu: ceph-ansible_3.2.26-2redhat1 | Doc Type: | Enhancement |
| Doc Text: | .`osd_auto_discovery` now works with the `batch` subcommand<br>Previously, when `osd_auto_discovery` was activated, the `batch` subcommand did not create OSDs as expected. With this update, when `batch` is used with `osd_auto_discovery`, all the devices found by the `ceph-ansible` utility become OSDs and are passed to `batch` as expected. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-10-22 13:29:00 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1726135 | | |
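
To make the doc text above concrete: the enhancement targets configurations along the following lines. This is a minimal sketch assuming a ceph-ansible 3.x LVM-based deployment; the file path and comments are illustrative, not taken from this report.

```yaml
# group_vars/osds.yml -- minimal sketch, illustrative values
osd_scenario: lvm         # ceph-volume based deployment (RHCS 3.x)
osd_auto_discovery: true  # discover unused disks instead of listing them

# With this fix, every device ceph-ansible discovers is handed to
# `ceph-volume lvm batch`, so no explicit `devices:` or
# `lvm_volumes:` list is needed.
```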
Description
Sébastien Han
2018-10-31 09:26:30 UTC

Assigning this to Andrew as a follow-up item since he implemented batch in ceph-ansible.

(In reply to leseb from comment #3)
> Assigning this to Andrew as a follow-up item since he implemented batch in ceph-ansible.

What is the target release? It currently shows 3.* and the milestone as z1.

Sorry Harish, the target release is 3.2; updated accordingly.

Andrew, would you please tell us the status of this work?

(In reply to Gregory Meno from comment #7)
> Andrew, would you please tell us the status of this work?

This work was never started; I'd estimate it would take around 2-3 days to complete. It seems to me like this should wait until RHCS 3.3.

Seb, is this in preparation for ceph-volume only, or is there another need here that I'm missing? What do you think?

Yes, this brings back feature parity, and yes, it is in preparation for ceph-volume only. We can wait for 3.3; I think that will be better so we don't do any backports in 3.2. I just changed the target accordingly, thanks Greg for the suggestion.

VERIFIED using ceph-ansible-3.2.20-1.el7cp.noarch by adding new OSDs using add-osd.yml. Moving to VERIFIED state.

Hi Guillaume,

Sorry, re-opening the BZ. When I tried to add a new OSD, the ceph-validate task failed with:

"msg": "[magna033] Validation failed for variable: item[0]\n[magna033] Reason: -> item[0] key did not match 'lvm_volumes' (required item in schema is missing: lvm_volumes)\n"

Regards,
Vasishta Shastry
QE, Ceph

Hi,

Verified using ceph-ansible-3.2.27-1.el7cp.noarch by newly creating an OSD and adding an OSD. Moving to VERIFIED state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3173
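
For context on the ceph-validate failure quoted in the thread above: the `lvm_volumes` variable it refers to is the alternative, explicit way to declare OSDs in ceph-ansible. A minimal sketch, assuming ceph-ansible 3.2; the logical volume and volume group names are illustrative, not from this report.

```yaml
# group_vars/osds.yml -- minimal sketch, illustrative names
osd_scenario: lvm
lvm_volumes:
  - data: data-lv1     # logical volume holding the OSD data
    data_vg: data-vg1  # volume group containing that LV
```

Judging by the verification comment above, ceph-ansible-3.2.27 accepts `osd_auto_discovery: true` without requiring such an explicit list.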