Bug 1850955

Summary: add-osd playbook failed
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Manjunatha <mmanjuna>
Component: Ceph-Volume
Assignee: Andrew Schoen <aschoen>
Status: CLOSED ERRATA
QA Contact: Manasa <mgowri>
Severity: medium
Docs Contact: Aron Gunn <agunn>
Priority: high
Version: 4.0
CC: agunn, aschoen, ceph-eng-bugs, ceph-qe-bugs, gabrioux, gmeno, gsitlani, mgowri, mhackett, njajodia, nthomas, takirby, tserlin, vumrao, ykaul
Target Milestone: z2
Target Release: 4.1
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version: ceph-14.2.8-99.el8cp, ceph-14.2.8-99.el7cp
Doc Type: Bug Fix
Doc Text:
.The `ceph-volume` command is treating a logical volume as a raw device
The `ceph-volume` command was treating a logical volume as a raw device, which caused the `add-osds.yml` playbook to fail and prevented additional Ceph OSDs from being added to the storage cluster. With this release, the code bug in `ceph-volume` was fixed so that it handles logical volumes properly, and the `add-osds.yml` playbook can be used to add Ceph OSDs to the storage cluster (see the command sketch below the metadata fields).
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-10-08 17:14:46 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1816167
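
As noted in the Doc Text, the failure involved `ceph-volume` treating a logical volume as a raw device. A minimal sketch of the affected scenario is creating an OSD on a pre-made logical volume; the volume group and logical volume names below are hypothetical, not taken from this bug:

  # hypothetical VG/LV names; with the fix, ceph-volume treats vg_osd/lv_osd1 as a logical volume, not a raw device
  ceph-volume lvm create --bluestore --data vg_osd/lv_osd1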

Comment 50 Manasa 2020-09-07 09:11:39 UTC
Tested the add-osd playbook and the site.yml playbook with the --limit osds option, using the Ansible and Ceph versions given below.

ansible-2.8.15-1.el8ae.noarch
ceph-ansible-4.0.25.2-1.el8cp.noarch

ceph version 14.2.8-91.el8cp 

The playbooks executed successfully and the OSDs were added to the cluster.
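
For reference, a minimal sketch of how such runs are typically invoked from the ceph-ansible directory; the inventory file name, playbook paths, and limit targets below are assumptions, not taken from the attached logs:

  # add new OSDs with the dedicated playbook (hypothetical inventory and node name)
  ansible-playbook -i hosts infrastructure-playbooks/add-osd.yml --limit osd-node-04

  # or re-run the main playbook restricted to the osds group
  ansible-playbook -i hosts site.yml --limit osds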

Comment 53 Manasa 2020-09-07 10:11:03 UTC
Attached logs for verification of the playbooks for the add-OSD scenario.

Comment 57 errata-xmlrpc 2020-09-30 17:26:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 4.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4144

Comment 62 Red Hat Bugzilla 2023-09-14 06:02:54 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.