Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
This project is now read-only. Starting Monday, February 2, please use https://ibm-ceph.atlassian.net/ for all bug tracking management.

Bug 1862416

Summary: [GSS][ceph-ansible] Playbook infrastructure-playbooks/shrink-osd.yml failing for Filestore OSD in RHCS 4.1 containerized environment.
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Janmejay Singh <jansingh>
Component: Ceph-Ansible Assignee: Guillaume Abrioux <gabrioux>
Status: CLOSED ERRATA QA Contact: Manasa <mgowri>
Severity: medium Docs Contact: Aron Gunn <agunn>
Priority: unspecified    
Version: 4.1 CC: agunn, aschoen, ceph-eng-bugs, gabrioux, gmeno, gsitlani, mgowri, mmuench, nthomas, tpetr, tserlin, vereddy, ykaul
Target Milestone: z2   
Target Release: 4.1   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: ceph-ansible-4.0.29-1.el8cp, ceph-ansible-4.0.29-1.el7cp Doc Type: Bug Fix
Doc Text:
.Ceph Ansible's `shrink-osd.yml` playbook fails when using FileStore in a containerized environment
A default value was missing in Ceph Ansible's `shrink-osd.yml` playbook, causing a failure when shrinking a FileStore-backed Ceph OSD in a containerized environment. A Ceph OSD previously prepared with `ceph-disk` and `dmcrypt` left the `encrypted` key undefined in the corresponding Ceph OSD file. With this release, a default value was added so the Ceph Ansible `shrink-osd.yml` playbook can run on Ceph OSDs that were prepared with `dmcrypt` in containerized environments.
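The actual patch is not shown in this report; the fix described above is the common Ansible pattern of applying the `default` filter so a fact that may be undefined still evaluates cleanly. A minimal sketch of that pattern, with a hypothetical task and variable names (not the real playbook code):

```yaml
# Hypothetical sketch only: give the possibly-undefined `encrypted`
# key a default so the play does not fail on OSDs prepared with
# ceph-disk and dmcrypt, where the key is absent from the OSD file.
- name: determine whether the OSD device is encrypted
  set_fact:
    osd_encrypted: "{{ encrypted | default(false) | bool }}"
```

Without the `default(false)`, referencing `encrypted` on such an OSD raises an "undefined variable" error and aborts the shrink play.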
Story Points: ---
Clone Of: Environment:
Last Closed: 2020-09-30 17:26:56 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1816167    

Comment 12 errata-xmlrpc 2020-09-30 17:26:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 4.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4144