Description of problem:
When a cluster was upgraded from RHCS 3.x to 4.x and its filestore OSDs were converted to bluestore with the ceph-ansible playbook, one of the OSDs went offline during the migration and no OSD logs could be found.
Version-Release number of selected component (if applicable):
ceph-4.1-rhel-8-containers-candidate-73222-20200911013858
ceph version 14.2.8-108.el8cp
ceph-ansible-4.0.31-1.el7cp.noarch
How reproducible:
Tried once
Steps to Reproduce: (Steps followed)
1. Bring up a containerized RHCS 3.x cluster with filestore OSDs
2. Upgrade the cluster to 4.x
3. Migrate the filestore OSDs to bluestore (see the example playbook invocations after this list)
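For reference, steps 2 and 3 were driven with the ceph-ansible infrastructure playbooks, roughly as below. This is a sketch: the inventory file name and the OSD node name are placeholders, and the exact invocation (verbosity flags, limit targets) may differ per environment.

  # Step 2: rolling upgrade from RHCS 3.x to 4.x, run from the ceph-ansible directory
  ansible-playbook -i hosts infrastructure-playbooks/rolling_update.yml

  # Step 3: convert filestore OSDs to bluestore, one OSD node at a time
  ansible-playbook -i hosts infrastructure-playbooks/filestore-to-bluestore.yml --limit osd-node-1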
Actual results:
1) One of the OSDs went down
2) No OSD logs were found (see the diagnostic commands below)
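Both symptoms can be confirmed with commands along these lines (OSD id 3 is a placeholder, and the container name follows the ceph-ansible ceph-osd-<id> convention, which may vary):

  # Identify the down OSD from any monitor node
  ceph osd tree | grep down
  ceph health detail

  # On the affected OSD node, containerized OSDs log to journald;
  # an empty result here matches the reported symptom
  journalctl -u ceph-osd@3 --no-pager
  podman logs ceph-osd-3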
Expected results:
1) OSDs must not go down
2) Logs should be captured
Additional info:
Comment 19 (Ameena Suhani S H, 2020-11-23 17:26:10 UTC):
Verified using:
ansible-2.9.15-1.el7ae.noarch
ceph-ansible-4.0.40-1.el7cp.noarch
1. The journalctl logs are available
2. The cluster was healthy when rebooted after switching from RPM-based to containerized deployment (see the example checks after this list)
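A minimal verification along these lines covers both points (the OSD id is a placeholder):

  # Journald now captures the OSD logs on the containerized node
  journalctl -u ceph-osd@3 -n 20 --no-pager

  # Cluster health after the post-switch reboot
  ceph -s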
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0081