Bug 1881288 - OSD daemon went down and no OSD daemon log in journalctl nor as a file
Summary: OSD daemon went down and no OSD daemon log in journalctl nor as a file
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 4.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 4.2
Assignee: Dimitri Savineau
QA Contact: Ameena Suhani S H
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-22 05:52 UTC by Vasishta
Modified: 2021-01-12 14:57 UTC
CC List: 9 users

Fixed In Version: ceph-ansible-4.0.38-1.el8cp, ceph-ansible-4.0.38-1.el7cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-01-12 14:57:21 UTC
Embargoed:




Links
Github ceph ceph-ansible pull 5959 (closed): [skip ci] switch2container: disable ceph-osd enabled-runtime. Last updated: 2021-01-26 15:23:59 UTC
Red Hat Product Errata RHSA-2021:0081. Last updated: 2021-01-12 14:57:43 UTC

Description Vasishta 2020-09-22 05:52:32 UTC
Description of problem:
When a cluster was upgraded from 3.x to 4.x and the filestore OSDs were converted to bluestore using the ceph-ansible playbook, one OSD went offline during the conversion and no logs for it were found.
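
For context, a minimal sketch of where the OSD daemon logs would normally be looked for (the OSD id 7 is a placeholder, not taken from this report):

    # journald logs for the OSD's systemd unit
    journalctl -u ceph-osd@7

    # traditional log file on the OSD host
    ls -l /var/log/ceph/ceph-osd.7.log

In this case neither source contained any entries for the affected OSD.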

Version-Release number of selected component (if applicable):
ceph-4.1-rhel-8-containers-candidate-73222-20200911013858
ceph version 14.2.8-108.el8cp
ceph-ansible-4.0.31-1.el7cp.noarch

How reproducible:
Tried once

Steps to Reproduce: (Steps followed)
1. Bring up containerized cluster of RHCS 3.x with filestore OSDs
2. Upgrade cluster to 4.x
3. Migrate filestore OSDs to bluestore
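
As a reference, a hedged sketch of the ceph-ansible invocations behind steps 2 and 3 (the inventory file name and the OSD host passed to --limit are assumptions, not taken from this report):

    # step 2: rolling upgrade of the containerized cluster to 4.x
    ansible-playbook -i hosts infrastructure-playbooks/rolling_update.yml

    # step 3: convert filestore OSDs to bluestore, one OSD node at a time
    ansible-playbook -i hosts infrastructure-playbooks/filestore-to-bluestore.yml --limit osd0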

Actual results:
1) One of the OSDs went down
2) No OSD daemon logs were found, neither in journalctl nor as a file

Expected results:
1) OSDs must not go down
2) Logs should be captured

Additional info:

Comment 19 Ameena Suhani S H 2020-11-23 17:26:10 UTC
Verified using
ansible-2.9.15-1.el7ae.noarch
ceph-ansible-4.0.40-1.el7cp.noarch

1. The journalctl logs are available
2. The cluster was healthy when a reboot was performed after switching from RPM to container
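
For reference, a hedged sketch of the verification flow described above (the inventory file name and OSD id are placeholders):

    # switch the cluster from rpm-based to containerized daemons
    ansible-playbook -i hosts infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml

    # after the reboot, OSD logs now land in journald
    journalctl -u ceph-osd@0

    # and the cluster reports healthy
    ceph -s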

Comment 21 errata-xmlrpc 2021-01-12 14:57:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0081

