Description of problem:
[5.3z7][Deployment] OSD not coming up
With the latest build received for 5.3z7, OSDs are not coming up after bootstrap.
Version: 16.2.10-253
REPO: http://download-01.beak-001.prod.iad2.dc.redhat.com/rhel-8/composes/auto/ceph-5.3-rhel-8/RHCEPH-5.3-RHEL-8-20240509.ci.0
CONTAINER_IMAGE: registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.3-rhel-8-containers-candidate-97321-20240509213633
Bootstrap completed:
====================
# cephadm --image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.3-rhel-8-containers-candidate-97321-20240509213633 bootstrap --mon-ip 10.0.209.126
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.3-rhel-8-containers-candidate-97321-20240509213633...
Ceph version: ceph version 16.2.10-253.el8cp (052b07ab0ecb651a9a2692096ba31121fb0c19d9) pacific (stable)
Extracting ceph user uid/gid from container image...
[... output truncated ...]
Or, if you are only running a single cluster on this host:
sudo /usr/sbin/cephadm shell
Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see:
https://docs.ceph.com/en/pacific/mgr/telemetry/
Bootstrap complete.
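At this point the cluster should be reachable from the admin node. A minimal sanity check, using the shell entry point printed by the bootstrap output above (not part of the original report; output will vary per cluster):

# cephadm shell -- ceph -s
# cephadm shell -- ceph orch status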
OSD deployment: OSD not coming up
==================================
# ceph orch device ls
HOST                                              PATH      TYPE  DEVICE ID             SIZE   AVAILABLE  REFRESHED  REJECT REASONS
ceph-mobisht-smb-upstream-t3js66-node1-installer  /dev/vdb  hdd   87f8d426-584e-4de5-b  16.1G  Yes        33s ago
ceph-mobisht-smb-upstream-t3js66-node1-installer  /dev/vdc  hdd   6be3d721-7a5e-4bf1-8  16.1G  Yes        33s ago
ceph-mobisht-smb-upstream-t3js66-node1-installer  /dev/vdd  hdd   0c027691-331b-4831-b  16.1G  Yes        33s ago
ceph-mobisht-smb-upstream-t3js66-node1-installer  /dev/vde  hdd   2c90da47-0fce-4574-9  16.1G  Yes        33s ago
ceph-mobisht-smb-upstream-t3js66-node1-installer  /dev/vdf  hdd   92fb83fc-373f-4400-a  16.1G  Yes        33s ago
ceph-mobisht-smb-upstream-t3js66-node1-installer  /dev/vdg  hdd   a11ab656-7229-4e85-a  16.1G  Yes        33s ago
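All six devices report AVAILABLE Yes with no reject reasons, so the devices themselves look eligible. If a device were being skipped, the wide listing would show per-device reject reasons in more detail (a diagnostic sketch, not part of the original report):

# ceph orch device ls --wide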
# ceph orch host ls
HOST                                              ADDR          LABELS                                                                              STATUS
ceph-mobisht-smb-upstream-t3js66-node1-installer  10.0.209.126  _admin osd installer crash node-exporter grafana mgr alertmanager mon prometheus
ceph-mobisht-smb-upstream-t3js66-node2            10.0.211.53   osd crash node-exporter mgr alertmanager mds mon rgw
ceph-mobisht-smb-upstream-t3js66-node3            10.0.208.176  osd crash node-exporter mds mon rgw
3 hosts in cluster
# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...
# ceph orch ls | grep osd
osd.all-available-devices    0  -  13m  *
(columns: NAME RUNNING REFRESHED AGE PLACEMENT)
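The service was scheduled but still shows 0 running daemons 13 minutes later, so the next step is to check whether cephadm attempted to create the OSDs at all. A few standard checks that narrow this down (a diagnostic sketch; these commands are not in the original report and their output will vary per cluster):

# ceph orch ps --daemon-type osd
# ceph log last cephadm
# ceph health detail
# ceph orch ls osd --export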
Version-Release number of selected component (if applicable):
16.2.10-253
How reproducible:
Always
Steps to Reproduce:
1. Bootstrap a cluster with the 5.3z7 build
2. Deploy OSDs with: ceph orch apply osd --all-available-devices (see the verification sketch after these steps)
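To confirm the failure state after step 2, the deployment can be verified as follows (an assumed verification step, not part of the original report; on an affected build the service shows 0 running daemons and the OSD tree stays empty):

# ceph orch ls osd
# ceph osd tree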
Actual results:
OSDs do not come up; the osd.all-available-devices service remains at 0 running daemons.
Expected results:
OSDs should be created on all available devices and come up.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.3 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:4118
Comment 17 (Red Hat Bugzilla, 2024-10-25 04:25:13 UTC):
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days