Bug 1936969 - crash service is deployed with orphan-initial-daemons bootstrap option
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 5.0
Assignee: Daniel Pivonka
QA Contact: Sunil Kumar Nagaraju
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-03-09 15:10 UTC by Sunil Kumar Nagaraju
Modified: 2021-08-30 08:28 UTC
CC: 3 users

Fixed In Version: ceph-16.1.0-736.el8cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:28:49 UTC
Embargoed:




Links:
Red Hat Issue Tracker RHCEPH-1183 (last updated 2021-08-30 00:15:53 UTC)
Red Hat Product Errata RHBA-2021:3294 (last updated 2021-08-30 08:28:57 UTC)

Comment 1 Daniel Pivonka 2021-03-11 12:28:59 UTC
upstream fix PR: https://github.com/ceph/ceph/pull/39649

Comment 4 Sunil Kumar Nagaraju 2021-03-22 04:49:35 UTC
The crash service was not deployed after bootstrapping with --orphan-initial-daemons; marking this BZ as Verified.

http://magna002.ceph.redhat.com/cephci-jenkins/cephci-run-1616378582794/Cephadm_Bootstrap_0.log


2021-03-21 22:13:11,669 - ceph.ceph - INFO - Running command cephadm --verbose --image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-74545-20210316000345 bootstrap --registry-json /tmp/tmp7htroc3_.json --mon-ip 10.0.210.143 --orphan-initial-daemons --skip-monitoring-stack --initial-dashboard-user admin123 --initial-dashboard-password admin123 --fsid f64f341c-655d-11eb-8778-fa163e914bcc on 10.0.210.143

2021-03-21 22:15:12,428 - ceph.ceph - INFO - Running command cephadm -v shell -- ceph orch ls -f yaml on 10.0.210.143
2021-03-21 22:15:15,086 - ceph.ceph - INFO - Command completed successfully
2021-03-21 22:15:15,087 - ceph.ceph_admin - INFO - service_type: mgr
service_name: mgr
placement:
  count: 2
unmanaged: true
status:
  container_image_id: 9ee4da417c2767046ae756c00ea507f74f2ada7ff2ea150f3ff98803043b2101
  container_image_name: registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-74545-20210316000345
  created: '2021-03-22T02:17:00.444378Z'
  last_refresh: '2021-03-22T02:17:22.590286Z'
  running: 1
  size: 2
---
service_type: mon
service_name: mon
placement:
  count: 5
unmanaged: true
status:
  container_image_id: 9ee4da417c2767046ae756c00ea507f74f2ada7ff2ea150f3ff98803043b2101
  container_image_name: registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-74545-20210316000345
  created: '2021-03-22T02:16:58.862651Z'
  last_refresh: '2021-03-22T02:17:22.589906Z'
  running: 1
  size: 5
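
The verification above can be sketched as a small check: capture the multi-document YAML that `ceph orch ls -f yaml` prints and confirm no `crash` service appears. The sample output below is abridged from the log in this BZ; the line-based parsing is a minimal sketch (not a full YAML parser) and the function name is hypothetical:

```python
# Minimal sketch: confirm that no "crash" service appears in the
# multi-document YAML emitted by `ceph orch ls -f yaml`.
# The sample text is abridged from the log attached to this BZ.
orch_ls_yaml = """\
service_type: mgr
service_name: mgr
placement:
  count: 2
unmanaged: true
---
service_type: mon
service_name: mon
placement:
  count: 5
unmanaged: true
"""

def deployed_service_types(yaml_text: str) -> list:
    """Collect the service_type value from each '---'-separated document."""
    types = []
    for doc in yaml_text.split("---"):
        for line in doc.splitlines():
            if line.startswith("service_type:"):
                types.append(line.split(":", 1)[1].strip())
    return types

types = deployed_service_types(orch_ls_yaml)
assert "crash" not in types, "crash service deployed despite --orphan-initial-daemons"
print(types)  # expected: ['mgr', 'mon']
```

With --orphan-initial-daemons (and --skip-monitoring-stack) only the bootstrap mon and mgr should be listed, both unmanaged, which matches the output above.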

Comment 6 errata-xmlrpc 2021-08-30 08:28:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

