Bug 1936969

Summary: crash service is deployed with orphan-initial-daemons bootstrap option
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Sunil Kumar Nagaraju <sunnagar>
Component: Cephadm
Assignee: Daniel Pivonka <dpivonka>
Status: CLOSED ERRATA
QA Contact: Sunil Kumar Nagaraju <sunnagar>
Severity: low
Docs Contact: Karen Norteman <knortema>
Priority: unspecified
Version: 5.0
CC: kdreyer, sewagner, vereddy
Target Milestone: ---
Keywords: Automation
Target Release: 5.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-16.1.0-736.el8cp
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-08-30 08:28:49 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Comment 1 Daniel Pivonka 2021-03-11 12:28:59 UTC
upstream fix PR: https://github.com/ceph/ceph/pull/39649
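
For context, a minimal, hypothetical sketch of the behaviour the fix is meant to enforce; this is not the actual PR diff, and the function and spec shapes are illustrative only. The idea is that a bootstrap run with --orphan-initial-daemons should leave the mon/mgr specs unmanaged (as seen in the `ceph orch ls` output in comment 4) and should not apply a crash service spec at all.

def initial_service_specs(orphan_initial_daemons: bool):
    """Hypothetical sketch: specs bootstrap asks the orchestrator to apply."""
    # mon/mgr specs exist either way; with --orphan-initial-daemons they are
    # left unmanaged, matching the YAML output captured in comment 4.
    specs = [
        {"service_type": "mon", "placement": {"count": 5}, "unmanaged": orphan_initial_daemons},
        {"service_type": "mgr", "placement": {"count": 2}, "unmanaged": orphan_initial_daemons},
    ]
    if not orphan_initial_daemons:
        # Only a non-orphaned bootstrap should schedule the crash service;
        # deploying it despite the flag is the bug reported here.
        specs.append({"service_type": "crash", "placement": {"host_pattern": "*"}})
    return specs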

Comment 4 Sunil Kumar Nagaraju 2021-03-22 04:49:35 UTC
The crash service was not deployed after bootstrapping with --orphan-initial-daemons, so marking this BZ as Verified.

http://magna002.ceph.redhat.com/cephci-jenkins/cephci-run-1616378582794/Cephadm_Bootstrap_0.log


2021-03-21 22:13:11,669 - ceph.ceph - INFO - Running command cephadm --verbose --image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-74545-20210316000345 bootstrap --registry-json /tmp/tmp7htroc3_.json --mon-ip 10.0.210.143 --orphan-initial-daemons --skip-monitoring-stack --initial-dashboard-user admin123 --initial-dashboard-password admin123 --fsid f64f341c-655d-11eb-8778-fa163e914bcc on 10.0.210.143

2021-03-21 22:15:12,428 - ceph.ceph - INFO - Running command cephadm -v shell -- ceph orch ls -f yaml on 10.0.210.143
2021-03-21 22:15:15,086 - ceph.ceph - INFO - Command completed successfully
2021-03-21 22:15:15,087 - ceph.ceph_admin - INFO - service_type: mgr
service_name: mgr
placement:
  count: 2
unmanaged: true
status:
  container_image_id: 9ee4da417c2767046ae756c00ea507f74f2ada7ff2ea150f3ff98803043b2101
  container_image_name: registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-74545-20210316000345
  created: '2021-03-22T02:17:00.444378Z'
  last_refresh: '2021-03-22T02:17:22.590286Z'
  running: 1
  size: 2
---
service_type: mon
service_name: mon
placement:
  count: 5
unmanaged: true
status:
  container_image_id: 9ee4da417c2767046ae756c00ea507f74f2ada7ff2ea150f3ff98803043b2101
  container_image_name: registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-74545-20210316000345
  created: '2021-03-22T02:16:58.862651Z'
  last_refresh: '2021-03-22T02:17:22.589906Z'
  running: 1
  size: 5
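
The check above can also be scripted. A minimal sketch, assuming PyYAML and cephadm are available on the bootstrap node and that `cephadm shell -- ceph orch ls -f yaml` returns the multi-document YAML shown in the log excerpt:

import subprocess
import yaml

def crash_service_deployed() -> bool:
    # Same command the test run executes: list service specs as YAML.
    out = subprocess.run(
        ["cephadm", "shell", "--", "ceph", "orch", "ls", "-f", "yaml"],
        check=True, capture_output=True, text=True,
    ).stdout
    # The output is one YAML document per service spec (mgr, mon, ...).
    specs = [doc for doc in yaml.safe_load_all(out) if doc]
    return any(spec.get("service_type") == "crash" for spec in specs)

if __name__ == "__main__":
    if crash_service_deployed():
        raise SystemExit("FAIL: crash service deployed despite --orphan-initial-daemons")
    print("PASS: only the expected mon/mgr specs are present")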

Comment 6 errata-xmlrpc 2021-08-30 08:28:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294