Bug 1940335 - [cephadm] - 5.0 - Applying services using --placement option does not apply mon daemon when there are two nodes to be applied as mon
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.0
Assignee: Adam King
QA Contact: Manasa
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-03-18 08:37 UTC by Manasa
Modified: 2021-08-30 08:29 UTC
CC List: 4 users

Fixed In Version: ceph-16.1.0-997.el8cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:29:10 UTC
Embargoed:




Links:
Red Hat Issue Tracker RHCEPH-1188 (last updated 2021-08-30 00:16:03 UTC)
Red Hat Product Errata RHBA-2021:3294 (last updated 2021-08-30 08:29:19 UTC)

Description Manasa 2021-03-18 08:37:22 UTC
Description of problem:
After bootstrap, when we try to apply the mon daemon across two nodes, the monitor does not get deployed on both of them.
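
For reference, the count-based placement used here can also be expressed as a service specification; the following is only an illustrative sketch of the equivalent spec (the file name mon-spec.yaml is made up for the example), not something taken from this report:

# mon-spec.yaml -- count-based placement equivalent to "--placement=3"
service_type: mon
placement:
  count: 3

# Apply the spec file:
ceph orch apply -i mon-spec.yaml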

Version-Release number of selected component (if applicable):
ceph version 16.1.0-603.el8cp

How reproducible:
3/3

Steps to Reproduce:
1. Bootstrap a ceph cluster using cephadm
At this stage there is only one node in the cluster
2. Apply the mon service with a placement count: "ceph orch apply mon --placement=3"
3. Add the second node to the cluster: "ceph orch host add <hostname>"
4. Check whether a mon daemon is automatically deployed on the second node (a consolidated command sketch follows below)
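
The steps above can be condensed into the command sequence below; this is a rough sketch, with <first-node-ip> and <second-hostname> as placeholders rather than values from this report:

# 1. Bootstrap a single-node cluster on the first host
cephadm bootstrap --mon-ip <first-node-ip>

# 2. Request three monitors via a count-based placement
ceph orch apply mon --placement=3

# 3. Add the second host to the cluster
ceph orch host add <second-hostname>

# 4. Check whether a mon daemon was scheduled on the new host
ceph orch ls mon
ceph orch ps | grep mon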

Actual results:
The newly added second node does not get a mon daemon deployed. The following message is seen in the mgr logs:

Deploying 1 monitor(s) instead of 2 so monitors may achieve consensus
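
One quick way to see the mismatch between the requested and running monitor count (generic commands, shown here as a sketch rather than output from this cluster):

# Service-level view: running vs. expected mon daemons
ceph orch ls mon

# Cluster status also reports how many mons are currently up
ceph -s

# Recent cephadm log entries, where the "Deploying 1 monitor(s) ..." message shows up
ceph log last cephadm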

Expected results:
A mon daemon gets deployed on the second node.

Additional info:
This issue does not seem to appear upstream.

Comment 2 Adam King 2021-03-18 15:15:38 UTC
This was fixed upstream in https://github.com/ceph/ceph/pull/39979
and backported to Pacific in https://github.com/ceph/ceph/pull/40135.

The backport to Pacific was merged on March 16th, so any new downstream images built from March 17th onward should no longer have this issue.
Once you have a chance to test this with a new image, let us know whether the problem is fixed or still happening.

Comment 3 Manasa 2021-03-19 06:38:43 UTC
Build is not available yet.

Comment 4 Ken Dreyer (Red Hat) 2021-03-19 18:27:33 UTC
Sage backported PR 39979 to pacific in https://github.com/ceph/ceph/pull/40135. This will be in the next weekly rebase I build downstream (March 22nd).

Comment 7 Manasa 2021-03-24 11:59:36 UTC
Verified using the latest ceph compose. 
ceph version 16.1.0-1084.el8cp (899d93a5c7913d6952438f4b48d29d1cef2aaa2a) pacific (rc)

The functionality works as expected. Marking the bug as verified.

Comment 9 errata-xmlrpc 2021-08-30 08:29:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

