Bug 1654660

Summary: [RFE] Colocation of different Ceph daemons on containerized deployment
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Mikel Olasagasti <molasaga>
Component: Cephadm
Assignee: Adam King <adking>
Status: CLOSED ERRATA
QA Contact: Rahul Lepakshi <rlepaksh>
Severity: medium
Docs Contact: Ranjini M N <rmandyam>
Priority: medium
Version: 3.1
CC: adking, agunn, flucifre, gmeno, mhackett, pasik, rlepaksh, rmandyam, tserlin, vereddy, vhernand
Keywords: FutureFeature
Target Release: 5.1
Hardware: x86_64
OS: Linux
Fixed In Version: ceph-16.2.6-2.el8cp
Doc Type: Enhancement
Doc Text:
.`cephadm` supports colocating multiple daemons on the same host

With this release, multiple daemons, such as Ceph Object Gateway and Ceph Metadata Server (MDS), can be deployed on the same host, providing an additional performance benefit.

.Example
----
service_type: rgw
placement:
  label: rgw
  count_per_host: 2
----

For single-node deployments, `cephadm` requires at least two running Ceph Manager daemons in upgrade scenarios. This is still highly recommended outside of upgrade scenarios, but the storage cluster will function without it.
Last Closed: 2022-04-04 10:19:51 UTC
Type: Bug
Bug Blocks: 1553254, 2031073    

Description Mikel Olasagasti 2018-11-29 11:18:58 UTC
Currently, colocation of one daemon (RGW, MON/MGR, MDS, ...) with an OSD daemon is supported in containerized deployments [1][2].

The customer requests support for colocating any combination of daemons on the same host, removing the restriction that one of the daemons must be an OSD.

The customer presents the following scenario:

- In the current architecture they have four physical nodes (one OSD-only and three OSD+MON) plus two physical nodes (RGW).
- They would like to move to three physical nodes (RGW+MON) and four physical nodes (OSD); a spec sketch follows the references below.
- They plan to increase the number of OSD nodes in the future.

[1] https://red.ht/2PYhlw9
[2] https://access.redhat.com/solutions/2109351
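
For illustration only: with cephadm (RHCS 5), the customer's target layout could be expressed as service specifications. A minimal sketch, assuming hypothetical hostnames and labels (mon, rgw, osd) that are first applied with `ceph orch host label add <host> <label>`, then applied in one file with `ceph orch apply -i cluster-spec.yaml`:

service_type: mon
placement:
  label: mon          # hypothetical label on the three RGW+MON nodes
---
service_type: rgw
service_id: myrgw     # hypothetical service id
placement:
  label: rgw          # same three nodes as the MONs
---
service_type: osd
service_id: default_drives   # hypothetical drive group id
placement:
  label: osd          # the four OSD-only nodes
spec:
  data_devices:
    all: true

Because placement is label-driven, adding OSD nodes later only requires labeling the new hosts.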

Comment 5 Giridhar Ramaraju 2019-08-05 13:11:52 UTC
Updating the QA Contact to Hemant. Hemant will reroute this to the appropriate QE associate.

Regards,
Giri

Comment 9 Federico Lucifredi 2020-01-23 04:07:47 UTC
With RHCS 4.0, the container co-location requirements have been further relaxed to allow a second co-located daemon in three-node clusters, in the following form:

N OSD + 1 additional scale-out daemon + RGW
N OSD + 1 additional scale-out daemon + RGW
N OSD + 1 additional scale-out daemon + RGW

Arbitrary co-location of any daemon with any other daemon produces a combinatorial explosion of testing possibilities. The customer is invited to share what particular combination they are interested in for our review.
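
To illustrate the RHCS 4 form above: in ceph-ansible terms it corresponds to listing the same hosts in several daemon groups of the inventory. A minimal sketch, assuming hypothetical hostnames node1-node3 and MDS as the additional scale-out daemon:

[osds]
node1
node2
node3

[mdss]
node1
node2
node3

[rgws]
node1
node2
node3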

Comment 12 Sebastian Wagner 2021-10-13 15:31:56 UTC
This is somewhat of a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1894560 and is already implemented.

MGR: implemented but not supported
MON: not supported
RGW: supported
MDS: supported
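
As a sketch of the supported combinations (RGW and MDS), two cephadm specs can simply target the same labeled hosts; the service ids and the shared label here are hypothetical, applied with `ceph orch apply -i <file>.yaml`:

service_type: rgw
service_id: myrgw     # hypothetical
placement:
  label: gateway      # hypothetical label shared by both services
---
service_type: mds
service_id: myfs      # hypothetical; matches the CephFS file system name
placement:
  label: gateway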

Comment 16 Rahul Lepakshi 2021-12-17 07:25:19 UTC
Based on sanity runs from CI on 5.1 and on previous releases, colocation of different daemons is already supported.
Moving the BZ to VERIFIED.
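
The result can be checked on a live cluster with standard orchestrator commands (service and host names will differ per deployment):

# Services and their placement/count summaries
ceph orch ls
# All daemons with the host each one runs on; colocated daemons share a HOST value
ceph orch ps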

Comment 18 Sebastian Wagner 2022-01-27 09:11:21 UTC
Upstream doc PR: https://github.com/ceph/ceph/pull/44801

Comment 26 errata-xmlrpc 2022-04-04 10:19:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174