Bug 1677431 - [RFE] Requesting playbooks to scale up and scale down Ceph Manager, RGW, and RBD-Mirror services
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: rc
Target Release: 4.0
Assignee: Dimitri Savineau
QA Contact: Ameena Suhani S H
Docs Contact: Erin Donnelly
URL:
Whiteboard:
Depends On: 1787516 1787567
Blocks: 1624388 1730176
 
Reported: 2019-02-14 21:55 UTC by John Fulton
Modified: 2020-01-31 12:46 UTC (History)
CC: 16 users

Fixed In Version: ceph-ansible-4.0.7-1.el8cp, ceph-ansible-4.0.7-1.el7cp
Doc Type: Enhancement
Doc Text:
.Ansible playbooks for scaling of all Ceph services
Previously, `ceph-ansible` playbooks offered scale up and scale down features only for core Ceph services, such as Monitors and OSDs. With this update, additional Ansible playbooks allow for scaling of all Ceph services.
Clone Of:
Environment:
Last Closed: 2020-01-31 12:45:38 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github ceph ceph-ansible pull 4905 0 None closed shrink-rgw: refact global workflow 2020-09-30 18:47:56 UTC
Github ceph ceph-ansible pull 4907 0 None closed shrink-rgw: refact global workflow (bp #4905) 2020-09-30 18:47:55 UTC
Red Hat Product Errata RHBA-2020:0312 0 None None None 2020-01-31 12:46:06 UTC

Description John Fulton 2019-02-14 21:55:02 UTC
RHEL 8 does not offer an in-place upgrade from RHEL 7, so a customer who wishes to upgrade must re-provision each server with RHEL 8. How will this be handled when each of those RHEL 7 servers is part of a Ceph cluster?

In that scenario, it is better to tell Ceph that the service has been removed and then add it back as if it were new, rather than simply turning the node off, because Ceph will keep looking for the missing service.

ceph-ansible has infrastructure playbooks to remove parts of a cluster and add those parts back, which is useful when nodes are reprovisioned. However, we don't have such playbooks for all Ceph daemons. This RFE tracks a request to cover the missing daemons, particularly the ones that Red Hat OpenStack Platform deploys.

I. scale up:

- ceph-osd: https://github.com/ceph/ceph-ansible/blob/master/infrastructure-playbooks/add-osd.yml
- ceph-mon: https://github.com/ceph/ceph-ansible/pull/3547
- ceph-mds: https://github.com/ceph/ceph-ansible/pull/3599
- ceph-rgw: missing
- ceph-rbd-mirror: missing
- ceph-mgr: missing

II. scale down:

- ceph-osd: https://github.com/ceph/ceph-ansible/blob/master/infrastructure-playbooks/shrink-osd.yml
- ceph-mon: https://github.com/ceph/ceph-ansible/blob/master/infrastructure-playbooks/shrink-mon.yml
- ceph-mds: missing
- ceph-rgw: missing
- ceph-rbd-mirror: missing
- ceph-mgr: missing
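For context, the infrastructure playbooks listed above are run with `ansible-playbook` against the cluster inventory, usually passing the daemon to remove as an extra variable. The sketch below only prints the commands it would run; the hostname, OSD id, and inventory path are hypothetical examples, and the exact `-e` variable names (`osd_to_kill`, `mon_to_kill`) should be confirmed against each playbook's header before use.

```shell
#!/bin/sh
# Dry-run sketch of a reprovisioning cycle with ceph-ansible infrastructure
# playbooks. Hostname, OSD id, and inventory path are hypothetical.
INVENTORY=/usr/share/ceph-ansible/hosts

run() {
    # Print the command instead of executing it, so the workflow can be
    # reviewed before it touches the cluster.
    echo "ansible-playbook -i $INVENTORY $*"
}

# Scale down: remove the daemons from the node that will be reprovisioned.
run infrastructure-playbooks/shrink-osd.yml -e osd_to_kill=1
run infrastructure-playbooks/shrink-mon.yml -e mon_to_kill=ceph-node1

# Scale up: add the node back after it has been reinstalled with RHEL 8.
run infrastructure-playbooks/add-osd.yml --limit ceph-node1
```

Replacing the `echo` in `run` with a real invocation executes the playbooks; running the scale-down and scale-up halves on either side of the OS reinstall gives the remove-and-re-add cycle described above.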

Comment 5 Rishabh Dave 2019-05-06 12:58:28 UTC
scale up MGR PR - https://github.com/ceph/ceph-ansible/pull/3931
scale up RBD mirror PR - https://github.com/ceph/ceph-ansible/pull/3807
scale up RGW PR - https://github.com/ceph/ceph-ansible/pull/3889

Comment 6 Rishabh Dave 2019-05-08 13:09:04 UTC
The above PRs are merged in master and stable-4.0 so this BZ is partially fulfilled.

Comment 8 Rishabh Dave 2019-06-03 06:00:27 UTC
Playbooks for scaling down MGR, RBD-mirror, and RGW need to be written.

Comment 10 Giridhar Ramaraju 2019-08-05 13:11:16 UTC
Updating the QA Contact to Hemant. Hemant will reroute it to the appropriate QE Associate.

Regards,
Giri


Comment 34 errata-xmlrpc 2020-01-31 12:45:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0312

